
Production Technology Seminar 2009 - EBU Technical



• If there is any corruption in the Dolby E input stream, it is difficult to predict how the decoded audio will sound, because the exact location of the error in the bit stream determines the decoder's behaviour at the output. In the best case this may be a small glitch; in the worst case, a 40 ms mute. The decoder will do its best to conceal the error.

One appropriate place to use Dolby E is in the contribution system for live/sports events [9]. A consistent set-up should ensure:

• That the encoder is clocked to a synchronous reference.

• That the IRD locks to the incoming MPEG-2 Transport Stream, so that the Programme Clock Reference is used by the IRD as the basis for decoding.

• The mapping of AES data into the MPEG-2 TS¹⁴. SMPTE 302M specifies that each audio PES packet should last the same duration as one video PES packet. This holds for interlaced 25 Hz (40 ms for both video and audio) and for progressive 50 Hz (20 ms). Often that causes no problem, but some IRDs re-align the PES packets to time them to a local reference.

• That the encoder is clocked to its input or to a synchronous reference. We suggest requesting that, in Dolby E contribution mode, the audio PES packets last 40 ms so that they encapsulate complete Dolby E frames [9] (see the sketch after this list).
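As a rough illustration of why the 40 ms packet duration matters: at a 48 kHz AES sample rate, one 25 Hz Dolby E frame spans exactly 1920 samples, so a 40 ms PES packet can carry one complete frame, while the 20 ms mapping splits every frame across two packets. A minimal Python sketch (the durations and rates come from the text above; the function names are ours):

```python
# Sketch: does a given audio PES packet duration encapsulate
# complete Dolby E frames (frame rate locked to 25 Hz video)?
SAMPLE_RATE_HZ = 48_000     # AES3 audio sample rate
DOLBY_E_FRAME_MS = 40.0     # one Dolby E frame per 25 Hz video frame

def samples_per_packet(pes_duration_ms: float) -> float:
    return SAMPLE_RATE_HZ * pes_duration_ms / 1000.0

def holds_whole_frames(pes_duration_ms: float) -> bool:
    # True only if every PES packet carries an integer number of frames.
    return pes_duration_ms % DOLBY_E_FRAME_MS == 0

for ms in (40.0, 20.0):     # interlaced 25 Hz vs progressive 50 Hz mapping
    print(f"{ms:.0f} ms PES -> {samples_per_packet(ms):.0f} samples, "
          f"complete Dolby E frames: {holds_whole_frames(ms)}")
```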

After an ingest point or an IRD in a broadcast plant, frame synchronising can improve robustness: always drop two video frames along with one Dolby E frame, and re-align the Dolby E frames to the 25 Hz house reference signal [10].

In the context of a video switching router, switch on 25 Hz frame boundaries or parse the Dolby E input to find the guard bands [11-left]. For editing: use a 25 Hz rate, decode and encode via plug-ins, or use separate A/V edit points [11-right].
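One way a router or monitoring probe can locate those guard bands without decoding is to scan the AES stream for the data-burst preambles defined by SMPTE 337M, since each Dolby E frame sits inside one burst and the guard band is the gap before the next one. A minimal sketch, assuming 16-bit SMPTE 337M packing (sync words Pa = 0xF872 and Pb = 0x4E1F; 20- and 24-bit packings, common for Dolby E, use different sync values):

```python
from typing import List

PA_16 = 0xF872   # SMPTE 337M sync word Pa, 16-bit packing
PB_16 = 0x4E1F   # SMPTE 337M sync word Pb, 16-bit packing

def burst_starts(samples: List[int]) -> List[int]:
    """Indices where a SMPTE 337M data burst begins (Pa followed by Pb)."""
    return [i for i in range(len(samples) - 1)
            if samples[i] == PA_16 and samples[i + 1] == PB_16]

def guard_bands(samples: List[int]) -> List[range]:
    """Gaps between the end of one burst and the start of the next:
    the windows in which a router can switch without cutting a frame."""
    starts = burst_starts(samples)
    bands = []
    for here, nxt in zip(starts, starts[1:]):
        pd_bits = samples[here + 3]     # Pd word: payload length in bits
        end = here + 4 + pd_bits // 16  # Pa, Pb, Pc, Pd + payload words
        if end < nxt:
            bands.append(range(end, nxt))
    return bands
```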

If Dolby E is not practical, use discrete audio (e.g. embedded in HD-SDI) with a separate metadata channel, and ensure the metadata is carried through all equipment. Real-time and file-based audio processors both require metadata. To carry it, SMPTE RDD-6 describes how to transmit Dolby metadata over a real-time serial protocol (e.g. 9-pin RS-485), and SMPTE 2020 specifies the embedding of RDD-6 into the HD-SDI VANC. In the file world, the 'dbmd chunk' allows Dolby metadata to be encapsulated in a section of a .WAV header.

Equipment for embedding and disembedding audio metadata (per SMPTE RDD-6) in the VANC data space (per SMPTE 2020) is available¹⁵. Concerning SMPTE 2020, ensure that:

• The audio (discrete or embedded) to video timing is preserved [14-right-up].

• The metadata is timed correctly to the audio it describes. For example, in the case of a channel configuration change between a 5.1 service and a stereo service, you want to ensure that the home cinema loudspeakers turn 'on' or 'off' along with the audio changes.

• The channel allocation does not become undefined if the metadata is erased [14-right-bottom]. What happens when the SMPTE 2020 embedder loses its serial metadata input: does it switch to an internal metadata preset? (A sketch of that fallback follows this list.)
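The fallback behaviour asked about in the last bullet can be expressed as a small piece of selection logic. This is a hypothetical model of what an embedder should do, not a documented device API; the preset values are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DolbyMetadata:
    program_config: str   # e.g. "5.1" or "2/0 (stereo)"
    dialnorm_db: int      # dialogue level, e.g. -23

# Hypothetical internal preset: a safe default the embedder can insert
# into VANC so the channel allocation never becomes undefined downstream.
PRESET = DolbyMetadata(program_config="2/0 (stereo)", dialnorm_db=-23)

def select_vanc_metadata(serial_in: Optional[DolbyMetadata]) -> DolbyMetadata:
    """Use the live RDD-6 serial feed when present; fall back to the
    internal preset when the serial metadata input is lost."""
    return serial_in if serial_in is not None else PRESET
```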

How can sample-timing accuracy between discrete audio channels affect the audio? You may have to split six audio channels over two embedded HD-SDI groups, four in group 1 and two in group 2. What happens if these two groups are misaligned? [15] If similar audio content is present on all channels (music/drama), the result is effectively one signal plus a delayed version of itself, and downstream stereo down-mixes could sound "phasey", with a comb-filtering effect [15-right-bottom].
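To see where the "phasey" sound comes from: down-mixing a signal with a copy of itself delayed by d samples has the magnitude response 2·|cos(π·f·d/fs)|, with nulls at odd multiples of fs/(2d). A small numpy sketch (the 4-sample misalignment is an assumed example):

```python
import numpy as np

fs = 48_000    # sample rate, Hz
delay = 4      # assumed misalignment between the two HD-SDI groups, samples

# Magnitude response of y[n] = x[n] + x[n - delay]: the stereo down-mix of
# two channels carrying the same content, one of them delayed.
freqs = np.linspace(0, fs / 2, 5)
gains = np.abs(2 * np.cos(np.pi * freqs * delay / fs))

for f, g in zip(freqs, gains):
    print(f"{f:8.0f} Hz: gain {g:.2f}")
# First null at fs / (2 * delay) = 6 kHz; the deeper into the audible band
# the nulls fall, the more obviously "phasey" the down-mix sounds.
```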

For file-based applications, the 'dbmd chunk' encapsulating Dolby metadata in any .WAV file is already implemented in some vendors' equipment software. It can then be re-encapsulated into MXF (SMPTE 382M) via WAV. In the future, XML schemas may be used in automation systems. Possible applications are: post-production editing into file-based processors; Dolby E file-based processors relying on the dbmd chunk; and the interchange and delivery of Dolby metadata in files.
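Since a .WAV file is a RIFF container, locating a dbmd chunk is a plain chunk walk. This sketch, which follows the generic RIFF chunk rules, only finds the chunk and returns its raw payload; it makes no attempt to interpret the Dolby metadata inside:

```python
import struct
from typing import Optional

def find_dbmd_chunk(path: str) -> Optional[bytes]:
    """Walk the RIFF chunks of a .WAV file and return the raw payload of
    the first dbmd chunk, or None if the file carries no Dolby metadata."""
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while len(header := f.read(8)) == 8:
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"dbmd":
                return f.read(chunk_size)
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned
    return None
```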

¹⁴ Linear PCM or other audio/data (SMPTE 337M, Format for non-PCM Audio and Data in AES3 Serial Digital Audio Interface)

¹⁵ Miranda, Evertz…

© EBU 2009 / Production Technology seminar / January 27 - 29, 2009
Reproduction prohibited without written permission of EBU TECHNICAL & EBU TRAINING
