[JAVA][Beginning Java 8 Games Development]

Chapter 5 ■ An Introduction to Game Design: Concepts, Multimedia, and Using Scene Builder

frame in a video data stream. This is why a talking head video will encode better than a video in which every pixel moves on every frame (such as video that uses fast camera panning or rapid field of view [FOV] zooming).

A key frame is a setting in a codec that forces the codec to take a fresh sampling of your video data every so often. There is usually an auto setting for key frames, which lets the codec decide how many key frames to sample, as well as a manual setting, which lets you specify the key frame sampling interval, usually as a certain number of times per second or over the duration of the entire video (total frames).

Most codecs also have either a quality or a sharpness setting (a slider) that controls the amount of blur applied to a video frame before compression. In case you are not familiar with this trick, applying a slight blur to your image or video, although usually not desirable, can allow for better compression, as sharp transitions (edges) in an image are harder to encode, taking more data to reproduce than softer transitions. That said, I would keep the quality (or sharpness) slider between 80 and 100 percent and try to reduce your data footprint using one of the other variables that I have discussed here, such as decreasing the resolution, frame rate, or bit rate.
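As a rough illustration of why lowering the resolution or frame rate shrinks the data footprint, here is a small Java sketch (my own arithmetic, not code from this chapter) that computes the raw, uncompressed data rate a codec has to start from:

```java
// Illustrative arithmetic only: the raw (uncompressed) data rate of
// 24-bit RGB video, showing how resolution and frame rate multiply
// together to determine the data footprint a codec must reduce.
public class RawVideoRate {

    // Bytes per second for raw video: 3 bytes (24 bits) per pixel.
    static long rawBytesPerSecond(int width, int height, int fps) {
        return (long) width * height * 3 * fps;
    }

    public static void main(String[] args) {
        long hd = rawBytesPerSecond(1920, 1080, 30); // 186,624,000 bytes/s
        long sd = rawBytesPerSecond(1280, 720, 24);  //  66,355,200 bytes/s
        System.out.println("1080p at 30 fps, raw: " + hd + " bytes/sec");
        System.out.println(" 720p at 24 fps, raw: " + sd + " bytes/sec");
    }
}
```

Dropping from 1080p at 30 fps to 720p at 24 fps cuts the raw data rate by nearly two-thirds before the codec's compression is even applied, which is why these two variables are such effective data footprint levers.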

Ultimately, there are a number of different variables that you will need to fine-tune to achieve the best data footprint optimization for any given digital video asset. It is important to remember that each video asset will look different (mathematically) to a digital video codec. For this reason, no standard settings can be developed that will achieve a given compression result for every video. That said, experience tweaking the various compression parameters will, over time, give you a feel for the settings that you have to change to get the desired end result.

Digital Audio Concepts: Amplitude, Frequency, Samples

Those of you who are audiophiles know that sound is created by sending sound waves pulsing through the air. Digital audio is complex; part of that complexity comes from the need to bridge analog audio technology, created with speaker cones, with digital audio codecs. Analog speakers generate sound waves by pulsing them into existence. Our ears receive analog audio in exactly the opposite fashion, catching those pulses of air, or vibrations with different wavelengths, and turning them back into data that our brain can process. This is how we “hear” sound waves; our brain then interprets the different audio sound wave frequencies as different notes, or tones.

Sound waves generate various tones, depending on the frequency of the sound wave. A wide, or infrequent (long), wave produces a low (bass) tone, whereas a more frequent (short) wavelength produces a higher (treble) tone. Interestingly, different frequencies of light produce different colors, so there is a close correlation between analog sound (audio) and analog light (color). There are many other similarities between digital audio and digital images (and video) that will also carry through into your digital new media content production, as you will soon see.

The volume of a sound wave is determined by its amplitude, or the height (or size) of that wave. Thus, the frequency of sound waves equates to how closely together the waves are spaced along the x axis, if you are looking at them in 2D, and the amplitude equates to how tall the waves are, as measured along the y axis.
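To make the frequency and amplitude relationship concrete, here is a minimal Java sketch (an illustration of mine, not code from this chapter) that generates one second of a sine wave as 16-bit PCM samples; the frequencyHz parameter sets the tone (pitch), and amplitude, from 0.0 to 1.0, sets the volume:

```java
// Minimal sketch: synthesize one second of a sine wave as 16-bit PCM.
// Frequency controls spacing along the time (x) axis; amplitude controls
// wave height along the y axis, scaled into the signed 16-bit range.
public class SineWave {

    static short[] sineWave(double frequencyHz, double amplitude, int sampleRate) {
        short[] samples = new short[sampleRate]; // one second of audio
        for (int i = 0; i < samples.length; i++) {
            double t = (double) i / sampleRate;  // time in seconds
            samples[i] = (short) (amplitude * Short.MAX_VALUE
                    * Math.sin(2.0 * Math.PI * frequencyHz * t));
        }
        return samples;
    }

    public static void main(String[] args) {
        // Concert A (440 Hz) at half volume, CD-quality sample rate.
        short[] a440 = sineWave(440.0, 0.5, 44100);
        System.out.println("First sample: " + a440[0]); // sin(0) is 0
    }
}
```

Doubling frequencyHz raises the tone by an octave, while halving amplitude lowers only the volume; the two parameters map directly onto the frequency and amplitude concepts described above.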

Sound waves can be uniquely shaped, allowing them to “piggyback” various sound effects. A “pure,” or baseline, type of sound wave is called a sine wave (which you learned about in high school trigonometry, with the sine, cosine, and tangent math functions). Those of you who are familiar with audio synthesis are aware that other types of sound waves are also used in sound design, such as the saw wave, which looks like the edge of a saw (hence its name), and the pulse wave, which is shaped using only right angles, resulting in immediate on and off sounds that translate into pulses (or bursts) of audio.
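The wave shapes named above can be sketched as simple functions of phase. This hypothetical Java snippet (mine, not from the book) returns one sample of each shape, for a phase position between 0.0 and 1.0 within a single wave cycle:

```java
// Hypothetical sketch of the three wave shapes discussed in the text,
// each returning a sample value in the range [-1.0, 1.0] for a given
// phase in [0.0, 1.0) within one cycle of the wave.
public class WaveShapes {

    // Sine: the "pure" baseline wave shape.
    static double sine(double phase) {
        return Math.sin(2.0 * Math.PI * phase);
    }

    // Saw: ramps linearly from -1 up to +1, then drops instantly,
    // tracing the edge of a saw blade.
    static double saw(double phase) {
        return 2.0 * phase - 1.0;
    }

    // Pulse (square): right angles only; instantly on (+1) or off (-1).
    static double pulse(double phase) {
        return phase < 0.5 ? 1.0 : -1.0;
    }
}
```

Plotting each function over phase values from 0.0 to 1.0 reproduces the smooth curve, sawtooth edge, and right-angled pulse shapes described above.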

Even randomized waveforms, such as noise, are used in sound design to obtain edgy sound results. As you may have ascertained, using your recently acquired knowledge of data footprint optimization, the more “chaos,” or noise, present in your sound wave (and in new media data in general), the harder it is for a codec to compress. Therefore, more complex sound waves will result in larger digital audio file sizes, owing to the chaos in the data.
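You can observe this effect directly with a quick experiment of my own (not from the chapter), using java.util.zip.Deflater to compress a predictable sine wave and random noise of the same length; the chaotic noise buffer deflates to a much larger size:

```java
import java.util.Random;
import java.util.zip.Deflater;

// Quick experiment: deflate a smooth, repeating sine wave versus pure
// random noise of the same length. The predictable sine data compresses
// well; the chaotic noise barely compresses at all.
public class NoiseVsSine {

    static int deflatedSize(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] out = new byte[data.length * 2]; // room for worst case
        int size = deflater.deflate(out);
        deflater.end();
        return size;
    }

    public static void main(String[] args) {
        byte[] sine = new byte[44100];
        byte[] noise = new byte[44100];
        for (int i = 0; i < sine.length; i++) {
            sine[i] = (byte) (127 * Math.sin(2.0 * Math.PI * 440.0 * i / 44100.0));
        }
        new Random(42).nextBytes(noise); // fixed seed for repeatability

        System.out.println("sine  compresses to " + deflatedSize(sine) + " bytes");
        System.out.println("noise compresses to " + deflatedSize(noise) + " bytes");
    }
}
```

The same principle applies to lossy audio codecs: the more chaotic the waveform, the more data the codec must spend to reproduce it, and the larger the resulting file.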
