
I've always been somewhat sceptical about this topic, and I hope someone can point out any mistakes I may be making.

I'm currently engineering the sound effects for a small game project. They've been sent to me as 16 bit files, and I'm required to make gain changes to them (as the game doesn't allow for adjusting the playback volume) so that everything is at the same level within the game.

Now, if I were to, say, normalise the gain of one of the sounds to -12 dB using Sound Forge, the low-level information is lost and shows up as -inf dB. However, if I first convert the file to 64-bit floating point, the low-level resolution is preserved, which clearly makes that the more favourable approach.

But what would I ideally need to do then? I've always thought to apply Waves IDR to add dithering and then save the result as a 16-bit file. Is this the right way of going about it? Or should I not be adding dithering and instead save the 64-bit float processed file straight to 16 bit? Or should I not convert the original 16-bit file to a higher bit depth to begin with?
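
To make concrete what I mean by the float-then-dither route, here is a rough sketch in numpy — plain, flat TPDF dither only, which is not what IDR actually does (IDR uses its own noise-shaped dither), and the function name and handling are made up purely for illustration:

```python
import numpy as np

# Hedged illustration: apply a gain change in floating point, then add
# plain TPDF dither before requantising to 16-bit integers.
def gain_and_dither_to_16bit(samples_int16, gain_db):
    # Work in 64-bit float so low-level detail survives the gain change.
    x = samples_int16.astype(np.float64) / 32768.0
    x *= 10.0 ** (gain_db / 20.0)

    # TPDF dither: the sum of two independent uniform noises, spanning
    # +/- 1 LSB at the 16-bit target (1 LSB = 1/32768 in float terms).
    lsb = 1.0 / 32768.0
    dither = (np.random.uniform(-0.5, 0.5, x.shape) +
              np.random.uniform(-0.5, 0.5, x.shape)) * lsb

    # Dither is added once, immediately before the final word-length cut.
    y = np.clip(x + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32768.0).astype(np.int16)
```

The point of the sketch is only that the gain happens in float and the dither is added a single time, right before the final conversion back to 16 bit.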

These files won't be processed further in the game, so they will be played back as they are. However, I've heard that I shouldn't dither twice, but how do I know whether the original 16-bit recordings were dithered or not, and wouldn't it always be better to dither rather than truncate or round the signal anyway?

To confuse things even more, there is another recording I've been sent in 16 bit, which I've been asked to process with EQ and compression. So I've converted it to 64-bit float, applied REQ and RComp, and dithered it using IDR. However, this time the file will be converted to MP3, and the playback level will be set within the game. Should I still be dithering in that case?

Any help on this would be greatly appreciated.

Comments

anonymous Sat, 01/14/2006 - 17:35

Michael Fossenkemper wrote: when you change a file's level or process it, and you end up with a smaller bit depth than the one it was created at, then you should dither.

How does this apply to my situation exactly? I start out with 16 bit...process at a much higher bit depth...giving me a higher bit depth than what I started with...or not? Where do I end up with a smaller bit depth than what I started with?

Another thing I came across while reading another topic here is that after using Waves IDR on a high-bit-depth file, you should use Sound Forge's bit-depth converter to change the bit depth to 16 bit, but with both dither and noise shaping turned off. Is this any different from simply saving the high-bit-depth file as a 16-bit file after having applied processing and dithering?

Michael Fossenkemper Sat, 01/14/2006 - 18:34

You are starting out with a 16-bit file and you are processing in 64 bit, as you stated. You now need to go back down to 16 bit, so you should dither.

Once you dither to 16 bits and do no other processing, it's a 16-bit file regardless of how it's saved. If you use Waves IDR to dither to 16 bits and it's saved as a 24-bit file, the last 8 bits are just zeros, so you can truncate to 16 bit and it won't affect the sound.
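
If you ever want to sanity-check that on an actual file, something along these lines would do it — assuming the 24-bit samples have already been read into a numpy integer array; the function names are made up:

```python
import numpy as np

# If a 24-bit file really was dithered *to 16 bits*, every sample's lowest
# 8 bits should already be zero, and dropping that byte is lossless.
def low_byte_is_zero(samples_int24):
    return bool(np.all((samples_int24 & 0xFF) == 0))

def truncate_24_to_16(samples_int24):
    # Drop the lowest 8 bits (arithmetic shift keeps the sign intact).
    return (samples_int24 >> 8).astype(np.int16)
```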

anonymous Sat, 01/14/2006 - 20:09

Ah, sorry, I'm with you now. This makes sense, and it's good to know I've been doing it right. I also tested both approaches: saving the file directly in 16 bits, and first converting the bit depth to 16 bit with dither and noise shaping set to "off" and then saving it with the default template. Both came out identical.

This still leaves one thing I'm unsure about though. What if one of the original 16 bit files I received had already been dithered from a higher bit depth down to 16 bits? What's the easiest way of telling? I know that Waves IDR adds a distinct amount of high frequency content at a low level, observable with a spectrum analyser, which I've seen in varying degrees on professional albums as well. But what if a "lower" noise shaping curve had been used? Should I still always be dithering when going from a higher bit depth to a lower one, regardless of what may have been done to the file previously?
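
The closest thing to a check I can think of is looking at the noise floor of a quiet stretch, sketched below on the assumption that the samples are available as a numpy array. It only catches the obvious rising floor of aggressive noise shaping; a gentle shaping curve or flat dither could easily hide underneath the program material, so it is indicative rather than conclusive:

```python
import numpy as np
from scipy.signal import welch

# Rough, non-conclusive check for noise-shaped dither: inspect the noise
# floor of a quiet passage (e.g. a fade-out tail). Shaped dither tends to
# push its energy towards the top of the band, rising sharply above
# roughly 15 kHz at a 44.1 kHz sample rate.
def tail_spectrum_db(samples_int16, fs=44100, tail_seconds=1.0):
    tail = samples_int16[-int(fs * tail_seconds):].astype(np.float64) / 32768.0
    freqs, psd = welch(tail, fs=fs, nperseg=4096)
    return freqs, 10.0 * np.log10(psd + 1e-30)   # in dB; avoid log(0)

# A floor that is flat up to ~15 kHz and then climbs steeply suggests
# shaped dither was applied at some point; a flat floor proves nothing.
```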

This actually ties into a question I've had regarding mastering as well. If I were to receive a mixdown rendered by a sequencer in 16 bits, should I be dithering after processing the file (at a higher bit depth)? And using "ultra" noise shaping? I know most sequencers have the option of dithering the rendered output or not, but what happens if you disable this? Will the signal be rounded or truncated? And which would be better for mastering purposes (other than simply rendering at a higher bit depth)?

One final and somewhat related question is whether it's safe to fade the dither noise out or not. Whenever I've looked at the waveforms of commercial tracks in the past, the start and end samples tend to be 0, with the level fading in or out rather than starting away from the DC line. Is it safe to say that I can fade the dither noise (and program material) in and out using the "fade out" tool in Sound Forge before saving the file in 16 bits?

Sorry, so many questions. Would really appreciate any answers to these.

anonymous Mon, 01/16/2006 - 17:34

Thomas W. Bethel wrote: Some good information on dither at (Dead Link Removed)

Thanks for the link. I have read it before, and did so again, along with many other articles, but I think I'm still stuck in the wrong way of thinking.

At first I thought that the difference between 16 bit and 24 bit was that 24 bit allowed for more intermediate sample values, but I now understand the 8 extra bits just subdivide each 16-bit step into 256 finer steps of lower-level information? So the upper 16 bits of the sound carry just as much resolution either way? But then what explains the staircasing effect you get when creating a low-frequency sine wave in 8 bit, or even in 16 bit? I think I'm lost in this whole quantisation and re-quantisation way of thinking, and how it relates to introducing noise. Where does the noise come from? Does the staircasing effect distort the waveform in a way that creates additional harmonics and high-frequency noise?
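
If it helps, here is a small experiment (numpy, purely illustrative) that shows where the noise comes from: quantise a very quiet sine wave to 8 bits with and without TPDF dither and look at where the error energy ends up. Without dither the error is locked to the waveform — that is the staircase — so it shows up as harmonics of the signal; with dither the error is decoupled from the signal and turns into a flat hiss instead:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.01 * np.sin(2 * np.pi * 100 * t)        # very low-level 100 Hz sine
scale = 127.0                                  # 8-bit signed full scale

# Plain rounding: the error follows the waveform (the staircase), which is
# what produces harmonic distortion rather than benign noise.
plain = np.round(x * scale) / scale

# TPDF dither before rounding: the error is decoupled from the signal.
tpdf = (np.random.uniform(-0.5, 0.5, x.shape) +
        np.random.uniform(-0.5, 0.5, x.shape))
dithered = np.round(x * scale + tpdf) / scale

for name, y in [("rounded only", plain), ("TPDF dithered", dithered)]:
    err = y - x
    spectrum = np.abs(np.fft.rfft(err * np.hanning(len(err))))
    peak_bin = int(np.argmax(spectrum[1:])) + 1
    print(name, "-> largest error component near",
          round(peak_bin * fs / len(err)), "Hz")
```

The rounded-only version reports a multiple of 100 Hz, because its error repeats with the signal; the dithered version reports an essentially random frequency, because its error is now just noise.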

Also, I rendered a plugin chain on a 64-bit float file in Sound Forge, with Waves IDR at the end. I then saved three 16-bit versions: one without any further processing, another after fading the dither out while still in 64-bit float mode, and a third after fading out the dither noise in the already-saved 16-bit file (and saving it again over the top). I then selected the tail of the sound and normalised it, and was surprised to hear so much noise in the file I faded out while in 64-bit float mode. Where does it come from? The waveform looks identical to the one I faded while in 16-bit mode, and yet the latter doesn't have any artefacts during the fade-out portion.

anonymous Mon, 01/16/2006 - 18:57

Oh, it seems the added noise did come back after fading the file out while in 16 bit as well. You only hear it after re-saving it; the fade sounds flawless at first. Why is this?

So basically there's no way of fading out dither noise and program material so that the start and end samples actually sit at 0? How do others handle this? When I process a file, the waveform also shifts 1,534 samples forward. :roll:
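
For what it's worth, here is a hedged sketch of what the two fades amount to (numpy, hypothetical names). Any fade applied after dithering is itself a gain change, so the samples no longer sit on exact 16-bit steps and get re-quantised on save; without fresh dither the tail of the fade is simply rounded, which would explain the distortion coming back. Re-dithering after the fade, as the very last step, avoids it:

```python
import numpy as np

# Hypothetical sketch: a fade applied after dithering is a gain change,
# so the result needs one more dither pass as the very last step.
def fade_out_and_requantise(samples_int16, fade_len, redither=True):
    x = samples_int16.astype(np.float64) / 32768.0
    fade = np.ones(len(x))
    fade[-fade_len:] = np.linspace(1.0, 0.0, fade_len)   # linear fade to zero
    x *= fade

    lsb = 1.0 / 32768.0
    if redither:
        # Fresh TPDF dither so the re-quantisation error stays uncorrelated.
        x += (np.random.uniform(-0.5, 0.5, x.shape) +
              np.random.uniform(-0.5, 0.5, x.shape)) * lsb

    return np.round(np.clip(x, -1.0, 1.0 - lsb) * 32768.0).astype(np.int16)
```

With redither=False the quiet end of the fade is rounded straight onto the 16-bit grid, which matches the re-quantisation distortion heard after re-saving. With redither=True the fade ends in a very low noise floor instead; the last few samples then wander within about one LSB of zero rather than sitting at exactly 0, which seems to be the usual trade-off.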
