I've always been somewhat skeptical about this topic, and I hope someone can point out any mistakes I'm making. I'm currently engineering the sound effects for a small game project. They've been sent to me as 16-bit files, and I'm required to make gain changes to them (as the game doesn't allow for adjusting the playback volume) so that everything sits at the same level within the game.

Now, if I normalise one of the sounds to -12 dB in Sound Forge while staying at 16-bit, the quietest samples are quantised away to zero and show up as -inf dB. However, when I first convert the bit depth to 64-bit floating point, the low-level resolution is preserved, which clearly makes that the more favourable approach. (I've put a couple of rough numpy sketches at the end of this post to show what I mean.)

But what would I ideally do then? I've always thought to apply Waves IDR to add dither, and then save the result as a 16-bit file. Is that the right way of going about it? Or should I skip the dither and save the 64-bit float processed file straight to 16-bit? Or should I not convert the original 16-bit file to float in the first place? These files won't be processed further in the game, so they'll be played back exactly as they are. I've also heard that I shouldn't dither twice, but how would I know whether the original 16-bit recordings were dithered? And wouldn't it always be better to dither than to truncate or round the signal anyway?

To confuse things even more, there's another recording I've been sent in 16-bit, which I've been asked to process with EQ and compression. So I've converted it to 64-bit float, applied REQ and RComp, and dithered it using IDR. This time, though, the file will be converted to MP3, and the playback level will be set within the game. Should I still be dithering in that case?

Any help on this would be greatly appreciated.
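To illustrate the first point, here's a rough numpy sketch of why a -12 dB gain change applied directly at 16-bit wipes out the lowest-level samples, while the same gain in 64-bit float keeps them. This is just my mental model of the arithmetic, not what Sound Forge actually does internally, and the sample values are made up:

```python
import numpy as np

# Hypothetical 16-bit samples: a quiet tail, just a few LSBs above silence.
quiet = np.array([3, 2, 1, -1, -2], dtype=np.int16)

gain = 10 ** (-12 / 20)  # -12 dB as a linear factor, ~0.251

# Gain applied while staying at 16-bit: results must round back to
# integers, so the 1-LSB samples collapse to 0 (i.e. -inf dB).
as_int16 = np.round(quiet * gain).astype(np.int16)
print(as_int16)   # [ 1  1  0  0 -1]

# Same gain applied in 64-bit float: the low-level detail survives
# as fractional values between the 16-bit steps (~+/-0.25 LSB).
as_float = quiet.astype(np.float64) * gain
print(as_float)
```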
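And on the dither question, here's a minimal sketch of plain TPDF dither when requantising float back to 16-bit. IDR uses shaped dither and is more sophisticated than this; the `float_to_int16` helper and the scaling of samples in LSB units are just assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def float_to_int16(x, dither=True):
    """Requantise float samples (expressed in 16-bit LSB units) to int16.

    With dither=True, TPDF dither (+/-1 LSB peak) is added before
    rounding, which decorrelates the quantisation error from the
    signal; without it, sub-LSB detail is simply rounded away.
    """
    if dither:
        # Triangular PDF: the sum of two uniform [-0.5, 0.5) draws.
        x = x + rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(x), -32768, 32767).astype(np.int16)

# A sub-LSB sine wave: the kind of low-level detail at stake.
t = np.arange(1000)
tiny = 0.4 * np.sin(2 * np.pi * t / 50)   # peaks at 0.4 LSB

print(float_to_int16(tiny, dither=False).any())  # False: rounded to pure silence
print(float_to_int16(tiny, dither=True).any())   # True: the signal survives, carried in noisy LSBs
```

At least by this reasoning, dithering beats rounding whenever there's sub-LSB information worth keeping, which is why I'm unsure whether the "don't dither twice" rule should override that.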