If a client brings you a 24-bit recording and only wants a mastered digital version of it, do you go ahead and dither it down to 16-bit even if they only want it for online distribution, not for CD?

Comments

TrilliumSound Sat, 03/24/2012 - 07:19

Not necessarily. One could take a 24-bit file and encode it to mp3 directly; some say that sounds better than encoding from a 16-bit source. There are many different encoders out there, and some may sound "better" than others. Either way, I would suggest turning down the output "ceiling" by 0.5 dB on the source file to prevent any overs in the mp3.
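As a rough sketch of that workflow in Python, using pydub (just one possible tool, not something mentioned above; it needs ffmpeg with libmp3lame installed, and the filenames are placeholders):

    from pydub import AudioSegment

    # Load the 24-bit master; pydub decodes it through ffmpeg.
    master = AudioSegment.from_file("master_24bit.wav")

    # Pull the whole file down by 0.5 dB so the peaks sit just below 0 dBFS,
    # leaving headroom for the small overshoots mp3 encoding can add.
    lowered = master.apply_gain(-0.5)

    # Encode straight from the high-resolution source.
    lowered.export("release.mp3", format="mp3", bitrate="320k")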

bouldersound Sat, 03/24/2012 - 10:59

rbf738, post: 386949 wrote: So it sounds to me that in general when mastering, the bit depth will be dropped to 16 if it's in 24 or 32, yes?

Mastering for what? If a CD release is the primary goal then you'll have to go to 16 bit at some point in the process, which will require reducing the word length (truncating or dithering). I have heard it suggested to encode mp3 files from the highest quality file you have available, to give the encoder more to work with, but I can't confirm that as preferable.

RemyRAD Sat, 03/24/2012 - 15:48

You record and master at the highest quality you normally work at. Your final master may in fact be 24-bit or 32-bit at 88.2/96/192 kHz, and that's what you deliver to your client as the final master. Then, when your client asks for a CD/DVD release version or a version for the Internet, you simply choose a type of dither to dumb your pristine final master down to the production release masters, in whatever formats the client requests. How you dither down is your choice and your responsibility, as there are numerous types and kinds of dither that will affect the sound of your 16-bit, 44.1 kHz .wav and MP3, WMA, iTunes, etc. files. So use your headphones, use your speakers, and choose whatever dither algorithm you feel best suits you and your client's product.

I can guarantee you this much: your client, and virtually everybody else in the world, won't give a hoot what kind of dither you used when they're driving down the highway with the windows open at 62 mph/100 km/h. It may not even make much difference on an MP3 release, with all of the completely audible MP3 artifacting going on. So then, who cares about dither? It's important for a CD release, but then, who is purchasing CDs anymore?

What's it all about...Alfie?
Mx. Remy Ann David
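For anyone who wants to see the bare mechanics of the step RemyRAD describes, here is a minimal sketch of plain, flat TPDF dither down to 16 bits in Python with numpy and soundfile. This is only the textbook version, not any of the branded dither algorithms alluded to above, and the filenames are placeholders:

    import numpy as np
    import soundfile as sf

    # Read the high-resolution master as floating point in the range [-1, 1].
    audio, rate = sf.read("final_master_96k.wav", dtype="float64")

    # Flat TPDF dither: the sum of two independent uniform noises,
    # +/-1 LSB peak at the 16-bit target.
    rng = np.random.default_rng()
    tpdf = rng.uniform(-0.5, 0.5, audio.shape) + rng.uniform(-0.5, 0.5, audio.shape)

    # Scale to 16-bit steps, add the dither, round, and clip to the legal range.
    pcm16 = np.clip(np.round(audio * 32767.0 + tpdf), -32768, 32767).astype(np.int16)

    # Write the 16-bit release file. Sample-rate conversion to 44.1 kHz,
    # if the master is at a higher rate, is a separate step not shown here.
    sf.write("release_16bit.wav", pcm16, rate)

Noise-shaped dithers (POW-r, UV22 and so on) add the noise less audibly than this flat version, which is part of why different dither choices can sound different from one another.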

Laarsø Mon, 05/07/2012 - 08:12

If your product is only AAC (or MP3), which would be bizarre, then there is no reason not to decimate straight from 24 bits. However, every digital audio gain change requires dither if it is not to produce quantization error that is correlated with the signal, so if it turns out that the decimating codec doesn't use dither internally, it would be better to use a different one.
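A quick numerical illustration of that correlation point (my own sketch, not anything from the post: the 997 Hz test tone, the deliberately tiny 0.4 LSB level, and the simple flat TPDF dither are all arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 44100
    t = np.arange(fs) / fs

    # A very quiet tone, peaking at 0.4 LSB of a 16-bit system, so plain
    # rounding will throw it away entirely.
    signal = (0.4 / 32767.0) * np.sin(2 * np.pi * 997 * t)

    def quantize16(x, dither):
        scaled = x * 32767.0
        if dither:
            # TPDF dither: two independent uniform noises, +/-1 LSB peak.
            scaled += rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))
        return np.round(scaled) / 32767.0

    for dither in (False, True):
        error = quantize16(signal, dither) - signal
        corr = np.corrcoef(error, signal)[0, 1]
        print(f"dither={dither}: corr(error, signal) = {corr:+.3f}")

Without dither the tone rounds to silence and the error is exactly the inverted signal (correlation -1); with TPDF dither the error behaves like unrelated noise (correlation near zero) instead of tracking the signal.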

As I understand it, the iTunes Store requires the upload to be LPCM. They do the decimation, over at Infinity Lane, or some such place. hehe. So, for the iTunes Store, you should use dither, too, but possibly only when going from the output of the brick wall limiter to the AES bus (at either 24 or 16 bits per word).

As for "encoding" and "decoding," these words are, in my fur-brained opinion, not accurate descriptions of what goes on in an MP3 or AAC type of conversion. This is not a way to encode a message that can be decoded without error. In fact, it is not a way to encode a message that can be decoded without SpaceMonkeys! This is not encoding. Uncompressed (data-wise) digital audio, itself, is encoded analog. So, the more accurate way to describe AAC and MP3 is that they represent the decimation of encoding. Every file is essentially reduced to 1/10 its original size. Dirty codec.

Cheersø,
Laarsø