Hi, I'm curious about dithering. Now, I'm coming from a programming standpoint on this. Here are my thoughts:

In 24-bit audio, sample values range from -2^23 to 2^23 - 1 (numeric values, not decibels). In 16-bit audio, they range from -2^15 to 2^15 - 1. 2^23 = 8,388,608 and 2^15 = 32,768. So to convert 24-bit audio to 16-bit, all one needs to do is multiply each 24-bit sample by 32,768 / 8,388,608 (i.e., divide by 256) and store the result in a 16-bit buffer. When track volumes are adjusted, the program does the same thing: it works out the ratio implied by the slider position and multiplies each sample by it.

So why is dithering used when converting from 24-bit mixes to 16-bit? If, for example, someone recorded and mixed entirely in 16-bit, they would still have the artifacts that come from the rounding error of multiplying by fractions. Why don't they use dithering to mask that too?

Maybe I'm missing something and I should stop rambling. Could someone shed some light on this for me? Thanks.
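In case it helps clarify what I mean, here's a minimal sketch in C of the conversion I'm describing, done two ways: plain truncation, and with triangular (TPDF) dither added before requantizing. This is just my own illustration, not any particular program's code; the function names are made up, the `rand()`-based noise is only there to keep it self-contained, and I'm assuming signed samples and an arithmetic right shift:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Plain truncation: drop the low 8 bits, which is the same as
   multiplying by 32768/8388608. The rounding error is correlated
   with the signal itself. (Assumes >> on a negative int32_t is an
   arithmetic shift, which is the common behavior.) */
static int16_t truncate24to16(int32_t s24)
{
    return (int16_t)(s24 >> 8);
}

/* TPDF dither: add triangular noise spanning about +/-1 LSB of the
   16-bit target (+/-256 in 24-bit units) before requantizing, so the
   error is decorrelated from the signal. */
static int16_t dither24to16(int32_t s24)
{
    /* Sum of two uniform values in [-128, 127] gives a roughly
       triangular distribution in [-256, 254]. A real implementation
       would use a better generator than rand(). */
    int32_t tpdf = (rand() % 256 - 128) + (rand() % 256 - 128);
    int32_t d = s24 + tpdf;

    /* Clamp so adding noise can't overflow the 24-bit range. */
    if (d >  8388607) d =  8388607;
    if (d < -8388608) d = -8388608;
    return (int16_t)(d >> 8);
}

int main(void)
{
    /* A very quiet 24-bit sample, well below one 16-bit LSB (256). */
    int32_t quiet = 100;
    printf("truncated: %d\n", truncate24to16(quiet)); /* always 0 */
    printf("dithered:  %d\n", dither24to16(quiet));   /* 0 or +/-1 */
    return 0;
}
```

With truncation the quiet sample always collapses to 0, but with dither it flickers between 0 and +/-1 from sample to sample, so the low-level signal isn't simply erased.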