24 bit to 16 bit: what's lost?

Discussion in 'Mixing & Song Critique' started by Mundox, Nov 6, 2003.

  1. mjones4th

    mjones4th Active Member

    Joined:
    Aug 15, 2003
    hmmmm.....

    I guess even the mighty mitz can be wrong. I always thought that analog media could capture information up in the 200kHz range? Which is dog-whistle territory.

    mitz
     
  2. Alécio Costa - Brazil

    Alécio Costa - Brazil Well-Known Member

    Joined:
    Mar 19, 2002
    Hi Missi!
    Our brains are connected, bud!
    As I was reading your nice long post, I was thinking the same.
    Yes, I do mix with the 02R and its stereo out goes to a spare stereo track within the very same PT Mix 5.1.1 session rig.
    No fade-outs with the 02R.
    I would like to add one or two more Mix Farm cards to my Mix+ rig (so reverb plug-ins could substitute for the 02R/hardware units) and do a direct comparison between mixing inside the box and using semi-direct outs + outboard gear.
    Of course, printing efx is not a sin.
     
  3. teleharmonic

    teleharmonic Guest

    hi danv1983 :)

    I'm not trying to get bogged down in technicalities, and I am certainly no expert on the physics of sound, but the point I was making is that it is misleading to use the term 'sample rate' when referring to magnetic tape... tape and digital are different mechanisms that store information in different ways.

    While tape moves continuously, that does not translate into an infinite sample rate. If a device had an infinite sample rate then, by definition, it would be capable of capturing an infinite range of frequencies (or, according to Nyquist and Shannon, just less than half an infinite range of frequencies :) ). A device performs no better than its capabilities allow, regardless of how you visualize its functionality (e.g. visualizing 'choppy' digital samples versus 'smooth' magnetic tape rolling along). When the sound comes out of the speaker (now THAT'S analog) from both a tape system and a digital system, the waves are waves. How GOOD the waves sound is up to you.

    I realize I'm getting into semantics here, but I know that sometimes, for me, understanding some of the techno-babble actually helps me move past the babble and on to recording. Which is the idea... right? :)

    As I said before, there is no analog vs. digital issue for me... just sound, beautiful sound.

    cheers.
     
  4. by

    by Guest

    I believe one of the main things people tend to forget is the whole idea of harmonics. With digital recordings at 44.1 there is definitely a limit to how much of that stuff can be captured. Even if you don't hear those frequencies, it bloody well does not mean they have no impact on the sound. Additionally, when summing together bunches of 44.1 tracks, the degradation becomes more and more apparent. With higher sample rates (96k) you get lots more harmonics, and adding those tracks together is where the importance really becomes more obvious. A fun trick to try out is to convert all your 44.1 files to a much higher rate, then mix them, then compare. I guess a question would be: does bit depth support this same idea? Something I've noticed is that simply adding some noise (more than what normal 'dither' uses) can yield a smoother sound on mixdown.

    In the analog tape world, when sounds are combined there is no number crunching, obviously, so all those frequencies aren't going to end up chopped up and shuffled around. There is also automatic compression and a ton more distortion going on.
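    Since the thread is about 24-bit to 16-bit reduction, the effect of dither is easy to demonstrate numerically. A minimal sketch, assuming Python with NumPy (the tone level and the TPDF dither shape are standard textbook choices, not anything from this thread): a tone quieter than half an LSB vanishes entirely when truncated to 16 bits, but survives when dither is added first.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the demo is repeatable

def to_16bit(x, dither=True):
    """Reduce float samples in [-1, 1) to 16-bit, optionally with TPDF dither."""
    scaled = x * 32767.0
    if dither:
        # Triangular (TPDF) dither: sum of two uniform +/-0.5 LSB sources
        scaled = scaled + rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(scaled), -32768, 32767) / 32767.0

# A 1 kHz tone only 0.4 LSB in amplitude: below the undithered 16-bit floor
fs = 44100
t = np.arange(fs) / fs
quiet = (0.4 / 32767.0) * np.sin(2 * np.pi * 1000 * t)

plain = to_16bit(quiet, dither=False)    # every sample rounds to zero
dithered = to_16bit(quiet, dither=True)  # the tone survives, buried in noise
```

    Undithered, the rounding error is correlated with the signal (distortion); with TPDF dither it becomes benign noise, which is why dither is applied before the final 16-bit bounce.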
     
  5. teleharmonic

    teleharmonic Guest

    This is an interesting experiment, Yon. I've often pondered whether adding noise might help 'unify' tracks but have never actually spent the time to try it out. What kind of noise did you use, and what kind of material was it?
     
  6. Spengler874

    Spengler874 Guest

    Hey... another former Westchester-ite (Croton-on-Hudson) representing.

    Regarding sample rates... the Nyquist theorem predicts that you DON'T lose anything between samples, so long as your sampling rate is more than double the highest frequency you are recording. So really, 44.1 is fine because only dogs are listening above 22k.

    Spengler
     
  7. sdevino

    sdevino Active Member

    Joined:
    Mar 31, 2002
    Wow this thread is full of almost true statements.

    Spengler is right. A well designed digital system does not lose anything between the samples that is within the bandwidth of the system. It has to do with resonance and the band-limited sin(x)/x impulse response of a perfect brickwall filter. So a 44.1 kHz sample rate will capture EXACTLY the same data as a 96 kHz sample rate if they are both preceded by the same 22 kHz low-pass filter.
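    This claim is easy to test numerically. A sketch, assuming Python with NumPy (the 1 kHz tone and the durations are arbitrary illustrative choices): sample the same band-limited tone at 44.1 kHz and at 96 kHz, reconstruct both with the Whittaker-Shannon (sinc) formula, and compare away from the edges.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Ideal reconstruction x(t) = sum_n x[n] * sinc(fs*t - n) (Whittaker-Shannon)."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

f = 1000.0                                 # test tone well inside both passbands
signal = lambda t: np.sin(2 * np.pi * f * t)

fs1, fs2 = 44100.0, 96000.0
x1 = signal(np.arange(0, 0.02, 1 / fs1))   # 20 ms sampled at 44.1 kHz
x2 = signal(np.arange(0, 0.02, 1 / fs2))   # the same 20 ms at 96 kHz

# Evaluate both reconstructions on a common fine grid, away from the edges
t = np.arange(0.005, 0.015, 1 / 384000.0)
r1 = sinc_reconstruct(x1, fs1, t)
r2 = sinc_reconstruct(x2, fs2, t)

err1 = np.max(np.abs(r1 - signal(t)))
err2 = np.max(np.abs(r2 - signal(t)))
```

    Both errors come out small (they would be zero with an infinitely long signal and ideal filters): for in-band material the two rates carry the same information.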

    If an analog processor has a 20 kHz passband and an ADC has the same 20 kHz passband, then both will carry the same frequency information. And since a 24 bit converter is capable of 144 dB of dynamic range, it will have the same net detail as a well designed analog path, which will be limited to around 110 dB of dynamic range (i.e. both are in effect limited to the analog side's smaller dynamic range).

    Next:
    While many claim that frequencies above 20 kHz affect what you hear in the audio range, no one has been able to prove this to date. That does not mean it is not true; it just means that every published attempt to prove it has been inconclusive (including a very detailed experiment whose findings were just presented at the AES convention last month in NYC).

    Next: sample rate conversion does not change the absolute level. The peak levels will vary but the RMS values will stay the same. The peak levels of a waveform sampled at its half-power points will be 1.414x higher when passed through a reconstruction filter. This is correct and represents an undistorted signal. The problem comes when someone normalizes the half-power point to full scale. That is an engineering error and poor gain stage management, not an error in the design of the digital mixer.

    It is also a myth that sample rate conversion from 88.2 to 44.1 is better than 96 to 44.1, or any other combination for that matter. Sample rate conversion is performed by resampling to a very high sample rate and then resampling again at the lower rate, or by convolution with a time domain response curve at the target sample rate. They don't just take every other sample.

    Kurt was just about the only person on this thread who spoke complete truth. Analog probably is not as accurate as digital, but some people (especially Kurt) like the sound of analog better anyway. Which brings up a good point: despite all the scientific mumbo jumbo I talked about above, none of us has any idea how well any of the manufacturers implement any of this "theory". You really need to trust your ears and experiment. The physics is never as simple as we describe it, and it certainly is never implemented as well as it should be.

    Steve
     
  8. mjones4th

    mjones4th Active Member

    Joined:
    Aug 15, 2003
    No way.

    According to that, if I brickwall-lowpass a wave at 500Hz and digitally record it at 2kHz, it will contain the same information as if I digitally recorded it at 44.1kHz. No way.

    By resonance I assume you mean the interplay between those discrete samples when they reach our ears?

    My ears tell me that my Delta 66 converters sound better at 96k than at 44.1k. The theory of wave propagation, combined with digital representations of said waves, tells me that the higher the sample rate of the digital representation, the closer you get to the original wave.

    To simplify matters, I used the analogy that the information between samples is lost. In theory, what really happens is that a second of audio is represented by 44100 stairs, or 96000 stairs, etc. Each sample takes 1/44100th of a second to play. In that time the original analog information has done many things (changing intensity, frequency makeup, etc.), but the digital representation of that 1/44100th of a second of analog information is static for that length of time.

    So imagine a smooth sine wave a second long. Now divide it up into 44100 regions. From the beginning of each region to its end, draw a horizontal line capturing the average level of that region. That is what a digital representation looks like: a bunch of stairs going up and down, up and down....

    Now if we divide those regions in half to create 88200 regions, we get closer to the original waveform: smaller steps at closer intervals. Divide them in half again... etc.

    But we will ALWAYS have steps. Always. That means the digital waveform, although a very accurate representation of the analog original, is still imprecise and lossy. Furthermore, the higher the sample rate, the closer we get to the theoretical limit, the analog waveform.

    Not trying to trump you or anything, it's just that I learned differently. I don't know the math behind it offhand, but I could go upstairs to the study and find it easily. What I am doing is explaining the gist of the theory as I learned it.

    You are correct.

    There are two ways to convert a sample rate, as you mention. The first is to essentially treat the signal as if it were analog (target: an infinitely high sample rate; best we can do: a very high sample rate) and resample it.

    The second is what I call interpolation, which is more commonly used in timestretching: synthesize each sample in the new waveform from those preceding and following it in time in the old waveform. The higher the quality of the interpolation, the more samples are analyzed to synthesize each new sample.
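    That interpolation idea can be sketched in a few lines. A toy windowed-sinc resampler, assuming Python with NumPy (the rates, tone, and 64-tap window are illustrative choices; real converters use much longer, carefully designed filters, and must also low-pass before downsampling):

```python
import numpy as np

def resample_sinc(x, fs_in, fs_out, half_width=32):
    """Synthesize each output sample from neighbouring input samples
    using a Hann-windowed sinc kernel (upsampling-only toy)."""
    n_out = int(len(x) * fs_out / fs_in)
    y = np.zeros(n_out)
    for m in range(n_out):
        t = m * fs_in / fs_out                 # output instant, in input samples
        n0 = int(np.floor(t))
        n = np.arange(max(0, n0 - half_width + 1),
                      min(len(x), n0 + half_width + 1))
        w = 0.5 * (1 + np.cos(np.pi * (t - n) / half_width))  # Hann taper
        y[m] = np.sum(x[n] * np.sinc(t - n) * w)
    return y

# 1 kHz tone at 8 kHz, resampled to 12 kHz (a non-integer 3:2 ratio)
x = np.sin(2 * np.pi * 1000 * np.arange(2000) / 8000)
y = resample_sinc(x, 8000, 12000)
ref = np.sin(2 * np.pi * 1000 * np.arange(len(y)) / 12000)
err = np.max(np.abs(y[200:-200] - ref[200:-200]))   # ignore the edges
```

    As the post says, the more neighbouring samples each output point draws on (a larger half_width here), the more accurate the interpolation.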

    But either way, you still lose information.

    I like to explain things in plain English; it helps people grasp the concept. The math is the easy part.

    mitz
     
  9. Randyman...

    Randyman... Well-Known Member

    Joined:
    Jun 1, 2003
    Location:
    Houston, TX
    I agree the digital info will be "stepped" less and less at hi-res sample rates, but you need to look at the D/A's analog output to really compare 2 digital signals as we hear them.

    Yes, there is a slight change within a 20kHz wave in 1/44,100th of a second, but nothing the D/A won't be able to re-construct up to its Nyquist frequency. A 20kHz sine won't look any different on the output of the D/A at 44.1K or 96K. All the 96K will do is take 4 pictures of the 20K wave instead of 2. All the D/A needs is 2 samples to re-construct the wave accurately.

    But then again, I don't know $*^t! Me beats things for fun! (I'm a drummer)...

    Anyway, this is a 16 to 24 bit thread, not a hi-res sample rate thread. Sorry :(

    Later :cool:
     
  10. sdevino

    sdevino Active Member

    Joined:
    Mar 31, 2002
    Yes way. This is basic Nyquist theory and, except for the lack of perfect brickwall filters, absolute fact. I have proven this to myself in many, many DSP-based audio analysers I built over the years when I worked for Teradyne.


    No, I mean the resonance of the electrons passing through a band-limited circuit. The little stair steps do not make it through the reconstruction filter, because the steps represent very high frequency information which a band-limited circuit cannot keep up with. The sampled data represents the analog data in terms of power and phase, just stored in slightly different form. Passing through a brickwall filter causes the original time and phase data to be restored.
    The difference you hear has to do with the reconstruction filter implementation, not the sample rate. And Nyquist theory says, and it has been proven many times, that I can perfectly reconstruct a sinewave from 2 samples per cycle plus sin(x)/x compensation. So if I sample a 20kHz signal at 40kHz, I will end up with a motley looking stepped waveform close to a square wave. When I pass that back through the reconstruction process I will have exactly the same sinewave I started with, minus sin(x)/x rolloff (although this is typically compensated for in a well designed converter).

    This is a common misconception. Anything that happens between the little 1/44,100th of a second steps is not audio, so NO audio is lost. The argument over whether or not those higher frequencies are detectable is a separate discussion. It is also an area where there has been no successful attempt to prove that it matters or does not matter.

    All your descriptions are correct, except for the lossy part. It is only lossy if you conclude that infinite bandwidth is needed for audio reproduction. Those little stair steps represent the sample frequency with modulation sidebands of the source signal on either side. They do not contain information that was part of the audio. If you look at the frequency domain of a waveform sampled at 44.1kHz, you will see the original waveform data plus a 44.1kHz component with upper and lower sidebands of that same source signal.


    If you sample at 88kHz or 192kHz, the DC to 20kHz portion of the spectrum is IDENTICAL, i.e. 100% the same. The only thing that is different is that there is a modulated 88kHz or 192kHz component rather than a modulated 44.1kHz component. Last time I checked, 44.1kHz is no more audio than 88kHz or 192kHz. Again, the argument over whether super-audio frequencies are important is a separate discussion.


    No trump taken. You need to pay attention to in-band and out-of-band components. Did you know that digital video only samples at 4x the color subcarrier? And you do not see little stairsteps in the reconstructed waveforms, AND no color information is lost between samples. You also need to think of stored digitized data as the samples needed to represent power and phase. The ability to capture phase accurately is far more dependent on the precision of the conversion and the stability of the sample period (i.e. a stable wordclock).

    I can recommend an excellent text for you:
    "The FFT Fundamentals and Concepts" by Robert Ramirez published by Prentice Hall in 1985 (that's how long I have been working with digital audio and measurement systems).

    And I can recommend a website with some basic reading:
    http://www-personal.engin.umich.edu/~jglettle/section/digitalaudio/intro.html#A

    And some excellent reading from the master of digital audio theory, Stanley Lipshitz published by the AES:
    http://www.aes.org/events/111/workshops/W1.cfm

    I am probably not as smart as any of these guys but I am pretty sure that they know what they are talking about.
     
  11. danv1983

    danv1983 Guest

    Okay, but this is in a "well designed system", and most amateur-level soundcards will vary drastically in conversion/capture performance between settings. Right?

    Additionally, if I convert a stereo recording from 44.1kHz to 96kHz within Logic and then change the operating sample rate of the project accordingly, my UAD-1 plugins sound better, even though the wav contains no signal beyond what 44.1kHz provides. Is this phenomenon simply because my soundcard is not "well designed"? Strictly based on your explanation of 44.1 vs 96, I would assume that if my soundcard were better, Logic running at 44.1 with a 44.1 wav and a few dynamics plugins on top should sound the same as Logic running at 96kHz with the same 44.1kHz wav converted to 96 and the same plugins on top.

    P.S.
    What would you say is the best bang per buck word-clock Steve?
    Could an rme multiface improve considerably with a good external clock?

    Thanks,
     
  12. sdevino

    sdevino Active Member

    Joined:
    Mar 31, 2002
     
  13. mjones4th

    mjones4th Active Member

    Joined:
    Aug 15, 2003
    Hey Steve, you may have just given me a topic for my doctoral thesis, when I get there.

    Everything I learned in my studies, which, by the way, were not audio specific, led me to conclusions different from yours. I don't know if you're right or not, but Little Miss Morgan Taylor is not willing to give me the time to find out.

    My interest has definitely been piqued though, and thanks for that.

    mitz
     
  14. sdevino

    sdevino Active Member

    Joined:
    Mar 31, 2002
    Hi Mitz,
    I am glad I helped. I am reporting not only what I have read, but also what I have proven to myself over the past 20 years doing DSP applications engineering. So I have actually had to apply sampling theory to real commercial products used to test many of the ICs used in modern audio equipment.

    Read Stanley Lipshitz's papers and tutorials when you get a chance. Focus hard on bandwidth and it will all come together.

    Good Luck!
     
  15. falkon2

    falkon2 Well-Known Member

    Joined:
    Mar 17, 2003
    You know, a crazy idea that occurred to me a few days ago: digital plugins perform their algorithms on the samples themselves as mathematical operations, whereas when those samples are played back, we are hearing an analog reconstruction of everything within the sampling rate's bandwidth (assuming a good D/A that does its job well, of course).

    Take a 21kHz signal represented at 44kHz, for example. The sample points representing this signal would appear to be a 22kHz signal oscillating in amplitude according to another sine wave (sin(a + b) + sin(a - b) = 2 sin(a) cos(b); in this example 'a' would be 22kHz and 'b' would be 1kHz, so the 21kHz tone plus its 23kHz image together look like a modulated 22kHz carrier). Granted, when the signal is played back through a good D/A, everything is brickwalled, the (a + b) image is filtered out, and we get our original pure signal of 21kHz.
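    The effect on a level detector is easy to show numerically. A sketch, assuming Python with NumPy (the 8-sample window is a hypothetical stand-in for a very fast compressor detector): the 21 kHz tone has constant amplitude, yet the short-window peak level swings wildly.

```python
import numpy as np

fs, f = 44100, 21000
n = np.arange(fs)                          # one second of samples
x = np.sin(2 * np.pi * f * n / fs)         # constant-amplitude 21 kHz tone

# Crude "fast detector": peak level within consecutive 8-sample windows,
# roughly what a compressor with near-instant attack/release would see
win = 8
levels = np.abs(x[: len(x) // win * win]).reshape(-1, win).max(axis=1)
# levels swings between well below full scale and nearly full scale, even
# though the underlying band-limited signal never changes amplitude
```

    A real D/A reconstruction (or an oversampled detector) would see the constant amplitude; the raw samples do not.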

    That's all fine and dandy, but what about digital processing that relies on the amplitude of a "signal"? These plugins are going to view the samples closer to silence as lower amplitude, rather than as part of a constant-amplitude signal.

    In our example above, this could mean that a compressor with short attack and release times is going to bring down the volume of certain samples and not others, even though ALL the samples represent a sine wave that is constant in volume. This will invariably add lots of weird harmonics and not give the intended result of the compressor.

    Of course, what I gave was quite an extreme example, but the fact remains that when plugins work on SAMPLES rather than SIGNALS, all types of weird artifacts can appear.

    The workaround would be for plugins to oversample the track before working on it, then downsample on the output. However, this is computationally expensive.

    The best solution would be to record at a high sample rate, perform mixing, then downsample. Of course, whether the additional hard disk space, CPU usage, and general headache are worth eliminating something that might be considered "imperceptible" is still open for debate.
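    The oversampling workaround can also be sketched numerically. Assuming Python with NumPy, and using hard clipping as a stand-in nonlinearity (the FFT-based resampler here is an offline convenience, not how real-time plugins resample): clipping a 15 kHz tone at 44.1 kHz folds its 45 kHz third harmonic down to an audible 900 Hz alias, while clipping at 4x oversampling and then downsampling keeps that bin clean.

```python
import numpy as np

def fft_resample(x, n_out):
    """Resample by zero-padding/truncating the spectrum (toy, offline)."""
    X = np.fft.rfft(x)
    Y = np.zeros(n_out // 2 + 1, dtype=complex)
    k = min(len(X), len(Y))
    Y[:k] = X[:k]
    return np.fft.irfft(Y, n_out) * (n_out / len(x))

fs, f, N = 44100, 15000, 44100
x = np.sin(2 * np.pi * f * np.arange(N) / fs)       # one second at 44.1 kHz

base = np.clip(x, -0.5, 0.5)                        # clip directly at 44.1 kHz
up = fft_resample(x, 4 * N)                         # 4x oversample first...
over = fft_resample(np.clip(up, -0.5, 0.5), N)      # ...clip, then downsample

spec_base = np.abs(np.fft.rfft(base))
spec_over = np.abs(np.fft.rfft(over))
# 3rd harmonic of 15 kHz is 45 kHz; at fs = 44.1 kHz it aliases to 900 Hz
alias = 45000 - fs
```

    The direct path shows strong energy at the 900 Hz alias bin; the oversampled path leaves it essentially empty, which is the motivation for the workaround described above.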
     
  16. slowgear

    slowgear Guest

    BTW, here's (IMHO) an excellent book about digital signal processing: 640 pages, downloadable for free. I've spent a couple of nights reading it and still have 250 pages to go, but I don't think I've ever learned so much about anything in such a short time.


    http://www.dspguide.com
     
  17. slacovdael

    slacovdael Guest

    Don't CDs only use 44.1 kHz? Why waste money on 24 bits when the medium you are most likely going to end up ON is not going to use it?
     
  18. ghellquist

    ghellquist Member

    Joined:
    May 25, 2004
    Back to 24 vs 16 bit.

    CD uses 16 bits. This is plenty for most contemporary music. After all, almost everything is compressed to sound good on the radio and has a very limited dynamic range. It also works rather well for "difficult" music, for example acoustic classical, when used carefully.

    I find it useful to think of the number of bits as the available signal-to-noise ratio. To keep it short and rounded, each bit is about 6dB of S/N. So 16 bits is about 96dB, and 24 bits is 144dB. If your music stays at about -10dB all the time, you still have 86dB of S/N; in practice the hiss cannot be heard. If you record music with a dynamic range of 50dB, the noise will be heard in the softest parts of the music.
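    The rule of thumb above can be written down directly. A small sketch in Python (the 1.76 dB term is the textbook correction for a full-scale sine; the rounder 6 dB/bit figure in the post simply drops it):

```python
def snr_db(bits, headroom_db=0.0):
    """Approximate dynamic range of an ideal quantizer, in dB:
    6.02 dB per bit + 1.76 dB for a full-scale sine, minus any
    headroom left below full scale while tracking."""
    return 6.02 * bits + 1.76 - headroom_db

print(snr_db(16))                   # ~98 dB: the CD format
print(snr_db(24))                   # ~146 dB: more than any real converter delivers
print(snr_db(24, headroom_db=15))   # ~131 dB: lazy -15 dB levels still beat 16-bit
```

    Which is exactly the practical point: even with generous headroom, 24-bit tracking keeps the noise floor far below what a 16-bit master needs.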

    The main practical reason I track at 24 bits is that it allows me to be lazy with levels when recording. When I try to record at 16 bits I have to be very careful about levels: too high and it clips, which sounds horrible; too low and the softer parts of the music disappear into the noise floor. With 24 bits, I can simply set levels so that the strongest music is at about -15dB without problems.

    24 bits is generally a bit of overkill when tracking. I would say that 20 bits should be enough for almost all applications outside the esoteric ones. And if you look at real-world sound cards, they do not give you true 24 bit performance once you take into account the analog parts of the card. It is more like 110dB S/N in the very best cards, i.e. around 20 bits.

    Next, inside the DAW you will need more bits for the internal calculations. Errors introduced at every calculation tend to accumulate, so you want each one to be small. This is discussed in a number of other threads.

    Gunnar.
     
  19. Nemesys

    Nemesys Guest

    This should pretty much answer your question, albeit in an extremely roundabout way which may take a very long time to understand; but once you understand it, you'll have an understanding of quantization way beyond what recording engineers know. It's an article written by Robert Gray of Stanford University, who is not only one of the leading experts on quantization research in the world, but also a fantastic writer to boot; he can make even the complicated sound simple.

    Read this:

    Link removed

    This is truly one of the finest 'survey articles' (read: a "layman's article") ever written on the subject of quantization!

    It is 63 pages long, so you should probably print it out. It was originally published in the October 1998 edition of "Transactions on Information Theory", the special anniversary edition marking Claude Shannon's landmark Oct. 1948 paper... so basically it had to be a fantastic article to be included in that special edition.

    The article is pretty broad, so what you're looking for is the coverage of scalar uniform quantization; read the parts on quantization error bounds for various bit rates (aka "wordlengths").

    However, by layman's article... I mean "layman" in terms of a person who is already well versed in electrical engineering and information theory, with knowledge equivalent to a master's degree... but in any case... if you work through this article for a few months, most of it could eventually be understood by somebody whether or not they have an engineering degree.
     
  20. Snatchman

    Snatchman Guest

    Man, and I thought I almost understood. If I wasn't confused before, I sure as hell am now!... :? .... :D
     

