
OK. This is a little weird and I should know this, but I really am not sure. :confused:

Comments

teleharmonic Thu, 11/06/2003 - 09:17

i'm no expert, so i welcome corrections if i'm spreading misinformation here, but all of the theory i have read indicates that 24 bits gives you increased dynamic range... however, whether you are using, or need, all that dynamic range is another issue entirely.

if the track has less than 96dB of dynamic range then having the extra 8 bits of data will not get you anything you didn't have with 16 bits.
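
(for the curious, that 96dB figure just falls out of the math: the dynamic range of an N-bit quantizer is roughly 20*log10(2^N), about 6dB per bit. a quick back-of-envelope check in python, purely illustrative:)

    from math import log10

    # dynamic range of an N-bit quantizer: 20*log10(2**N), roughly 6.02*N dB
    for bits in (16, 20, 24):
        print(bits, "bits ->", round(20 * log10(2 ** bits), 1), "dB")
    # 16 bits -> 96.3 dB, 20 bits -> 120.4 dB, 24 bits -> 144.5 dB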

since i cannot listen to a guitar playing and tell you how many dBs of dynamic range it covers (and i don't want to spend the time to scientifically measure it), how does this knowledge translate practically? i track at 24 bits so that i can keep my digital meters well below 0 and avoid any digital overs. after that, and after i apply compression etc., it may be, and probably is, the case that i don't have more than 96dB of dynamic range.

In the controlled environment of a mixdown i can make sure that, if i am dubbing to a 16-bit destination, as you are, i am not getting digital overs (in a way that could not be predicted during tracking) while still utilizing as much of the 16 bits as i can.

long answer short, for rock/pop music, no... you are probably not really losing anything by dumping down to 16 bits if you are maximizing your levels. classical or jazz might be a different story as they may be more in need of that extra dynamic range.

teleharmonic Thu, 11/06/2003 - 10:03

mitzelplik,

if you have less than 96dB of signal-to-noise dynamic range (AKA 16 bits of dynamic range), the lowest 8 bits would actually contain nothing... correct? so while you may be 'truncating' you would be cutting off... nothing. so nothing would be lost except 8 bits of nothing.

i ask for my own understanding as 'truncating', while that may be the technical term, sounds inherently insidious and bad.

:)

mjones4th Thu, 11/06/2003 - 11:33

teleharmonic,

It's the highest 8 bits that contain nothing in the recording you are using as a reference.

Just imagine looking at the LEDs on a circa-1980s tape deck. As the signal gets louder, the LEDs grow from green to yellow to red at the top. What you lose in truncating is a significant portion of the green. You don't lose the red.

Now what happens is that the quantization effects of this lead to a lot of ugly noise, so you suffer even more if you truncate a quieter signal.

What's really interesting is that dithering is, in a sense, adding a small amount of broadband noise to hide the quantization.
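
Here's a toy numpy sketch of that idea, for anyone who wants to see it in numbers. This is my own illustration, not anybody's product code; the quiet 1kHz tone and the 16-bit step size are arbitrary choices:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    x = 1e-4 * np.sin(2 * np.pi * 1000 * t)      # a very quiet 1 kHz tone

    q = 1.0 / 2 ** 15                            # 16-bit quantization step
    truncated = np.floor(x / q) * q              # plain truncation
    tpdf = (np.random.uniform(-0.5, 0.5, fs)
            + np.random.uniform(-0.5, 0.5, fs)) * q
    dithered = np.floor((x + tpdf) / q) * q      # truncation after TPDF dither

    # truncation error piles up in harmonics of the signal (distortion);
    # dithered error is spread out as benign broadband noise instead
    err_t = np.abs(np.fft.rfft(truncated - x))
    err_d = np.abs(np.fft.rfft(dithered - x))
    print(err_t[3000], err_d[3000])              # 3 kHz harmonic: big vs tiny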

mitz

teleharmonic Fri, 11/07/2003 - 08:47

i think perhaps we are mixing up our analogies somewhat (or just visualizing it differently... which happens a lot, i've found, when imagining sound) but i do understand what you are talking about mitz. by the "lowest" 8 bits i was referring to the quietest 8 bits... which would indeed be contained in the green leds on the meter (the green bits?).

but what i was really wondering was if the quietest 8 bits were in fact silence what damage truncation would actually be doing.

in doing some deeper looking it seems that, as usual, the differences in results you get from truncating and dithering vary depending on the material.

in more dynamic, detailed material it seems the noise from quantization errors could be problematic without dithering (as mitz said).

so, as usual, the answer seems to be... i dunno... do you hear a difference?

the more i learn the more it all seems to come back to that... but learning is such fun.

AudioGaff Fri, 11/07/2003 - 11:45

Originally posted by teleharmonic:
so, as usual, the answer seems to be... i dunno... do you hear a difference?

the more i learn the more it all seems to come back to that...

Now you've learned a valuable lesson. More people could answer their own questions and learn more by taking the time to apply that lesson.

The result of going from 24 to 16 without noise shaping or dither is called truncation distortion. How much audible damage it does depends on the type of music and how the headroom was used going through the 24-bit conversion to begin with. It is a common practice that is done all the time. Use those tracks for the less important and/or less dynamic sources and you should be fine. Chances are that you or anybody else won't ever notice it in the end mix, or that your monitors aren't good enough to reveal it. There are far more critical things to worry about.

KurtFoster Fri, 11/07/2003 - 12:39

I myself prefer analog audio. To my ears, it sounds the best. I am not saying it is the most accurate, just that I prefer its sound. I don’t think anything we have come up with at this point is a “mirror perfect” representation of real world audio.

That being said, I work in digital for many reasons. It is more affordable, for me and my clients. Maintenance is an issue for me. Property tax each year and space to house a big 2” machine, a ½ track and a large format console can add a lot of numbers to the bottom line. The ability to manipulate the audio is in demand. Clients look for it. Random access is a real time saver. The ability to completely recall a mix with a keystroke is phenomenal. And it sounds fine. Not as good as a 2” and a large format console, but good enough.

I have Cubase VST 5.1 and I run it at 24 bit 44.1, through two ADAT AI-3’s for 16 analog and 2 s/pdif insanoutz. I monitor and mix at 16 bits through a Fostex CR220 CDr burner via s/pdif. This CDr has a dithering feature built in and it is connected to a Nakamichi 410 all-discrete stereo pre to a pair of Hafler 3000’s. I can also monitor at 24 bits through the AI-3’s and a Mackie SR24 vlz into the Nakamichi ...

Now I may be fooling myself but I think the digital output of the DAW (s/pdif at 16 bit) sounds waaaay better than listening to 24 bits through the Wackie! I think it’s the Wackie that's gooing up the works. But the point is, even 16 bit can sound very good, if it is done right. In some situations it can even sound better than 24 bits done poorly (through a Wackie or similar).

All that said, I think keeping the sample rate at 44.1 throughout the production process is the best way to avoid sample rate truncation.

Alécio Costa Sat, 11/08/2003 - 21:21

Kurt, didn't you mean sample rate conversion?

One of the things that gets nasty with Pro Tools TDM systems' bounce to disk, when going from 48k/24 to 44.1k/24 and then finally dithering to 44.1k/16 (many people misconvert by going directly from 48k/24 to 44.1k/16), is that the process itself changes your master levels.

I reported this at the DUC in 2002 and a few other guys around the globe described the very same symptoms.

Even if you define a ceiling, let us say -0.1dB, you can end up with some red lights on the final product.

This proves that frequency conversion, at least with the mighty PT, is not 100% accurate.

Now I also work at 44k/24, eliminating one bounce, and avoiding artifacts from a frequency conversion.

Sorry for being quite off topic. :p:

mjones4th Mon, 11/10/2003 - 21:10

Let's simple it up:

An expansion on gaff's statement: the closer you get to the 24-bit maximum, the less truncation will bother you. If you are starting with a super hot kick track, it will be much less noticeable than if you were starting with a classical violin.

The reason?

The quietest parts of a digital recording (like the very end of a long reverb tail) use bits 1-8. Middle-of-the-line signals fall somewhere between bits 9-16. The loud stuff is up there at bits 17-24.

When you truncate from 24 to 16 bits, you are effectively throwing away bits 1-8.

So the violin, which should fall, for the most part, between bits 9-16, will become bits 1-8 of the new 16-bit audio recording. Right down there with the silence. But we know it's not silent, right?

That is the effect of bit-depth truncation.
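
In code terms, using hypothetical 24-bit integer samples just to show the shift:

    # truncating 24 -> 16 bits amounts to dropping the lowest 8 bits
    quiet = 0x0000C3        # a very quiet sample: only the low bits are set
    loud  = 0x7FFFC3        # a loud sample: the top bits carry the signal

    print(quiet >> 8)       # 0 -- the quiet detail is simply gone
    print(loud >> 8)        # 0x7FFF -- the loud part survives intact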

Alecio and Kurt:

Sample rate conversion will exhibit a loss of data on any DAW. I'll explain it by example:

At 48k, you record 48000 samples per second. At the very beginning of that second, you're at sample 1. 1/48000th of a second later, you're at sample 2, 2/48000ths is sample 3 and so on.
Now if you convert to 44.1k, sample 1 is handled, no problem. 1/48000th of a second later, the original 48k recording is presenting sample 2. However your 44.1k file has not reached sample 2. So how does the data get incorporated into the 44.1k destination?

By a process called interpolation.

Sample 1 at 44.1k is identical to sample 1 at 48k. Sample 2 at 44.1k takes samples 2 and 3 at 48k and uses an algorithm to determine what the sound is at 1/44100th of a second, which is somewhere in between 1/48000th and 2/48000ths of a second. In other words, sample 2 of the destination format (44.1k) is interpolated from samples which come before it and after it in time, in the source format (48k).

Interpolation is a necessary evil, we can't convert sample rates without it. The better interpolation algorithms take a look at several source samples before and after the destination sample to determine what the sample will be.

How does this affect us in the real world? Well, just imagine that in your source file (48k) you have a transient. Chances are it happens somewhere in time that is not accounted for in your destination format (44.1k). How does the conversion account for it? By moving that transient either forward or backward in time, and smearing it over two or more samples.
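
Here is a deliberately naive sketch of the idea in Python, using plain linear interpolation. Real converters use band-limited (polyphase/sinc) filters, so treat this as an illustration of the concept, not how any DAW actually does it:

    import numpy as np

    fs_src, fs_dst = 48000, 44100
    t_src = np.arange(fs_src) / fs_src
    x = np.sin(2 * np.pi * 440 * t_src)      # one second of A440 at 48k

    t_dst = np.arange(fs_dst) / fs_dst
    y = np.interp(t_dst, t_src, x)           # each destination sample is
                                             # interpolated from its source
                                             # neighbours in time

A proper band-limited version of the same move is scipy.signal.resample_poly(x, 147, 160), since 44100/48000 reduces to 147/160.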

Of course, this is separate and distinct from bit depth truncation, but the subject came up :p
mitz.

mjones4th Mon, 11/10/2003 - 21:15

Originally posted by Kurt Foster:
I myself prefer analog audio. To my ears, it sounds the best. I am not saying it is the most accurate, just that I prefer its sound. I don’t think anything we have come up with at this point is a “mirror perfect” representation of real world audio.

Nope, no mirrors here, but recording analog is sampling at a sample rate of infinity, much more accurate than digital will ever be. 192k, 384k? Bah.

When you sample a sound at discrete time slots, you lose everything in between those time slots.

mitz

I ODed on theory in undergrad ;)

anonymous Mon, 11/10/2003 - 22:24

Nice thread guys,

If you work at 96kHz and then dither down to 44.1kHz, you are essentially just making things sound worse than if you had just started working at 44.1kHz, am I right? I've noticed a lot of plugins sound better at 96, but I've never A/B'd a source through one at 44.1, against one dithered down from 96 to 44.1, before.

Hey Missilanious!! Westchester representin' - I used to live in Mamaroneck - salutations.
:c:

falkon2 Tue, 11/11/2003 - 06:20

I'd say 96 to 44.1 is a LOT safer than 48 to 44.1 - The ratio of the former is greater than 2. Practically anything captured on 96 that is lost in the conversion to 44.1 would be lost anyway in the A/Ds when recording directly to 44.1.

48 to 44.1, on the other hand, can introduce a whole set of problems. For starters, digital recordings ALWAYS need a rolloff that tapers down to silence by half the sampling rate. 24kHz should be silence at a 48kHz sampling rate, for example. This is to prevent aliased ghost notes from appearing when the A/Ds pick up frequencies higher than the medium can handle.
Recording at 48kHz introduces one rolloff. Converting to 44.1 adds ANOTHER rolloff to lop off anything above 22.05kHz. These rolloffs overlap and can technically screw the treble a bit.

I suppose a really cruddy analogy would be 48->44.1 conversion being akin to dividing three apples between two people, whereas 96->44.1 conversion is like dividing nine apples between two people. In the former situation, one person is more likely to feel cheated than in the latter situation.
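
Those aliased ghost notes are easy to conjure in a few lines of numpy, for anyone curious. A toy illustration with no anti-alias rolloff in the way:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 25000 * t)        # 25 kHz tone, no rolloff applied

    spectrum = np.abs(np.fft.rfft(x))
    print(np.argmax(spectrum))               # 19100: the tone folds down to
                                             # 44100 - 25000 = 19.1 kHz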

anonymous Tue, 11/11/2003 - 12:33

but 88.2 to 44.1 is the best, considering it's a 2-to-1 ratio; with no decimals in the ratio, the conversion math is easier. Then again, 44.1 is the best if you're going to finish at 44.1, because there's no sample rate conversion; but then again your plugins aren't going to be running at higher resolutions either, and it is very evident that at higher sample rates plugins like reverbs sound better than at a lower sample rate. So I would say: why work at 96k if you can work at 88.2k, unless your project is going to end up on DVDs?

And to my comment on the HD BTD fix: I'm not talking about sample rate conversions, because that's going to sound different after the conversion. I'm talking about the summing at the end of the 2-bus in Pro Tools. When using a Mix system, which uses an older TDM mix engine as opposed to the redesigned HD mix engine (TDM1 and TDM2 mix engines respectively), the problem many engineers were complaining about (not really a problem if you think about it) is that a bounce to disk (if the summing was done in PT, and not just bouncing down a two-track) would sound different from what you heard out of your monitors when it was being played straight out of Pro Tools (not drastically), even when the bounce was at the same sample rate and bit depth. Since HD came out I don't see this problem anymore.

And I did a test. I recorded a session at 44.1/24-bit using my HD rig. Without any plugins I bounced that down without any changes to the sample rate or bit depth. That was the bounce off of HD. I took the same session to my school's Mix Plus rig via firewire, loaded the session, again with no plugs or sample rate or bit depth conversions, and bounced that down. Both bounces were done using tweakhead, 24-bit, aiff. I reimported both of those bounces and flipped the phase of one. What I heard was some ambiance, upper mids, top end, a little bit of low end and a little bit of low mids; that is the difference between the two, because anything that is identical cancels out.

Alecio wouldn't see this because, if I'm correct, he uses an 02R to mix; the summing happens at the 02R's mix bus, and then he re-records the 2-track back into Pro Tools or an outboard recorder.
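
For anyone who wants to repeat that null test, here's the gist in Python. The filenames are hypothetical, and soundfile is just one library that reads 24-bit WAV/AIFF:

    import numpy as np
    import soundfile as sf

    a, fs_a = sf.read("bounce_hd.aif")          # hypothetical file names
    b, fs_b = sf.read("bounce_mixplus.aif")
    assert fs_a == fs_b and a.shape == b.shape  # must be sample-aligned

    residual = a - b        # flipping polarity and summing == subtracting
    rms = np.sqrt(np.mean(residual ** 2))
    print("perfect null" if rms == 0 else f"{20 * np.log10(rms):.1f} dBFS")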

teleharmonic Tue, 11/11/2003 - 13:21

Originally posted by mitzelplik:
Nope, no mirrors here, but recording analog is sampling at a sample rate of infinity, much more accurate than digital will ever be. 192k, 384k? Bah.

When you sample a sound at discrete time slots, you lose everything in between those time slots.

This has no bearing on the practical discussion going on here but i think this statement is a little misleading. Analog tape (analog information) is just as limited by bandwidth restrictions as digital information is. If an analog recorder can record up to 45kHz (a good analog recorder at that), then that's what it can do... no more, no less. The same frequency can be captured digitally using a 96kHz sampling rate (with some to spare). That's what it can do, no more, no less. Call it discrete, call it continuous... that bandwidth restriction is the limit of possible information that can be captured.

In addition, the 'accuracy' of analog recording is limited by the amount of noise... as any dynamic changes that happen which are smaller than the size of the noise floor are indistinguishable from the noise. This limits the possible resolution of analog.

THAT being said, the band i am in just recorded bass and drum tracks to reel-to-reel because it sounds friggin fantastic (to us). That was the sound we wanted.
:)

anonymous Tue, 11/11/2003 - 14:26

aye teleharmonic.

but sample rate is a different thing in the digital domain.
mitzelplik is saying that the sample rate of analog gear is infinite. sure, a digital sample rate of 96kHz (half the rate being the possible frequency range) will reach the same frequencies as a good tape rig, but it will still be sampling less frequently than tape.

Alécio Costa Tue, 11/11/2003 - 20:04

HI Missi!
Our brains are connected, bud!
As I was reading your nice long post, I was thinking the same.
Yes, I do mix with the 02R and its stereo out goes to a spare stereo track within the very same PT Mix 5.1.1 session rig.
No fade outs with the 02R.
I would like to add one or two more mix farm cards to my mix+ rig (so reverb plugs could substitute for the 02R/hardware units) and do the direct comparison between doing it inside the box or using semi-direct outs + outboard gear.
Of course printing efx is not a sin.

teleharmonic Wed, 11/12/2003 - 06:01

hi danv1983 :)

I'm not trying to get bogged down in technicalities, and i am certainly no expert on the physics of sound, but the point i was making is that it is misleading to use the term 'sample rate' when referring to magnetic tape... tape and digital are different mechanisms that store information in different ways.

While tape moves continuously, it does not translate into an infinite sample rate. If a device had an infinite sample rate then, by definition, it would be capable of capturing an infinite range of frequencies (or, according to Nyquist and Shannon, just less than half an infinite range of frequencies :) ). A device performs no better than its capabilities allow, regardless of how you visualize its functionality (e.g. visualizing 'choppy' digital samples versus 'smooth' magnetic tape rolling along). When the sound comes out of the speaker (now THAT'S analog) from both a tape system and a digital system, the waves are waves. How GOOD the waves sound is up to you.

I realize i'm getting into semantics here but i know that sometimes, for me, understanding some of the techno-babble actually helps me move past the babble and onto recording. Which is the idea...right :)

As i said before, there is no issue of analog vs digital for me... just sound, beautiful sound.

cheers.

anonymous Wed, 11/12/2003 - 09:36

I believe one of the main things people tend to forget is the whole idea of harmonics. With digital recordings at 44.1 there is definitely a limited amount of that stuff that can be captured. Even if you don't hear these frequencies it bloody-hell does not mean they have no impact on the sound. Additionally, when summing together bunches of 44.1 tracks, the degradation becomes more and more apparent. With higher sample rates (96k) you get lots more harmonics, and adding those tracks together is where the importance really becomes more obvious. A fun trick to try out is to convert all your 44.1 files into much higher ones, then mix them, then compare. I guess a question would be: does bit depth support this same idea? Something I've noticed is that simply adding some noise (more than what the normal 'dither' uses) can yield a smoother sound on mixdown.

In the analog tape world, when sounds are combined there is no number crunching, obviously, so all those frequencies aren't going to come out all chopped-up and shuffled-around sounding. There is also automatic compression and a ton more distortion going on.

teleharmonic Wed, 11/12/2003 - 10:36

Originally posted by Yon:
Something I've noticed is that simply adding some noise (more than what the normal 'dither' uses) can yield a smoother sound on mixdown.

this is an interesting experiment Yon, i've often pondered whether adding noise might help 'unify' tracks but have not actually spent the time to try it out. what kind of noise did you try and what kind of material was it?

anonymous Fri, 11/14/2003 - 09:55

Hey...another former westchester-ite (croton on hudson) representing.

Regarding sample rates... the Nyquist theorem predicts that you DON'T lose anything between samples, so long as your sampling rate is double the highest frequency you are recording. So really, 44.1 is fine because only dogs are listening above 22k.

Spengler

sdevino Fri, 11/14/2003 - 12:04

Wow, this thread is full of almost-true statements.

Spengler is right. A well designed digital system does not lose anything between the samples that is within the bandwidth of the system. It has to do with resonance and the band-limited response of sin(x)/x impulse responses applied to a perfect brickwall filter. So a 44.1kHz sample rate will capture EXACTLY the same data as a 96kHz sample rate if they are both preceded by the same 22kHz low pass filter.

If an analog processor has a 20kHz passband and an ADC has the same 20kHz passband, then both will have the same frequency information; and since a 24-bit converter is capable of 144dB of dynamic range, it will have the same net detail as a well designed analog path, which will be limited to around 110dB of dynamic range (i.e. they are both, in effect, limited to the analog's smaller dynamic range).
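
That claim is easy to sanity-check numerically. A sketch, with scipy's resample_poly standing in for the brickwall filters; the 10kHz tone and the lengths are arbitrary choices of mine:

    import numpy as np
    from scipy.signal import resample_poly

    t96 = np.arange(96000) / 96000
    x96 = np.sin(2 * np.pi * 10000 * t96)    # a 10 kHz tone, already in-band

    x44 = resample_poly(x96, 147, 320)       # 96k -> 44.1k (44100/96000 = 147/320)
    y96 = resample_poly(x44, 320, 147)       # and back up to 96k

    # away from the filter edges the round trip is essentially exact
    err = np.max(np.abs(y96[1000:-1000] - x96[1000:-1000]))
    print(err)                               # tiny: nothing in-band was lost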

Next:
while many claim that frequencies above 20kHz affect how or what you hear in the audio range, no one has been able to prove this to date. That does not mean it is not true; it just means that every attempt to prove it that has been published was inconclusive (including a very detailed experiment whose findings were just presented at the AES last month in NYC).

Next: sample rate conversion does not change the absolute level. The peak levels will vary but the RMS values will stay the same. The peak levels of a waveform sampled at its half-power points will be 1.414x higher when passed through a reconstruction filter. This is correct and represents an undistorted signal. The problem comes when someone normalizes the half-power point to full scale. That is an engineering error and poor gain stage management, not an error in the design of the digital mixer.
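
The half-power-point case is simple to reproduce. A sketch; the 8x resample only approximates a reconstruction filter:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 44100
    n = np.arange(4096)
    # a full-scale fs/4 tone phased so every sample lands on a half-power point
    x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
    print(np.max(np.abs(x)))                 # ~0.707: the peak the samples show

    y = resample_poly(x, 8, 1)               # approximate reconstruction
    print(np.max(np.abs(y)))                 # ~1.0: the true peak, 1.414x higher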

It is also a myth that sample rate conversion from 88.2 to 44.1 is better than 96 to 44.1, or any other combination for that matter. Sample rate conversion is performed by resampling to a very high sample rate and then resampling again at a lower sample rate, or by convolution of a time domain response curve at the target sample rate. They don't just take every other sample.

Kurt was just about the only person on this thread who spoke complete truth. Analog probably is not as accurate as digital, but some people (especially Kurt) like the sound of analog better anyway. Which brings up a good point: despite all the scientific mumbo jumbo I talked about above, none of us have any idea how well any of the manufacturers implement any of this "theory". You really need to trust your ears and experiment. The physics is never as simple as we describe it and it certainly is never implemented as well as it should be.

Steve

mjones4th Fri, 11/14/2003 - 19:49

Originally posted by sdevino:
Spengler is right. A well designed digital system does not lose anything between the samples that is within the bandwidth of the system. It has to do with resonance and the band-limited response of sin(x)/x impulse responses applied to a perfect brickwall filter. So a 44.1kHz sample rate will capture EXACTLY the same data as a 96kHz sample rate if they are both preceded by the same 22kHz low pass filter.

No way.

According to that, if I brickwall lowpass a wave at 500Hz and digitally record it at 2kHz, it will contain the same information as if I digitally recorded it at 44.1kHz. No way.

By resonance I assume you mean the interplay between those discrete samples when they reach our ears?

My ears tell me that my delta66 converters sound better at 96k than at 44.1k. The theory of wave propagation, combined with digital representations of said waves, tells me that the higher the frequency of the digital representation, the closer you get to the original wave.

To simplify matters, I used the analogy that the information between samples is lost. In theory, what really happens is a second of audio is represented by 44100 stairs, or 96000 stairs, etc. Each sample takes 1/44100th of a second to play. In that time the original analog information has done many things (changing intensity, frequency makeup, etc.), but the digital representation of that 1/44100th of a second of analog information is static for that length of time.

So Imagine a smooth sine wave a second long. Now divide it up into 44100 regions. From the beginning of each region to its end, draw a horizontal line capturing the average level of that region. That is what a digital representation looks like. A bunch of stairs going up and down, up and down....

Now if we divide those regions in half to create 88200 regions, then we get closer to the original waveform. we get smaller steps at closer intervals. Divide them in half again.... etc.

But we will ALWAYS have steps. Always. That means the digital waveform, although a very accurate representation of the analog original, is still imprecise and lossy. Furthermore, the higher the sample rate, the closer we get to the theoretical limit, the analog waveform.

Not trying to trump you or anything. It's just that I learned differently. I don't know the math behind it offhand, but I can go upstairs to the study and find it, easily. What I am doing is explaining the gist of the theory, as I learned it.

It is also a myth that sample rate conversion from 88.2 to 44.1 is better than 96 to 44.1, or any other combination for that matter. Sample rate conversion is performed by resampling to a very high sample rate and then resampling again at a lower sample rate, or by convolution of a time domain response curve at the target sample rate. They don't just take every other sample.

You are correct.

There are two ways to convert a sample rate, as you mention. The first is to essentially treat it as if it were analog (target: infinitely high sample rate; best we can do: very high sample rate) and sample it.

The second is what I call interpolation, which is more commonly used in timestretching. Synthesize each sample in the new waveform from those preceding and following it in time in the old waveform. The higher the quality of the interpolation, the more samples are analyzed to synthesize each new sample.

But either way, you still lose information.

I like to speak in English; it helps people to grasp the concept. The math is the easy part.

mitz

Randyman... Fri, 11/14/2003 - 20:28

I agree the digital info will be "stepped" less and less with hi-res sample rates, but you need to look at the D/A's analog output to really compare 2 digital signals as we hear them.

Yes, a 20kHz wave does change slightly within 1/44,100th of a second, but nothing the D/A won't be able to reconstruct up to its Nyquist frequency. A 20kHz sine won't look any different on the output of the D/A at 44.1k or 96k. All the 96k will do is take 4 pictures of the 20k wave instead of 2. All the D/A needs is 2 samples to reconstruct the wave accurately.

But then again, I don't know SHIT! Me beats things for fun! (I'm a drummer)...

Anyway, this is a 16 to 24 bit thread, not a hi-res sample rate thread. Sorry :(

Later :cool:

sdevino Sat, 11/15/2003 - 06:08

Originally posted by mitzelplik:

Originally posted by sdevino:
Spengler is right. A well designed digital system does not lose anything between the samples that is within the bandwidth of the system. It has to do with resonance and the band-limited response of sin(x)/x impulse responses applied to a perfect brickwall filter. So a 44.1kHz sample rate will capture EXACTLY the same data as a 96kHz sample rate if they are both preceded by the same 22kHz low pass filter.

No way.
Yes way. This is basic Nyquist theory and, except for the lack of perfect brickwall filters, absolute fact. I have proven this to myself in many, many DSP-based audio analysers I have built over the years when I worked for Teradyne.

According to that, if I brickwall lowpass a wave at 500Hz and digitally record it at 2kHz, it will contain the same information as if I digitally recorded it at 44.1kHz. No way.

By resonance I assume you mean the interplay between those discrete samples when they reach our ears?

No, I mean the resonance of the electrons passing through a band-limited circuit. The little stair steps do not make it through the reconstruction filter because the steps represent very high frequency information which a band-limited circuit cannot keep up with. The sampled data represents the analog data in terms of power and phase, just stored in a slightly different form. Passing through a brickwall filter causes the original time and phase data to be restored.

My ears tell me that my delta66 converters sound better at 96k than at 44.1k. The theory of wave propagation, combined with digital representations of said waves, tells me that the higher the frequency of the digital representation, the closer you get to the original wave.

The difference you hear has to do with the reconstruction filter implementation, not the sample rate. And Nyquist theory says, and it has been proven many times, that I can perfectly reconstruct a perfect sinewave from 2 samples plus sin(x)/x compensation. So if I sample a 20kHz signal at 40kHz, I will end up with a motley looking stepped waveform close to a square wave. When I pass that back through the reconstruction process I will have exactly the same sinewave I started with, minus sin(x)/x rolloff (although this is typically compensated for in a well designed converter).
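
The two-samples-plus-sin(x)/x claim can be demonstrated directly with Whittaker-Shannon interpolation. A brute-force sketch; real converters do this with filters, and the finite window here limits accuracy near the edges:

    import numpy as np

    fs = 44100
    n = np.arange(256)
    x = np.sin(2 * np.pi * 20000 * n / fs)   # near Nyquist: samples look motley

    t = np.arange(2560) / (10 * fs)          # a 10x finer time grid
    y = np.sum(x * np.sinc(t[:, None] * fs - n), axis=1)   # sum of sinc pulses

    ideal = np.sin(2 * np.pi * 20000 * t)
    print(np.max(np.abs(y - ideal)[800:-800]))  # small, away from the edges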

To simplify matters, I used the analogy that the information between samples is lost. In theory, what really happens is a second of audio is represented by 44100 stairs, or 96000 stairs, etc. Each sample takes 1/44100th of a second to play. In that time the original analog information has done many things (changing intensity, frequency makeup, etc.), but the digital representation of that 1/44100th of a second of analog information is static for that length of time.

This is a common misconception. Anything that happens between the little 1/44,100th-of-a-second steps is not audio, so NO audio is lost. The argument over whether or not these higher frequencies are detectable is a separate discussion. It is also an area where there has been no successful attempt to prove that it matters or does not matter.

So Imagine a smooth sine wave a second long. Now divide it up into 44100 regions. From the beginning of each region to its end, draw a horizontal line capturing the average level of that region. That is what a digital representation looks like. A bunch of stairs going up and down, up and down....

Now if we divide those regions in half to create 88200 regions, then we get closer to the original waveform. we get smaller steps at closer intervals. Divide them in half again.... etc.

But we will ALWAYS have steps. Always. That means the digital waveform, although a very accurate representation of the analog original, is still imprecise and lossy. Furthermore, the higher the sample rate, the closer we get to the theoretical limit, the analog waveform.

All your descriptions are correct, except for the lossy part. It is only lossy if you conclude that infinite bandwidth is needed for audio reproduction. Those little stair steps represent the sample frequency with modulation bands on either side at the source signal's frequency. They do not contain information that was part of the audio. If you look at the frequency domain of a waveform sampled at 44.1kHz you will see the original waveform data and a 44.1kHz signal with upper and lower sidebands of the same source signal.

If you sample at 88kHz or 192kHz, the DC to 20kHz portion of the spectrum is IDENTICAL, i.e. 100% the same. The only thing that is different is there is a modulated 88kHz or 192kHz component rather than a modulated 44.1kHz component. Last time I checked, 44.1kHz is no more audio than 88kHz or 192kHz. Again, the argument over whether super audio is important is a separate discussion.

Not trying to trump you or anything. It's just that I learned differently. I don't know the math behind it offhand, but I can go upstairs to the study and find it, easily. What I am doing is explaining the gist of the theory, as I learned it.

No trump taken. You need to pay attention to in-band and out-of-band components. Did you know that digital video only samples at 4x the color subcarrier? Yet you do not see little stairsteps in the reconstructed waveforms AND no color information is lost between samples. You also need to think of stored digitized data as samples needed to represent power and phase. The ability to capture phase accurately is far more dependent on the precision of the conversion and the stability of the sample period (i.e. a stable wordclock).

I can recommend an excellent text for you:
"The FFT Fundamentals and Concepts" by Robert Ramirez published by Prentice Hall in 1985 (that's how long I have been working with digital audio and measurement systems).

And I can recommend a website with some basic reading:
A" rel="nofollow">http://www-personal.engin.umich.edu/~jglettle/section/digitalaudio/intro.htmlA

And some excellent reading from the master of digital audio theory, Stanley Lipshitz published by the AES:
http://www.aes.org/events/111/workshops/W1.cfm

I am probably not as smart as any of these guys but I am pretty sure that they know what they are talking about.

anonymous Sat, 11/15/2003 - 07:40

Originally posted by sdevino:

Spengler is right. A well designed digital system does not lose anything between the samples that is within the bandwidth of the system. It has to do with resonance and the band-limited response of sin(x)/x impulse responses applied to a perfect brickwall filter. So a 44.1kHz sample rate will capture EXACTLY the same data as a 96kHz sample rate if they are both preceded by the same 22kHz low pass filter.

Okay, but this is in a "well designed system", and most amateur level soundcards will vary drastically in data converting/capturing performance between settings. Right?

Additionally, if I convert a stereo recording from 44.1kHz to 96kHz within Logic and then change the operating sample rate of the project accordingly, my UAD-1 plugins sound better even though there aren't any signals in the wav higher than what 44.1kHz provides. Is this phenomenon simply because my soundcard is not "well designed"? Strictly based on your explanation of 44.1 versus 96, I would assume that if my soundcard were better, Logic running at 44.1 with a 44.1 wav and a few dynamics plugins on top should sound the same as Logic running at 96kHz with the same 44.1kHz wav now converted to 96, with the same plugins on top.

P.S.
What would you say is the best bang per buck word-clock Steve?
Could an rme multiface improve considerably with a good external clock?

Thanks,

sdevino Sun, 11/16/2003 - 06:11

Originally posted by danv1983:
Okay, but this is in a "well designed system", and most amateur level soundcards will vary drastically in data converting/capturing performance between settings. Right?

Some are better than others. You need to listen and decide for yourself. There have been consumer products that were better than some pro products. It depends a lot on what you are trying to do. Pro products tend to come in 2 flavors: really versatile (i.e. lots of headroom, durable construction, high quality power supply, etc.) or "special application", like a Fairchild compressor. It has a great sound for certain applications but is definitely the wrong sound in others.


Additionally, if I convert a stereo recording from 44.1kHz to 96kHz within Logic and then change the operating sample rate of the project accordingly, my UAD-1 plugins sound better even though there aren't any signals in the wav higher than what 44.1kHz provides. Is this phenomenon simply because my soundcard is not "well designed"? Strictly based on your explanation of 44.1 versus 96, I would assume that if my soundcard were better, Logic running at 44.1 with a 44.1 wav and a few dynamics plugins on top should sound the same as Logic running at 96kHz with the same 44.1kHz wav now converted to 96, with the same plugins on top.

You are dealing with a lot of variables here. I would suspect the plugins do better at higher sample rates. It IS possible that your card has a better antialias filter for 96 than for 44, but I really have no way to know that. It could be as simple as a crosstalk problem on the card or with something else in the computer.


P.S.
What would you say is the best bang per buck word-clock Steve?
Could an rme multiface improve considerably with a good external clock?

RME makes some pretty good stuff. If the RME word clock is not "phase locked" (uses a PLL), then you can probably improve its performance with an external word clock. Newer converter products tend to have much better wordclock performance than they did 2 or 3 years ago.

Steve

mjones4th Thu, 11/20/2003 - 19:22

Hey Steve, you may have just given me a topic for my doctoral thesis, when I get there.

Everything I learned in my studies, which, by the way, were not audio specific, led me to conclusions different from yours. I don't know if you're right or not, but Little Miss Morgan Taylor is not willing to give me the time to find out.

My interest has definitely been piqued tho, and thanks for that.

mitz

sdevino Thu, 11/20/2003 - 20:04

Hi Mitz,
I am glad I helped. I am reporting not only what I have read, but also what I have proven to myself over the past 20 years doing DSP applications engineering. So I have actually had to apply sampling theory to real commercial products used to test many of the ICs used in modern audio equipment.

Read Stanley Lipshitz's papers and tutorials when you get a chance. Focus hard on bandwidth and it will all come together.

Good Luck!

falkon2 Sun, 11/23/2003 - 18:02

You know, a crazy idea that occurred to me a few days ago is the fact that digital plugins perform their algorithms upon the samples themselves as mathematical equations, whereas when these samples are played back, we are hearing an analog reconstruction of everything within the sampling rate's bandwidth (assuming a good D/A that does its job well, of course).

Take a 21kHz signal represented at 44kHz for example. The sampling points representing this signal would appear to be a 22kHz signal oscillating in amplitude according to another sine wave: sin(a+b) + sin(a-b) = 2·sin(a)·cos(b), where in this example 'a' would be 22kHz and 'b' would be 1kHz, so the 21kHz signal (a-b) plus its 23kHz image (a+b) together look like a 22kHz carrier modulated at 1kHz. Granted, when the signal is played back through a good D/A, everything is brickwalled, the (a+b) image is filtered out, and we get our original pure signal of 21kHz.
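
(A quick numeric check of that identity, for the skeptical; just numpy, nothing special:)

    import numpy as np

    t = np.linspace(0.0, 0.01, 2000)
    a, b = 22000.0, 1000.0
    lhs = np.sin(2 * np.pi * (a + b) * t) + np.sin(2 * np.pi * (a - b) * t)
    rhs = 2 * np.sin(2 * np.pi * a * t) * np.cos(2 * np.pi * b * t)
    print(np.max(np.abs(lhs - rhs)))         # ~0, up to float rounding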

That's all fine and dandy, but what about digital processing that relies on the amplitude of a "signal"? These plugins are going to view the samples closer to silence as lower amplitude, rather than as part of a constant-amplitude signal.

In our example above, this could mean that a compressor with low attack and release times is going to bring down the volume of certain samples and not others, even though ALL the samples represent a sine wave that is constant in volume. This will just invariably add in lots of weird harmonics and not give the intended result of the compressor.

Of course, what I gave was quite an extreme example, but the fact remains that when plugins work on SAMPLES rather than SIGNALS, all types of weird artifacts can appear.

The workaround to that would be if plugins oversampled the track before working on it, then downsampled on the output. However, this is highly inefficient.
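
A sketch of that workaround, with a naive per-sample hard clipper standing in for "a plugin". This is my own toy example, not how any real plugin is implemented:

    import numpy as np
    from scipy.signal import resample_poly

    def plugin(x, ceiling=0.5):
        return np.clip(x, -ceiling, ceiling)   # operates sample-by-sample

    fs = 44100
    t = np.arange(fs) / fs
    x = 0.9 * np.sin(2 * np.pi * 1000 * t)

    direct = plugin(x)                                    # process at 1x
    oversampled = resample_poly(plugin(resample_poly(x, 4, 1)), 1, 4)

    # at 4x the clipper's harmonics are generated with room to spare above the
    # audible band and get filtered out on the way back down, instead of
    # folding back in as aliases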

The best solution would be to record at high sample rate, perform mixing, then downsample. Of course, whether the additional hard disk space, CPU usage, and general headache is worth eliminating something that might be considered "imperceptible" is still open for debate.

ghellquist Sun, 01/09/2005 - 22:37

Back to 24 vs 16 bit.

CD uses 16 bits. This is plenty for most contemporary music. After all, almost everything is compressed to sound good on the radio and has a very limited dynamic range. It also works rather well for "difficult" music, for example classical acoustic, when used carefully.

I find it useful to consider the number of bits as the available signal-to-noise ratio. To keep it short and rounded, each bit is about 6dB of S/N. So 16 bits is about 96dB, 24 bits is 144dB. If your music stays at about -10dB all the time you still have 86dB of S/N; in practice the hiss cannot be heard. If you record music with a dynamic content of 50dB, the noise will be heard in the softest parts of the music.

The main practical reason I track at 24 bits is that it allows me to be lazy with levels when recording. When I try to record at 16 bits I have to be very careful about the levels: too high and it clips, which sounds horrible; too low and the softer parts of the music disappear into the noise floor. With 24 bits, I can simply set levels so that the strongest music is at about -15dB without problems.
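
The arithmetic behind that habit, as a quick illustrative calculation using the rounded 6dB-per-bit rule from above:

    # effective S/N after leaving recording headroom: about 6.02*bits - headroom
    def effective_sn(bits, headroom_db):
        return 6.02 * bits - headroom_db

    print(effective_sn(16, 15))   # ~81 dB: getting tight for 50dB-dynamic music
    print(effective_sn(24, 15))   # ~129 dB: the headroom costs nothing audible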

24 bits generally is a bit of overkill when tracking. I would say that 20 bits should be enough for almost all applications outside the esoteric ones. And if you look at real world sound cards, they do not give you true 24-bit performance when you take into account the internal analog parts of the card. It is more like 110dB S/N in the very best cards, i.e. around 20 bits.

Next thing is that inside the DAW you will need more bits for the internal calculations. Errors introduced at every calculation tend to accumulate, so you want each one to be small. This is discussed in a number of other threads.

Gunnar.

anonymous Mon, 01/10/2005 - 08:08

Mundox wrote: OK. This is a little weird and I should know this, but I really am not sure. :confused:

This should pretty much answer your question, albeit in an extremely roundabout way which may take a very long time to understand; but once you understand it, you'll have an understanding of quantization way beyond what recording engineers know about. It's an article written by Robert Gray of Stanford University, who is not only one of the leading experts on quantization research in the world, but also a fantastic writer to boot; he can make even the complicated sound simple.

Read this:

Link removed

This is truly one of the finest 'survey articles' (read: a layman's article) ever written on the subject of quantization!

It is 63 pages long so you should probably print it out. It was originally published in the October 1998 edition of the IEEE "Transactions on Information Theory", which was the special anniversary edition for Claude Shannon's landmark Oct. 1948 paper... so basically it had to be a fantastic article to be included in that special edition.

The article is pretty broad, so what you're looking for is the coverage of scalar uniform quantization; read the parts on quantization error bounds for various bitrates (aka "wordlengths").

However, by layman's article... I mean "layman" in terms of a person who is already well versed in electrical engineering and information theory, with knowledge equivalent to a master's degree... but in any case, if you work through this article for a few months, most of it can eventually be understood by somebody whether or not they have an engineering degree.

x
