
I do the mastering of my home projects in WaveLab. How are the converters in it? Should I record in 48k and dither down at the end, or should I stick to 44.1?

Comments

JoeH Mon, 11/08/2004 - 14:02

WOW; what an amazing round of facts, opinions, etc. and no one got ugly about it! ;-)

Seriously, I'm highly impressed & happy to read what has been posted on this subject here, but forgive me if I go back to the original question....(48 vs. 44?)

I have personally found this to be more than usable for my own CD projects: record multitrack at 24/44 (or 32-bit float/44), and mix to 24/44 for the final two-track edits & pre-mastering. Then, and ONLY then, do I dither down to a 16/44 CD for myself (in-house) or the replication facility. (I avoid all SRC this way, and the dithering happens only once - at the last step. I use Samplitude/Sequoia for this, from the moment it's in the computer to the final mastering/CD burn.)

Most importantly, I agree with Cucco that the difference between 48 and 44 (at least in terms of making the final CD) is negligible, and not worth the extra hassle and SRC. (At least, I THINK that's what he said, and I THINK SRC is what touched off the firestorm of math-related posts thereafter.) Personally, I have NEVER heard an appreciable difference between 44 and 48; and when all things are TRULY set up equally, there is, IMHO, no real-world difference.

Regardless, it was a great round of posts, guys. Thanks for the info!

The world of 48K audio-for-video, of course, is another story, and best left to another topic. Same for DVD-A, eh?

mark4man Sat, 11/27/2004 - 09:28

Sork,

I just (after reading this thread) beat the living hell out of my PC tower...& WaveLab is still functioning. DRAT !!! (my hard drives sound kind of funky now, tho...a sort of "grinding" sound.)

my 2 cents?:

1) Try the ReSample 192 plug (@ "High" quality.)
2) No good? Recook the mix. But some of the suggestions here are based on real-time recapture. That won't work for you if your AI (audio interface) can't handle multiple SRs. What does the gear chain look like?

Ed,

When the brains at MWB agree that SRC increases wordlength during the calcs...are they talking about software or outboard (or both)? It's all DSP (in both instances, correct)? Isn't that the essence? My money's ridin' on Diggins & Dempsey.

BTW - Nice guitar stuff !!! I'm treating myself to an E.L. CD for Christmas (only because, with my relatives, it would take too long to explain how/where to obtain one for a wish list, & all that. Some of 'em are still swingin' the banjo off the bridge, so to speak.)

Who provides your web hosting? My *.ra files take forever to load...& the support children @ my hosting service keep telling me everything is just fine.

Good thread.

mark4man
http://www.moonjams.com

BTW (Ed) - No "self-promotion" intended here, but I think you may enjoy the lead on "Visitor #3" @ the above url.

anonymous Sat, 11/27/2004 - 12:25

Hi Mark,
Thanks for your support. I hope you enjoy whichever CD you get.

I have a quartet, a duet (acoustic guitar & jazz vocals), & a solo acoustic CD on deck, & a new CD I'm going to record (surf/jazz) in a month or so. Hopefully I can get it together & put them on my site soon.

Your solo rocks... nice jam.

To answer your question:
I think they mean both.
Ed

ghellquist Tue, 11/30/2004 - 10:08

Just to confuse matters even worse, there is a simplified way to look at dithering.

The idea is to look at S/N, that is, the signal-to-noise ratio. The general formula is
S/N ~ 6.02 x (number of bits) + 1.76 dB.
(The 1.76 dB term comes from sampling theory; I am quoting it from memory.)
This assumes that no dithering is done.

So, in theory, a 16-bit signal has about 98 dB S/N.

When you dither you add "noise", albeit a very specific kind of noise that the ear likes. When you use a simple triangular dither, the formula changes into something like
S/N ~ 6.02 x (number of bits) - 3 dB
(note the minus; again, the 3 dB figure is from memory).

So in effect, a dithered 16-bit signal has about 93 dB S/N. But the important thing is that this dithering noise "pushes" artifacts away from where the ear hears things very well, to other areas where the ear (actually the brain) is less sensitive. The ear will hear a more pleasing signal, although from a pure measurement point of view it will have more noise. Go figure.

From this point of view, there is no real wish from anyone to dither several times as it degrades S/N.

Quite another thing is that if you write an SRC algorithm, you would probably go up a bit in internal wordlength inside the calculations. If, for example, you start with a 16-bit signal and then do a lot of mathematical operations on that signal, you will introduce a number of rounding errors. These errors can be "pushed" outside the 16 bits of signal if you internally use a longer wordlength, say 24 or even 32 bits. An alternative is to use floating point, but you will still need to be careful with the rounding errors.

After all the internal operations, I would not expect to do any dithering. After all, once you have a clean signal, why add noise? The only case where you would want to add noise is as a very last step to help the ear hear a more pleasing signal.

And at the end of the day, the only judge is the ears. If it sounds good to you, it is good. If it sounds bad to you, well, then it is bad.

Gunnar
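A minimal sketch of that "dither only as the very last step" idea - requantizing a float signal to 16 bits with triangular (TPDF) dither and checking the roughly 93 dB figure. This is illustrative Python only, not anything WaveLab or Samplitude actually does internally:

import numpy as np

def to_16bit_tpdf(x, seed=0):
    """Quantize float samples in [-1, 1) to 16 bits with triangular (TPDF) dither."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0                                   # one 16-bit step
    # TPDF = sum of two uniform noises, spanning +/- 1 LSB peak
    d = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * lsb
    return np.clip(np.round((x + d) / lsb), -32768, 32767).astype(np.int16)

# sanity-check the ~93 dB figure on a near-full-scale 997 Hz tone
t = np.arange(96000) / 48000.0
x = 0.99 * np.sin(2 * np.pi * 997.0 * t)
err = to_16bit_tpdf(x) / 32768.0 - x                      # total noise = dither + quantization error
print(20 * np.log10(np.std(x) / np.std(err)))             # comes out around 93 dB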

Cucco Tue, 11/30/2004 - 12:21

Okay Gunnar, now you're just trying to cause trouble :wink: (just kidding)

Your calculations are more or less dead on. Of course, the real question is - have you (or me, or anyone for that matter) heard a commercial release recording that achieves a 96 dB dynamic range? Even some of the finest classical recordings reach levels of maybe 60-70 or so dB dynamic range.

However, bit depth doesn't just equal S/N ratio. It also allows smoother transitions between individual volume levels (as converted from analog voltage). But bear in mind that just as bit depth has nothing to do with frequency, sampling rate has nothing to do with amplitude. Think of a standard bar graph where x = time (sampling) and y = amplitude (bit depth). Most quality converters will employ both SRC and bit reduction (dithering), but will do them in discrete stages.

Thanks for the comments...
Jeremy

ghellquist Tue, 11/30/2004 - 12:39

>>Of course, the real question is - have you (or me, or anyone for that matter) heard a commercial release recording that achieves a 96 dB dynamic range? Even some of the finest classical recordings reach levels of maybe 60-70 or so dB dynamic range.

Me? Never. When I buy classical music, often enough I copy it into Samplitude, compress out quite a bit of dynamic range, and burn a new CD. Sounds much better in the car.

This might even be a business idea: why not make dual-layer DVD/CDs with a "car version" on the CD layer and a home theater version on the DVD layer? I'm very pragmatic when it comes to sound and music.

When I record, I use 24-bit because I am lazy. Set the levels, start the recording, and come back when it's finished. This works on a symphony orchestra in 24-bit, not quite so in 16-bit, in my experience. (Generally I play the trombone part; that is why I go away.)

Happy greetings

Gunnar

anonymous Tue, 11/30/2004 - 16:30

ghellquist wrote:
But the important thing is that this dithering noise "pushes" artifacts away from where the ear hears things very well, to other areas where the ear (actually the brain) is less sensitive. The ear will hear a more pleasing signal, although from a pure measurement point of view it will have more noise. Go figure.

Gunnar

That sounds like you're describing noise shaping. Most dithering algorithms offer noise shaping too.
Ed

Michael Fossenkemper Tue, 11/30/2004 - 22:40

If you are just capturing, then 44.1, 48, or 96k can all sound good, but if you are processing, then it's a different story - well, kind of. I just mastered a project today and we did a test between 44.1, 48, 96k, and 192k, capturing from 1/4" tape. I used the Apogee Rosetta 200. 44.1 sounded good but not as open as 48k. 96k sounded better and effortless compared to 44.1; 192k sounded about the same as 96k. But with processing added into the equation, 96k sounded much better than 44.1 and 48k. I couldn't process 192 without an SRC, so I left that out of the equation. Even with SRC at the end, 96k sounded far better than 44.1 or 48k.

As far as dynamic range goes, I wouldn't enjoy a mix with 96 dB of dynamic range, at least not anywhere outside my room. Even there I'd be hard-pressed to hear the low parts at a reasonable level.

anonymous Wed, 12/01/2004 - 13:05

I pulled this off of Jay Frigoletto's site

http://www.promastering.com/pages/techtalk_mac/tt-5_mac.html#2

...the difference in the original use of the term "noise shaping" vs. the more recent, and now quite common, use of the term, which perhaps is better described as psychoacoustically optimized dither - or for short, noise-shaped dither. The original use of the term (apart from the dry textbook definition: "a circuit which subtracts out the average noise value of a signal to increase the signal to noise ratio of the system") was usually in reference to oversampling. As you increase the rate of oversampling, you can spread the spectrum of noise (quantization error) out to higher and higher frequencies, so that much of the noise lies above the 20-20K audio band and is therefore inaudible. With a 96 kHz sample rate, you can similarly spread your dither out, thus having much of the noise above the audible range. Add to that the newer use of the term, and you can apply what is essentially an EQ curve to the dither to distribute even more of it into areas where it is less audible (as per psychoacoustic principles), and you can squeeze quite a lot of performance out of a system - more theoretical performance, in fact, than can be matched by current real-world converter design. In the professional world this is nice because we can use a shorter wordlength (say 18 or 20 bits instead of 24) while preserving as much resolution as current D/A converters can deliver, thus increasing available recording time on the media and lowering data transfer requirements, possibly allowing more channels or other information to also be transmitted (e.g. text, graphics, etc.).

Ed
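For anyone curious how the "EQ curve on the dither" idea works mechanically, here is a bare-bones first-order error-feedback shaper in Python - a toy sketch only, not UV22, POW-r, or any other commercial curve:

import numpy as np

def shaped_to_16bit(x, seed=0):
    """Requantize float samples to 16 bits with TPDF dither and first-order noise shaping."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    out = np.empty(len(x), dtype=np.int16)
    fb = 0.0                                  # previous sample's requantization error
    for n, s in enumerate(x):
        target = s - fb                       # error feedback: noise gets shaped by (1 - z^-1), a first-order highpass
        d = (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * lsb
        q = int(np.clip(np.round((target + d) / lsb), -32768, 32767))
        out[n] = q
        fb = q * lsb - target                 # error to push into the next sample
    return out

Shaping doesn't remove the noise - it redistributes it (and in this simple form slightly raises the total), tilting it toward the top octaves where hearing is least sensitive.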

ghellquist Wed, 12/01/2004 - 15:25

Ed,
You pulled out a text published in 1999. That is more than five years ago, and I would not even use my five-year-old computer for surfing the net. A lot has happened since then as far as computing goes.

All the high-end AD and DA converters I have seen now use quite heavy oversampling internally. My not-very-"audiophile" MOTU 828mkII uses 64x oversampling for AD and 128x for DA inside the chip.

So what relevance does your selection of text have to anything discussed here? Am I missing something obvious?

And, yes, as I read Michael's post: use your ears. If it sounds good, it is good.

Gunnar

anonymous Wed, 12/01/2004 - 19:03

ghellquist wrote: So what relevance does your selection of text have to anything discussed here? Am I missing something obvious?

Gunnar

you said "dithering noise "pushes" artifacts away from where the ears hears things very well"

noise shaping does this not dithering. some well known mastering engineers still like flat dither opposed to noise shaped dither.
Plus, jay told me in a past thread (that i can't find)that noise shaping has been arround for decades for broadcast.

Does anyone know more about this?
Ed

Michael Fossenkemper Wed, 12/01/2004 - 23:05

I'm a little leery of all this noise shaping. I find that extreme shaped noise alters the material - UV22, for instance, or, as a good example, the Cranesong analog dither. If you use these two kinds of shaped dither, you can hear the material change very clearly. Whether this is good or bad, I guess, depends on your taste. I personally want my masters to change as little as possible. UV22 has been around for a long time, more than 10 years that I can remember, so a paper from 1999 is not that old when discussing the theory of noise shaping. Plus, I find that the level of dither is well below the typical noise floor of mixes. Hell, one tube mic pre in a mix delivers more noise than the worst dither. I have a selection of dithers that I always try on every master, just to see what each is going to do. I also have a spectrum meter calibrated specifically for dither so I can see the shape of the dither I use. UV22, Cranesong, and POW-r 3 have some radical shapes to them. I tend to go towards the flatter-shaped noise.

ghellquist Thu, 12/02/2004 - 12:17

mark4man wrote: Gunnar,

Why don't you use your frickin' shoes & kick yourself in the butt.

mark4man

Yep, mark4man. I'll do that if you explain to me why my asking a question should evoke this response from you. I guess part of it may be that I have an old education in this area, mostly gone out the window, which I am trying hard to update. Another reason might be language barriers, as English is a foreign language to me.

So if for any reason I did something wrong, please forgive me.

Gunnar.

ghellquist Thu, 12/02/2004 - 12:23

Ed Littman wrote: ghellquist wrote: So what relevance does your selection of text have to anything discussed here? Am I missing something obvious?

Gunnar

you said "dithering noise "pushes" artifacts away from where the ears hears things very well"

noise shaping does this not dithering. some well known mastering engineers still like flat dither opposed to noise shaped dither.
Plus, jay told me in a past thread (that i can't find)that noise shaping has been arround for decades for broadcast.

Does anyone know more about this?
Ed

Ed, my apologies to you. I got stuck on the last part about 18-bit converters. I have read it once more, and it now makes sense to me. Guess I've been a bit slow.

Gunnar

anonymous Fri, 12/03/2004 - 15:59

What if then, I did my recordings at 48k, and recorded into wavelab with a digital cable (real time), with wavelab set at 44.1?

Getting back to this...I don't know exactly how WaveLab specifically would handle this, but I can think of one of three things that might happen. Either WaveLab will set itself to 48kHz when it detects the 48kHz signal, or it will record it thinking it's a 44.1kHz signal, so when it plays it back the pitch will be off, or it won't record it at all or will record it with a bunch of nasty artifacts.

However, by applying a shaped noise, similar to what is going on within the program material (and often derived from it), you fool the human ear into thinking it hears smooth transitions.

You wouldn't want to use noise derived from the program material. It should be random. You don't want it to be correlated to anything.

When recording at a sample rate that is a multiple of your final destination frequency, the sample rate conversion that takes place is simply a removal of evenly spaced samples. For example, when recording at 88.2 kHz and down-sampling to 44.1 kHz, you simply need to remove every other sample.

You don't want to do it that way. You'll get aliasing artifacts. You would need to filter out everything above the Nyquist frequency first, then remove every other sample. But most of today's sample rate converters do upconvert to a higher frequency first, then filter, then downsample.

You are referring to a converter that will multiply the sample rate so that, whatever the source and destination sample rates are, both divide evenly into some insanely large number, and then it removes the remaining samples. If you do the math on this, you will find that this is problematic at best: (44100 x 48000) / 300 = 7,056,000 samples - this is the lowest rate of which these two sampling frequencies are both factors.

It may seem like an insanely large number to us - up in the MHz range - but it's really no problem for a converter, and is how most sample rate converters and A/D-D/A converters work these days.
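A rough sketch of that rational-ratio approach, using scipy purely as an illustration (real converters use their own polyphase filters): 48k to 44.1k reduces to the ratio 147:160, so the shared intermediate rate is 48,000 x 147 = 44,100 x 160 = 7,056,000 Hz, and the 88.2k-to-44.1k case is just the 1:2 special case - filter, then keep every other sample.

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
g = np.gcd(fs_in, fs_out)                  # 300
up, down = fs_out // g, fs_in // g         # 147 and 160 -> intermediate rate 7,056,000 Hz

t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000.0 * t)         # one second of a 1 kHz tone at 48k

y = resample_poly(x, up, down)             # upsample by 147, lowpass, keep every 160th sample
z = resample_poly(x, 1, 2)                 # the synchronous 2:1 case (filter, then drop every other sample)
print(len(x), len(y), len(z))              # 48000, 44100, 24000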

After all the internal operations, I would not expect to do any dithering. After all, once you have a clean signal, why add noise? The only case where you would want to add noise is as a very last step to help the ear hear a more pleasing signal.

Actually, you need to dither any time you go from one bit depth down to another. There are certain types of dither you don't want to do more than once, but you do need to dither before you truncate (and that point should be made as well...you're not dithering instead of truncating, you're dithering before you truncate, so the quantization noise caused by truncation is random rather than correlated to your signal, which would make it distortion).

However, bit depth doesn't just equal S/N ratio. It also allows smoother transitions between individual volume levels (as converted from analog voltage).

Although it doesn't just equal S/N ratio, that's all it really translates to. You do have many more "steps", or smaller "transitions", at higher bit depths, but the only way that manifests itself is by pushing the noise floor down lower. All of that extra "resolution" only describes lower-level signals more accurately.

-Duardo
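A tiny illustration of that last point, with the usual caveats (synthetic tone, plain rounding, no dither): quantizing the same -60 dBFS signal at 16 and at 24 bits changes nothing except where the error floor sits.

import numpy as np

t = np.arange(48000) / 48000.0
x = 10 ** (-60 / 20) * np.sin(2 * np.pi * 997.0 * t)      # a -60 dBFS tone

def quant_error_dbfs(x, bits):
    step = 2.0 ** -(bits - 1)                              # LSB for a signed full scale of +/-1
    err = np.round(x / step) * step - x
    return 20 * np.log10(np.sqrt(np.mean(err ** 2)))       # RMS error relative to full scale

print(quant_error_dbfs(x, 16), quant_error_dbfs(x, 24))    # roughly -101 dBFS vs. -149 dBFS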

mark4man Fri, 12/10/2004 - 09:53

Gunnar,

Yep, mark4man. I'll do that if you explain to me why my asking a question should evoke this response from you.

Someone offers up a genuine response to an inquiry of yours... & you question their findings, without so much as a thank you. You impress me (&... I'm using the word "impress" in a different context here, mind you) as someone who holds a bit of knowledge & somehow thinks their expertise is above everyone else's... or can't be disagreed with.

Look at your very first response to one of my posts. You had a frickin' coronary because you thought I was misrepresenting what occurs during SRC. In fact, I was merely disagreeing with your statement (that, theoretically, there is no difference between 88.2-to-44.1 SRC & 96-to-44.1 SRC) by offering a simple explanation of the basic differences between synchronous & asynchronous SRC... "real" differences. And this was to another member who was trying to understand effective methods of use.

All of a sudden... I'm in league with the ominous spread of "internet myths".

Lighten up. We're all learnin' 'round here, bub. None of us know it all...yourself included.

mark4man

BTW - Especially me. I'm known in some elite circles as the "digital dummy" :lol:

mark4man Sat, 12/11/2004 - 06:59

Sork (& Duardo),

Q: What if then, I did my recordings at 48k, and recorded into wavelab with a digital cable (real time), with wavelab set at 44.1?

A: I don't know exactly how WaveLab specifically would handle this, but I can think of one of three things that might happen. Either WaveLab will set itself to 48kHz when it detects the 48kHz signal, or it will record it thinking it's a 44.1kHz signal, so when it plays it back the pitch will be off, or it won't record it at all or will record it with a bunch of nasty artifacts.

I tried this once (with an audio patch first)... & it was #2. WaveLab can be set up to record at various SRs, but what it receives depends on the AI. If the sound card cannot handle multiple SRs simultaneously, WaveLab will record the 48 at 44.1, & then playback will be off pitch.

With the digital output/cable, you have sync in the data stream. You can setup WaveLab to record at an SR other than that being transmitted, but when you attempt to record, it detects the sync; & just stops (the transport goes blank.)

The only way to implement this particular idea is with a system-to-system bounce.

mark4man
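For what it's worth, the size of that pitch error is easy to put a number on - plain arithmetic, nothing WaveLab-specific:

import math

recorded, played = 48000, 44100
ratio = played / recorded                  # 0.91875: everything plays back about 8% slow
cents = 1200 * math.log2(ratio)            # about -147 cents, i.e. roughly a semitone and a half flat
print(f"speed ratio {ratio:.4f}, pitch shift {cents:.0f} cents")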

ghellquist Sun, 12/12/2004 - 02:18

mark4man wrote:
Look at your very first response to one of my posts. You had a frickin' coronary because you thought I was misrepresenting what occurs during SRC.
Lighten up. We're all learnin' 'round here, bub. None of us know it all...yourself included.

mark4man

Please accept my apologies. I am, again, only trying to understand the subject. And along the way a number of people have attacked me for even hinting that I do not take everything they say for granted.

Given that, mark4man, I think our exchange actually took place in another section of the forum, Digital Audio Recording.

I'll try to show exactly where I got upset, so we can perhaps reach a better understanding in the future.

Your answer started with:

mark4man wrote:
88.2 to 44.1 produces no aliasing. The SRC simply throws out every other sample. This is integer-ratio (synchronous) SRC, where the target rate is always coincident (a direct multiple) with the original rate.

Now I understand you meant it as a simplification. As written, it triggered me, and not in a positive way. I'll tell you why.

88.2 to 44.1 produces no aliasing.
-- Well, any sampling does, and this is like any other sampling. It needs a filter (generally a steep lowpass filter to remove everything above 44.1/2 = 22.05 kHz). Obviously you meant that the lowpass (anti-aliasing) filtering should be done, as this was a simplification. If not, please tell me so very clearly, because in that case we have a clear difference of view.

(synchronous) SRC:
-- in my book I always thought that asynchronous SRC (the "opposite" of synchronous) meant the following:

Asynchronous SRC
The incoming sample rate, Fsi, and outgoing sample rate, Fso, of the SRC process are independent (i.e., no shared master clock).

But in the SRC inside a DAW there is a common clock; it is only that the ratio is not a simple integer. Now I understand that the words are used differently in different contexts.

Hopefully this can clear the air. As I've stated several times, I am trying to understand the area.

Gunnar.

mark4man Sun, 12/12/2004 - 08:23

Gunnar,

1) Apology accepted (none necessary, anyway... to the likes of me.) In the grand & magnitudinous digital audio scheme of things, we're all Bozos on this bus.

2) I don't believe that 2nd quote hails from me. All bets are off (on software SRC) inside the DAW, anyway.

3) My overall point on SRC was the "difference" between the two methods. I assumed that, since an 88.2 signal obviously contains significant energy above the target Nyquist frequency, it's a given that anyone who understands this realizes the low-pass filter comes first in the process. After that... (for high-end synchronous SRCs)... it's DECIMATION. Units like the Weiss SFC2 actually incorporate a fixed scheme for all conversion ratios, where the output is directly derived from the input SR (so they wind up with no filter modulation & no jitter). I may be wrong in this assumption (&... I had contacted Weiss for clarification a while ago, with no reply as of yet)... but I don't see a need to radically oversample to a common multiple, as in asynchronous SRC (which is where the majority of undue SRC signal artifacts originate).

If we don't agree on this...we'll just have to respectfully disagree, that's all.

Cheers,

mark4man

anonymous Mon, 12/13/2004 - 11:49

Wordlength INCREASES after SRC

Ed Littman wrote: SRC will expand the word length (32 float, etc.).
Whenever you reduce word length, you dither (back down to 24 or 16 bits).

Cucco wrote: WHAT?????????????
Sample Rate Conversion does no such thing!!!!!!!!
At no point, when you change the sample rate of a digital recording, do you change the bit depth.

However, I must stress again. NO, when you downsample, you DO NOT automatically go to 32 bit floating point.

There is some misinformation floating around in this post and I hope this will clear it up. Sample rate conversion ALWAYS increases the wordlength. If you SRC a 16-bit 44k file to 48k, you will create a 24/48 file or a 32/48 file (depending on the internal resolution of the software). All digital processes increase the wordlength. Gain, fades, EQ, reverb, SRC... EVERYTHING.

You can prove this to yourself with a simple test. First you need software with a bit meter, or you can roll your own with a scope. Wavelab has one, RME Digicheck has one. A bit meter identifies how many bits your file is by showing which bits are active--an essential tool in the studio because you may have a 24 bit file that's really only 16 bits!

Open the 16 bit file in Wavelab (or other) and with the bit meter verify that it is true 16 bit. Now, change the file's sample rate. In Wavelab, that's Process -> Convert Sample Rate...

Guess what you have on your bit meter now? A full 32 bits active!

This is exactly how it works in every DAW, from Pro Tools to SADiE, and hardware SRC. In a hardware SRC, such as the Weiss SFC2 which is 32 fixed/40 float, the output wordlength IS automatically dithered to 24 bit or less, because we can't send anything greater down AES/EBU lines.
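If you want to try the same kind of check without a bit meter, here's a rough equivalent in Python (scipy's resampler is only a stand-in for whatever your editor uses; the point is just that the output no longer lands on the 16-bit grid):

import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
x16 = rng.integers(-2**15, 2**15, size=44100, dtype=np.int16)   # stand-in for a true 16-bit file
x = x16.astype(np.float64) / 2**15                              # normalize to [-1, 1)

y = resample_poly(x, 160, 147)                                  # 44.1k -> 48k, done in floating point

off_grid = np.abs(y * 2**15 - np.round(y * 2**15)).max()
print(off_grid)   # clearly non-zero: the resampled values need a longer wordlength (or dither + requantize)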

Ben Godin wrote: Hey Sork, the converter in WaveLab is not very good, I'll tell you that for a fact. I think it's best to record and mix in 44.1, 24 or 32 bit, simply because if you only have WaveLab, and are doing your own mastering, you'll get much better quality without WaveLab's converter.

Also, if your recording program allows you to export at 44.1 from a recorded 48, I would go ahead and do that (like you record at 48 and you bounce at 44.1); otherwise stick to recording at 44.1 with a high bit depth.

This is a little confusing as well. The first paragraph is sound advice; however, the next paragraph states exactly what you said to avoid in the first! You say "avoid SRC in WaveLab," then you say "use SRC in WaveLab when you export." If you are exporting from one sample rate to another (such as when Pro Tools gives you this option during a Bounce to Disk), you are using the software's SRC to change the rate!

Hope this helps.

Cucco Wed, 12/15/2004 - 12:31

Re: Wordlength INCREASES after SRC

leonardkravitz wrote:
There is some misinformation floating around in this post and I hope this will clear it up. Sample rate conversion ALWAYS increases the wordlength. If you SRC a 16-bit 44k file to 48k, you will create a 24/48 file or a 32/48 file (depending on the internal resolution of the software). All digital processes increase the wordlength. Gain, fades, EQ, reverb, SRC... EVERYTHING.

You can prove this to yourself with a simple test. First you need software with a bit meter, or you can roll your own with a scope. Wavelab has one, RME Digicheck has one. A bit meter identifies how many bits your file is by showing which bits are active--an essential tool in the studio because you may have a 24 bit file that's really only 16 bits!

Open the 16 bit file in Wavelab (or other) and with the bit meter verify that it is true 16 bit. Now, change the file's sample rate. In Wavelab, that's Process -> Convert Sample Rate...

Guess what you have on your bit meter now? A full 32 bits active!

This is exactly how it works in every DAW, from Pro Tools to SADiE, and hardware SRC. In a hardware SRC, such as the Weiss SFC2 which is 32 fixed/40 float, the output wordlength IS automatically dithered to 24 bit or less, because we can't send anything greater down AES/EBU lines.

Andrew,

With all due respect - the information you've provided is technically correct on many levels but has a fundamental flaw. These programs and some (and I definitely mean some) sample rate converters choose to increase bit depth during sample rate conversion. However, by no means is this required. It is quite possible to perform sample rate conversion with no regard to bit depth whatsoever. (We perform those types of conversions in voice recognition algorithms all the time without expanding bit depth. Can you imagine the headache if someone in Iraq is trying to send a voice string back to the US over a POTS line and, in the conversion process, the file size doubles? Goodbye bandwidth - a more precious commodity than toilet paper and chocolate bars!)

Also, as you are probably quite aware, the bit meters in most DAWs are simply handy tools and by no means scientific devices. They simply measure the total dynamic range, and once you expand beyond a certain range, the meter begins to show beyond 16 bits. However, the greater bit depth not only allows a louder signal but also smoother transitions between all amplitudes of signal, regardless of overall loudness. This is due to the greater number of possibilities for representing an analog voltage as a digital integer or bit string.

My point at the beginning and still here is that bit depth and sampling rate, while often altered or affected together, are ultimately independent of one another. How a manufacturer chooses to write their algorithms is a completely different story.

BTW...Andrew, have you seen that there is a new forum here on the BBS - a forum dedicated entirely to acoustic music including specifically classical music. With your background, you would be a welcome asset in that forum too. Check it out!

Respectfully,

Jeremy

Michael Fossenkemper Wed, 12/15/2004 - 23:44

I can't seem to find much info on the Rosetta 200 and whether any bit depth is changing. All I have is a flow chart that shows dither before SRC. In fact, they don't seem to offer anything about how it's done or what they are using. I like the way it sounds, but just out of curiosity I'd like to know what's going on. Does anyone have any light to shed?

Sork Fri, 12/17/2004 - 07:51

The funny thing is how this thread has gone off-topic from my newb question and escalated into things I don't completely understand, haha! I recorded my band's concert on an MD recorder, and used my optical cable to transfer the tracks to my computer. Since MD is only 33.2 kHz (true?), and I set WaveLab to capture at 44.1 kHz, wouldn't my playback be off pitch then? It isn't...

Cucco Fri, 12/17/2004 - 08:15

Sork wrote: The funny thing is how this thread has gone off-topic from my newb question and escalated into things I don't completely understand, haha! I recorded my band's concert on an MD recorder, and used my optical cable to transfer the tracks to my computer. Since MD is only 33.2 kHz (true?), and I set WaveLab to capture at 44.1 kHz, wouldn't my playback be off pitch then? It isn't...

No kidding!

But MD is capable of recording at 44.1 - it is, however, an option on many recorders, I believe, to record at lower sample rates to maximize space. I would bet that if you didn't change any settings, you should be fine.

anonymous Mon, 01/03/2005 - 11:24

Bob Katz has a whole section about dither in his "Mastering Audio" book. According to him, dither is specific to bit-depth reduction (i.e. 24-bit to 16-bit, etc.) and has nothing to do with sample rate conversion. Dither helps with the terrible artifacts that happen if you were to completely truncate those 8 bits (if going from 24 to 16 bits) without adding noise to the least significant bit (the 16th one). That is what is known as dithering. Sample rate conversion is a completely different animal. iZotope has a pretty good article on dither for download in their "Guides" section:

http://www.izotope.com/products/audio/ozone/OzoneDitheringGuide.pdf

Michael Fossenkemper Mon, 01/03/2005 - 17:59

Lucidwaves,
We know all this. What the debate is over is whether or not SRC increases bit depth. From what I can gather, it depends on the unit or software. I think some increase the bit depth when applying filters and therefore would need bit reduction and therefore dither. I think others just "truncate" the higher freqs and don't increase the bit depth. I could be full of it, but this is what I'm thinking. The more I think about it, the more I think that SRC'ing should increase the bit depth if you want it to sound nicer.

anonymous Mon, 01/03/2005 - 22:39

I think others just "truncate" the higher freqs and don't increase the bit depth.

When it comes to eliminating the higher frequencies, there's no "truncation" involved...they can't just be removed, they have to be filtered out. Otherwise you'd have aliasing. None of this has anything to do with bit depth. I'm not sure why filtering out frequencies should add any more bits...it's not affecting the dynamic range at all.

-Duardo

anonymous Wed, 01/05/2005 - 12:41

My apologies Michael. After re-reading the Wordlengths & Dither chapter in Bob Katz' "Mastering Audio" book I fully see what you are saying now and am changing my stance on this. I think you are more right than you know:

By its nature, all or at least most DSP calculation (definitely including SRC - with all its filtering and whatnot) increases the wordlength. It's up to the DSP manufacturer to bring that wordlength back to 24 bits (or whatever the usable bit depth is), with or without dither. I would hope that most would apply dither, but I wouldn't know for sure. Perhaps the reason that different SRCs sound good or bad (aside from the anti-alias filtering they use) is the type of noise shaping they use in their dither?

But wouldn't that apply to every DSP process that increases the wordlength as well, not just the SRC? Any thoughts on this would be appreciated.

As for me, I'd suspect the filtering method used in a bad-sounding SRC before I would suspect the dither, though both could potentially be the problem.

Michael Fossenkemper Wed, 01/05/2005 - 19:25

I don't think the kind of dither really has that much effect compared to the types of filters and the bit depth at which the calculations are done. All the better processes use bit depths of 40 or higher, IMO. Even if they used the worst dither I've ever heard, it shouldn't be as bad as some of these programs' SRC. So now you compare two boxes that process at, say, 40 bits - why do they sound so different? The types of filters they use. If the filters are not good, you get that HF energy aliased back into the signal, or you get filter ripple in the audible range. Maybe along with that, the bit depth isn't even dithered, just truncated. Who knows if there are even filters used.
