Discussion in 'Recording' started by audiokid, Oct 22, 2012.
Awesome. That really brings things to light. I always dither on mixdown anyway but this justifies it for me.
Doesn't it! Thanks for the "like" Hueseph.
And another reason why I took the hybrid leap and master to a second recorder now. I avoid SRC like the plague.
I always record at 24 bit, but my DAW (Cubase) handles files internally at 32 bit float. I usually export the stereo mix at 24 bit and leave the SRC/dither to the mastering engineer. Interesting question though... if my DAW handles files internally at 32 bit float, but I have it set to record at 24 bit, do I need to dither if I am exporting a 24 bit file?
Jeff, you are summing OTB, right? So what do you do with the 2-bus? Track back to the DAW at the same sample rate as the session, yes? Or how are you doing it all?
Hi Chris... no, I mix & sum ITB (at least for now).
I thought you were using an analog console?
no, I use a Tascam DM-3200 digital board. I've only had it a few months and haven't updated the equipment page on my website to reflect the change. My previous board was also digital; it was my current board's predecessor, the Tascam DM-24. Both are awesome boards, and I only upgraded to stay current since the old DM-24 is no longer supported. Sold it on eBay.
no. 32-bit float is the internal processing precision of the DAW, not a speed. 24 bit refers to the bit depth of the recorded samples, as in 24/96.
good move to leave the SRC to the mastering stage. usually mastering houses can try different ways to do it and pick the best one for the recording on an individual basis. it's best to leave as much wiggle room for the ME as possible for the best results.
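To make the distinction concrete, here is a minimal pure-Python sketch (my own illustration, not specific to any DAW) of what happens when a 32-bit-float mix engine writes a 24-bit file:

```python
def float_to_24bit(x: float) -> int:
    """Quantize a full-scale float sample (-1.0 .. 1.0) to a 24-bit integer."""
    full_scale = 1 << 23                            # 24-bit signed range: -2^23 .. 2^23 - 1
    q = round(x * full_scale)
    return max(-full_scale, min(full_scale - 1, q))  # clip to the representable range

def to_float(q: int) -> float:
    return q / (1 << 23)

# The float engine carries extra precision and headroom between plugins;
# the rounding happens only once, when the 24-bit file is written.
sample = 0.123456789
q = float_to_24bit(sample)
print(q, abs(to_float(q) - sample))  # error is at most half an LSB (~6e-8 of full scale)
```

The point is that the 24-bit grid is where precision is finally lost; everything upstream of that write stays in float.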
Gotcha! Tascam DM 3200, nice desk! I looked at your website a while back (nice BTW) and never looked closer, I saw the older Tascam and assumed it was analog.
Let me say this. Since I stopped goofing around with bouncing (other than MP4s, which are also created on the 2-bus capture DAW), my final mixes are accurate on the first take. They are exactly what I hear during the full tracking session on the tracking DAW. In fact, the final mix always sounds better, because at that point it has passed through the analog stage (sonic heaven) looking for a new home.
But the monitoring system is also to credit because I hear the final mix coming out the other side of the capture DAW. I am able to hear everything at every stage of the session with a switch. So all this just makes everything spot on. There is no guessing anywhere and no SRC.
Bouncing on the same DAW is crossed off my list, but I'm also OTB at that point, so it's a no-brainer.
Lately I'm wondering how important recording anything above 24/44.1 is but I still do the 24/88.2 or 96k dance. Once I got great converters, higher SR seems less important.
So not only does the system sound like silk, it's also very efficient because the computers use less CPU. At this point of the game, hybrid rocks for many reasons.
44.1 has a brick wall at about 20kHz.
88.2 brickwalls at about 44kHz.
if you can't hear above 15k, 44.1 will sound OK to you, but really there's stuff going on up to at least 25k that affects lower harmonics at 18/19k... so the thought is that the higher sampling rate is needed to preserve those interactions. But of course the benefits of higher sampling rates are lost on those of us who are long in the tooth and have lost a bit of our ability to hear above 15kHz.
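To put a number on that, the folding arithmetic can be sketched in Python (a hypothetical helper, assuming the out-of-band tone reaches the sampler unfiltered):

```python
def alias_frequency(f: int, fs: int) -> int:
    """Where a tone at f Hz lands after sampling at fs Hz (spectral folding)."""
    f = f % fs
    return fs - f if f > fs // 2 else f

# A 25 kHz partial folds straight back into the audible band at 44.1 kHz
# sampling, but sits comfortably below Nyquist at 88.2 kHz:
print(alias_frequency(25_000, 44_100))  # 19100 -- right in the audible band
print(alias_frequency(25_000, 88_200))  # 25000 -- no folding at all
```

A real converter's anti-alias filter removes most of this before sampling, which is exactly why that filter's behavior near the band edge matters.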
mmmmmph! ghaaaaaaaa! suicideahhhhhhzh*t!
And you had to mention that, lol. Not only long in the tooth but short on memory. I forgot about that. It's been a while since I did those experiments, and they were done on the FF800, which has since been replaced. Thanks for the jolt.
Very interesting dither discussion....
I have a question though.
I currently use Cubase 6.5 with an A&H ZED-R16 mixer. Cubase is set to record 32-bit float and I've always used a SR of 44.1. Maybe 88.2 would be better, IDK... maybe someone has an opinion for me...
After I have a good mix of tracks from Cubase through the ZED (analog), I print the 2-track master back into Cubase within the same project.
So this would still be 32bit float and SR of 44.1
So I'm wondering where and when I should apply dither?
Would it be inserted on the master tracks when exporting to MP3?
Another question I have regarding using the dither plugin.
Should the dither be set to 16-bit or 24-bit? Cubase has the Apogee UV22 mastering dither plugin with different options, but I've always just used the 24-bit HI setting... maybe someone knows the proper settings to use with this...
I cannot think of a better man than Boswell to explain this. Maybe he'll chime in from the UK tonight. And for those who dare, McEase posted an astonishing thread on jitter here: http://recording.org/diy-pro-audio-forum/45012-what-is-clock-jitter.html
I've always recorded at 24/88.2 if I was doing SRC on the same DAW. Math-wise, it makes more sense. But after reading more on this topic (and doing what I do now), SRC on the same DAW has its setbacks, which is why some of us don't go there anymore. So, for those bouncing on the same box, I question doing SRC at all and might just stay at 44.1 to avoid it altogether. Maybe someone can elaborate on that.
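On the "math-wise" point: 88.2 kHz down to 44.1 kHz is an exact 2:1 decimation, whereas 96 kHz to 44.1 kHz reduces to the fraction 147/320, a much heavier job for a polyphase resampler. A small Python illustration (nothing DAW-specific, just the ratio arithmetic):

```python
from fractions import Fraction

def src_ratio(src_rate: int, dst_rate: int) -> Fraction:
    """Resampling ratio L/M a polyphase SRC must realize
    (interpolate by L, then decimate by M)."""
    return Fraction(dst_rate, src_rate)

print(src_ratio(88_200, 44_100))  # 1/2     -- a trivial 2:1 decimation
print(src_ratio(96_000, 44_100))  # 147/320 -- far more filter phases to compute
```

Whether that complexity is audible with a good modern resampler is a separate question, but it is why the 88.2-to-44.1 path is often called the "clean" one.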
I use Sequoia 12 and have the dither on. If I am bouncing down, it does it automatically. It all sounds so good that I never think about it anymore.
The video sure makes you think about it.
There are quite a few points here.
Firstly, input anti-aliasing filters these days are done digitally inside the audio ADC chips, which are themselves clocked at x64, x128 or even x256 times the target sampling rate and simple RC-filtered at the higher rate. The internal filter is not a simple digital low-pass filter, but is an integral part of the internal bit processing and noise shaping. Incidentally, the filter in almost all of the available audio ADCs has a 3dB point at 0.454 of the sampling rate, which equates to 20KHz at 44.1KHz sampling, 21.8KHz at 48KHz sampling, 40KHz at 88.2KHz sampling and 43.6KHz at 96KHz sampling rates.
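The quoted figures all follow from that 0.454 ratio; a quick check:

```python
# Verifying the -3 dB points quoted above: 0.454 x sampling rate
for fs in (44_100, 48_000, 88_200, 96_000):
    print(f"{fs / 1000:g} kHz sampling -> filter -3 dB point at {0.454 * fs / 1000:.1f} kHz")
```

Note how little margin 44.1KHz sampling leaves between the 20KHz band edge and the 22.05KHz Nyquist frequency compared with the higher rates.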
Secondly, the use of dither. This should be used only where there is a reduction in the precision of the data. For example, when generating a CD master from your DAW, you are usually going from 24-bit data to 16-bit, and that step should always have dither added before the word length is truncated. However, I would not dither if going from 32-bit float (in IEEE-754 single precision: a sign bit, an 8-bit exponent and a 24-bit effective mantissa) to 24-bit fixed-point inside a workstation, providing I was generating levels that were within (say) 10dB of FS.
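As a rough sketch of what a dither stage does at that final word-length reduction (generic TPDF dither in Python; this is not the UV22 algorithm, which is Apogee's proprietary noise-shaped process):

```python
import random

def dither_24_to_16(sample_24: int) -> int:
    """Reduce one 24-bit sample to 16 bits with TPDF dither.
    A generic sketch only -- not any particular plugin's algorithm."""
    lsb = 1 << 8  # one 16-bit LSB expressed in 24-bit units
    # TPDF noise: the sum of two uniform variables, spanning +/- one target LSB
    noise = random.uniform(-lsb / 2, lsb / 2) + random.uniform(-lsb / 2, lsb / 2)
    q = round((sample_24 + noise) / lsb)   # quantize to the 16-bit grid
    return max(-32768, min(32767, q))      # clip to the 16-bit range

# The added noise decorrelates the quantization error from the signal,
# trading truncation distortion for a low, constant noise floor.
print(dither_24_to_16(1_000_000))  # close to 1_000_000 / 256, i.e. about 3906
```

This also suggests the answer to the "16-bit or 24-bit?" question above: set the dither to the bit depth of the file you are actually writing, and apply it only once, at the final reduction.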
Thirdly, human hearing limit. The thing about hearing tests showing that you can hear only to a certain number of KHz is that they are performed under pseudo steady-state conditions using sinewaves, and don't take any account of the hearing system's response to transients and non-sinusoidal waveforms. I have a theory that these last two not only go up to higher frequencies but also fall off less rapidly with age. My last audiometric tests gave a figure for my ears of around 15KHz, yet I can easily tell the difference between a 7KHz sinewave and a 7KHz squarewave, where all the differences between these two are at 21KHz and higher. I can also tell the difference between the same transient waveform sampled at 44.1KHz and 96KHz, particularly where the source is something like a pair of Tingsha bells. Spectral analysis of the 96KHz waveform of this type of bell shows energy going up to 30KHz, with the limit probably being due to the microphone and not the bell. If I can hear these differences on a single demonstration sound source, I argue that they must also be present when my ears are presented with a complex dynamic source such as an orchestra.
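The square-wave point is easy to verify on paper: a square wave contains only odd harmonics of its fundamental, so everything that distinguishes a 7KHz square wave from a 7KHz sine sits at 21KHz and above. A trivial sketch:

```python
# A square wave contains only odd harmonics of the fundamental, so every
# component that differs from a pure 7 kHz sine lies at 21 kHz and above.
fundamental = 7_000
square_wave_partials = [fundamental * n for n in (1, 3, 5, 7)]
print(square_wave_partials)  # [7000, 21000, 35000, 49000]
```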
I've posted before in these forums about the improvement in quality of a 2-bus mix when using high-rate multitrack sources and re-sampling the 2-bus at 44.1KHz as opposed to using 44.1KHz sources and mixing at that rate. The effect of mixing many standard-bandwidth channels is to reinforce phase and bandwidth deficiencies that fall within the human auditory response, and by "response" I don't mean the single-number figure measured by an audiometer.