Which sampling rate do you most commonly use when recording?
Please don't include mix projects which come to you where the SR is set by the client's project/files...
I'm talking about when you begin recording a new project.
Along with your vote, comments (like bit resolution choices) are also more than welcome.
;)
Comments
44/24 at the studio. 96 on systems I set up. My personal setup is whatever iPhones record at, 44/16 I'm guessing, lol. It was always interesting to me that in Reason, the software's latency for playing a VSTi went down as sample rate increased. I wonder how sample rate relates to latency.
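One way to see why latency drops as sample rate rises: if the audio buffer is a fixed number of samples, its duration shrinks as the rate goes up. A minimal sketch, assuming a hypothetical 256-sample buffer:

```python
# A fixed-size buffer holds fewer milliseconds of audio at higher sample rates,
# so (all else equal) per-buffer latency drops as the rate goes up.
buffer_samples = 256  # hypothetical buffer size
for sr in (44_100, 48_000, 96_000):
    latency_ms = buffer_samples / sr * 1000
    print(f"{sr} Hz: {latency_ms:.2f} ms per buffer")
```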
I should confirm, I don't always track at 44.1, but I definitely appreciate the sound of better converters' 44.1 conversion compared to others at lower SR. As an example, Lavry Blacks or Prism (two products I am familiar with and use) sound better at 44.1 than an older RME FF800 at 44.1. So I usually use these products when tracking acoustic work on a laptop that I know runs better at lower SR. I use less CPU and HD space at 44.1. I trust my remote system better at 44.1 and love the sound of my conversion at that rate using good converters.
At 96k, the differences between them are not so noticeable. So I choose Lavry or Prism for sessions I am recording at 44.1. Better converters appear to sound better at lower SR.
As Bos has pointed out many times, better converters have better circuitry. Maybe he will chime in on this.
audiokid, post: 429271, member: 1 wrote: just a cross reference:
http://recording.org/threads/adcs-what-is-important.53957/
http://recording.org/threads/what-is-clock-jitter.45012/
I generally use 48k because it's the standard for film/video, and I'd rather dumb it down to 44.1 for audio than upsample to 48 for video. Where I usually work, 48 is the highest the converters go, but I'd still use 48 even if they went higher. Another reason I use 48 is that I have an Alesis HD24 and the clock is notoriously incorrect at 44.1.
Which Sample Rate do you most commonly use when recording?
Hi DonnyThompson
I changed the Poll values to allow multiple choices.
I track at both 96k and 44.1 (or whatever the destination is, for that matter) on the same pass.
Which is another fantastic reason to incorporate the two-DAW system. Track at a high sample rate and capture the mixdown at the destination SR to avoid bouncing. It sounds good and is much easier to mix and compare finals at the destination SR.
I render ("bounce") at the project settings and leave all sample rate and word length conversions to the mastering phase. That gives me a pre-master that has not suffered any conversion processing at all. For each output format there's only one sample rate conversion and one dither/truncate process.
bouldersound, post: 429300, member: 38959 wrote: I render ("bounce") at the project settings and leave all sample rate and word length conversions to the mastering phase. That gives me a pre-master that has not suffered any conversion processing at all. For each output format there's only one sample rate conversion and one dither/truncate process.
bouldersound
I'm not saying you should do what I do, but just for conversation in regard to the OP and multitracking at 44.1 to avoid bouncing:
thatjeffguy has it right but I feel you could improve this.
Multitracking at a higher SR and capturing the mix in one pass does sound better as opposed to just tracking at 44.1. Plus (I'm sure those who do this already know), there seems to be a benefit to mixing into the destination SR on that same pass, to my ears. But this also assumes you are into hybrid and wanting better than just "round trip" sound quality and benefits.
For those who say they let the ME do the rest... that may appear to be the obvious "standard". However, let's assume your ME doesn't have an uncoupled system and is taking your wonderful mix and bouncing it down. Most MEs bounce. (n)
So we are still subjecting our "master" to being bounced later, which then downgrades the mix we tried so hard to get right.
We could avoid this by tracking at a higher SR and capturing at whatever the destination SR is in one pass, then passing that perfect mix on to the ME, who wouldn't bounce our mix.
Two DAWs also provide a way to mix into the "destination" SR, like MEs should do in the first place. And that's why, imho, they are able to hear a mix better and make better changes than we can on one DAW. There is something about working on a stereo mix in a separate DAW that just turns out better sounding.
To be convinced that mixing down live from the multitrack is superior I would have to see conclusive evidence that a live stream from the multitrack project:
A. is different from the rendered file and
B. is objectively better than the rendered file (fewer errors or whatever) and
C. can be proven in double blind A/B/X testing to sound better to a significant portion of the population and
D. sounds unmistakably better to me.
bouldersound, post: 429303, member: 38959 wrote: To be convinced that mixing down live from the multitrack is superior I would have to see conclusive evidence that a live stream from the multitrack project:
My above comments hypothetically assume we are tracking a multitrack (live or studio) at 48 to avoid bouncing. If so, I would without doubt track at 96 and recapture the mix on an uncoupled system. To my ears and extensive testing... the sound quality under good conversion is without doubt better tracking at higher sample rates and recapturing the mixdown than tracking at just 48 or 44.1 ;)
bouldersound, post: 429303, member: 38959 wrote: C. can be proven in double blind A/B/X testing to sound better to a significant portion of the population and
D. sounds unmistakably better to me.
If these were the only reasons why you do audio, then of course, who cares about half of this nonsense in Pro Audio.
I'd be happy with Pro Tools and playing around with plugins like the rest of the world.
If all we are measuring to is earbuds, then who cares, right? Why even buy quality gear, for that matter? And I mean that sincerely.
Assuming my workflow includes "hybrid".
There are more reasons than the actual sound quality to this madness. Avoiding a lot of steps and saving time and money are some of them. I need very little gear and almost no extra software in comparison to investing in an HDX system. I am saving thousands to achieve a fast and excellent end product. It's also easier to compare mixdowns and learn cause and effect when you have multiple mixdowns on a separate DAW, like Mastering Engineers work. But I do agree it's all subjective too.
The two-DAW workflow makes for faster finishes, and if money is the ultimate deciding factor, two DAWs win on speed in my studio.
If you aren't using analog "hybrid" or round-trip processing, then none of this would even make a bit of sense. It's not even a topic you need to discuss. Track at whatever you like and have fun trying to win the rat race as best you can.
For my stuff, 32/44.1. Take a look at Pensado's Place. I rarely see his sample rate readout higher than 48; 44.1 a lot of the time.
I'm a believer that higher sample rates mattered more in the early days of digital. I remember reading somewhere that sample rates are dependent on the resonance of the crystal, and if it's a crystal that naturally resonates at 44.1 or 48, you're better off using that sample rate.
That said, I don't think I've ever heard a shootout of 44.1 vs. 96k etc. that sounded exactly the same raw. There is definitely a slightly different sound happening. In the end your mixing will shape it to what you want.
Tech stuff aside, it's also about plug-in count. 96k is too resource hungry for me.
Rendering is 2-20 times faster than real-time playback. That's a lot of hours over the years. There would have to be more than a little sonic improvement to make it worth sitting around all that time waiting for the mix to play.
No, those two points (intentionally the last out of the four) are not my only reasons for doing audio, they are standards which have to be met before adding substantial amounts of time to my mixdown process. The A/B/X testing has nothing to do with earbuds or mp3s and everything to do with not getting caught up in confirmation bias.
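The time savings from offline rendering are easy to quantify with a back-of-the-envelope calculation (the numbers here are illustrative, not from the thread):

```python
# Illustrative numbers only: a 4-minute song, offline render at 2x real time.
song_minutes = 4
offline_minutes = song_minutes / 2          # "2x faster than playback"
saved_per_mix = song_minutes - offline_minutes

# Over, say, 500 mixdown passes in a year:
hours_saved = saved_per_mix * 500 / 60
print(f"~{hours_saved:.0f} hours saved per year")
```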
Chris Perra, post: 429306, member: 48232 wrote: Tech stuff aside it's also about plug in count. 96k is too resource hungry for me.
Pensado doesn't talk about 2 DAWs as far as I know (I don't know many people who are actually aware of this, yet ;)
Dave is pretty locked into ITB and PT. Plus, a lot of us who were analog freaks now hear that OTB is really pointless. I would never invest in analog mixing gear again.
I am a believer that ITB is superior to a console, but I do hear improvements in better SR, better converters and, without doubt, avoiding bouncing!!! That's why a lot of us use excellent converters and track at 44.1. So yes, that's why and how you get the best results on a single system.
That being said:
I'd rather avoid bouncing and track at 96 any day of the week over 44.1. But that's because I hear a way around this as opposed to just tracking at 44.1 and using lots of plugins and average conversion.
I "personally" hear that the higher the sample rate (96k) and the less you use of both analog hardware and plug-ins, the better. 96k sounds better, much better, but not if you keep converting and bouncing and adding plugs etc.
The less I alter the original source (be it samples or organic) and mess with the phasing, the better it sounds to me. And this always ends up sounding better as an MP3 too.
bouldersound, post: 429307, member: 38959 wrote: Rendering is 2-20 time faster than real time playback. That's a lot of hours over the years. There would have to be more than a little sonic improvement to make it worth sitting around all that time waiting for the mix to play.
No, those two points (intentionally the last out of the four) are not my only reasons for doing audio, they are standards which have to be met before adding substantial amounts of time to my mixdown process. The A/B/X testing has nothing to do with earbuds or mp3s and everything to do with not getting caught up in confirmation bias.
I agree.
I'm also mixing for people, so from that aspect I am always having to study a mix that is full of changes from the last. Nothing is ever the same reference point. This requires getting all sorts of issues solved really fast and well. I could never get what I do done on one system like you suggest. I could live with it, but I wouldn't mix as a business, per se, then. :)
I would most likely be saying exactly what you are to me. What a bunch of BS. ;) I get what you are saying, and respect it, but there are more ways to sum and save money than just using a single DAW.
I can mix a session on a laptop, no extra plugs and no extra cards. So simple and cheap, fast and full. Or improve that proficiency and sound quality even more. Which is all I'm sharing here. Less is more, and two DAWs remove a whole lot of plugins and gear.
I track at 96 and sum at 44.1. Love it.
Enjoy...
I'm not 100% sure, but I think Pensado has to do real-time mixdowns, as he has some hybrid stuff like a Bricasti and a Shadow Hills comp that are analog. I'm not sure how that integrates, but I would imagine there's some kind of real-time mixdown when and if he uses that stuff.
Chris Perra, post: 429311, member: 48232 wrote: I'm not 100% sure but I think Pensado has to do real time mixdowns as he has some hybrid stuff like a Bricasti and a Shadow Hills comp that are analog. I'm not sure how that integrates but I would imagine there's some kind of real time mixdown when and if he uses that stuff..
I use more or less the same gear. So, for those who are thinking about spending a bunch of money because Dave Pensado does: gear is pretty meaningless to me. :) Digital audio is great and only getting better.
Maybe I'm going deaf lol.
I avoid round-trip processing, and because of that I feel/hear I need even less hardware and software, including the Bricastis and summing amps, now. I do love twisting knobs and smelling gear warm up but will never go back to those days.
Dave round-trips or uses the Bricasti in digital like most people do. It's convenient, but imho there are better options than round-trip processing that save time and a lot of extra money now.
95% of why I do what I do is about hearing cause and effect before and after the (ADDA) steps of 96k to 44.1.
I think mixing and mastering gear can all be emulated. Digital audio sounds great, but I also think there is room for improvement when it comes to summing on one DAW, which is why I break the DAW's summing section into two steps. This process has saved me thousands, and I personally think my mixes sound better and come together faster than they ever have, so it excites me to share this here. I've gone from using thousands of dollars in gear and bloat to just a few pieces of gear now. I need very little extra software as well.
So, just relating to this thread: 96k summed at the end of the day still sounds better than tracking at 44.1 to me. Does it all matter on iTunes? Not likely. Which (from an acoustical music POV) is why I no longer use mixing or mastering hardware or buy into all this extra software BS like HDX and so on.
I'm big on spending less cash, using less real estate, sound treatment, and listening better on the 2-bus section.
I'm not calling decoupling B.S., I'm just not convinced it's so much better that I should wait around the full 3-7 minutes every time I render a song. I get much more done when it takes 1 or 2 minutes to render a multitrack project and under 20 seconds to render a mastered song. I'm open to persuasion but it would need to meet the four standards listed above.
Here they are in reverse:
A. If the data is the same there is no benefit.
B. If the audio data is corrupted (and C and D are satisfied) then there has to be a better, more consistent, more rational way to get the result.
C. If people can't hear it there is no benefit.
D. If I can't hear it there is no benefit.
Chris Perra, post: 429316, member: 48232 wrote: I suppose you could compare the 2 versions, normalize them to the same peak volume level and phase cancel them to see how much of a difference there is.
Regarding SR: I'm sure two exact mixes (if it were even possible to do from two different studios), one tracked at 96k and the other at 44.1, would sound different.
I mean, I can hear the sweetness and less noise always at a higher SR. Can't you? So I guess the question really is, does it matter?
I do know better converters sound better than others at lower SR. So, if my main tracking SR was 44.1 or 48, I would most likely invest in a really good converter, or at least suggest it. Not all converters are equal. Not all SRs end up sounding the same.
Are expensive converters and more sophisticated mixing methods worth it for most people? I doubt it. Most of our recordings will never be sold or published. Most pro audio gear today is bought for personal reasons, to get character that people feel can't be achieved ITB.
iTunes is where it goes. Does our workflow give us the best results for mixes on iTunes? If so, then you are doing exactly what you should be doing. That's about as simple as this answer gets to me.
If we are comparing, and in a circle that is comparing our music with other engineers, then I would suggest taking more interest in how we sum a mix and master it.
Conversion topics and how people improve summing are really more beneficial to those really serious about the baby steps and improving workflows. What's good for you may be completely useless to me.
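The null test Chris Perra describes (normalize two versions to the same peak, flip the polarity of one, and sum) can be sketched in a few lines. This is a minimal illustration with synthetic placeholder signals, not a full workflow:

```python
import numpy as np

# Two "mixes": here b is just a copy of a at a different level, so after
# peak normalization they should null (synthetic placeholders, not real renders).
rng = np.random.default_rng(0)
a = 0.5 * rng.standard_normal(48_000)
b = 0.8 * a

a_norm = a / np.max(np.abs(a))   # normalize both to the same peak
b_norm = b / np.max(np.abs(b))

residual = a_norm - b_norm       # polarity flip and sum
peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)
print(f"residual peak: {peak_db:.1f} dBFS")  # a deep null means no real difference
```

In practice you would load two rendered files in place of `a` and `b`, align them sample-accurately, and inspect the residual by ear and by level.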
Regarding 32-bit float: I don't bother with it. I'm always at 24-bit on the multitrack. Do others hear a difference between 32 and 24?
I capture my DAW mix at either 16 or 24 bit. If I am making CDs, I will capture it at 16, burn the CD and call it a day. I will also capture at 24-bit and dither, but I don't notice a difference worth comparing. Lately I just dither it, but wonder why I don't hear a difference between the two.
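As a point of reference for the 24-bit vs. 32-float question, the theoretical dynamic range of fixed-point PCM works out to roughly 6.02 dB per bit (real-world converter performance is lower):

```python
import math

# Theoretical dynamic range of fixed-point PCM: about 6.02 dB per bit.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~96 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~144 dB

# 32-bit float carries a 24-bit mantissa, so at any given level its resolution
# is comparable to 24-bit fixed point, which may be why no difference is audible.
```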
Floating point is great for the mix engine, not that beneficial for files.
I don't know about other software, but the project settings aren't that important in Vegas. Project settings only affect the preview. If you record in 48 you can still change the project to 44.1 and it will convert on the fly, and if you then render at 48 it ignores the project settings and produces a 48k file that has not suffered any SR conversion. Same with bit depth settings. You can record in 24 bit, play it in 16 and it can still render in 24 bit with no truncation.
bouldersound, post: 429307, member: 38959 wrote: Rendering is 2-20 time faster than real time playback.
What is rendering audio?
I'm not familiar with that term used in audio. In video, I understand that to be waiting for the image to "render" as it's processing all the changes you've made. Then, if we want it to load faster, I would optimize it to a low bitrate and hope it shows all the colours I started out with. Sooner or later you still need to upload it to the web, which will in turn compress it for you. So you can either do it better beforehand or let the machine do it for you.
How would you compare "rendering" to capturing a mixdown that needs to be optimized for itunes?
How does rendering improve a mixdown as opposed to capturing at the destination SR, which includes avoiding a Mastering Engineer bouncing it down later?
I think recording at the highest practical sample rate you can makes sense any way you look at it. Unless you're recording at the final sample rate/bit depth, you're always converting in some fashion.
As far as Dave Pensado, I'll believe what I see on the screen when I'm next to him or one of his assistants. His focused, sponsored tutorials are demonstrations. And since Pensado primarily mixes pop, commercially, anything besides ITB in PT makes no sense for an engineer like him.
Here's my theory. The more samples you capture, the closer you are to a linear capture.
All this conversion talk is going to be laughable in the future. Quote me. It makes no sense to convert any further away from the transducer than necessary. Mics, speakers, phones, all that stuff should just have the conversion built in, and be done with it. That would guarantee more consistent results around the table, taking out variables like people's interfaces and conversion affecting the performance of a piece of gear or software.
Neumann has some digital mics, and wiring and data communication protocols are becoming increasingly less bulky, and wireless. Any conversion I purchase will be to pair with a particular preamp and/or mic (Burl is coming to the rack), and a really nice set of DA to feed some speakers and amps. Stuff that, ideally, doesn't need a CPU to operate. I'm expecting 15 years of high performance before consumer quality catches up to the elite of the near future.
Excellent conversion is priced out of multitrack rigs for most, so as channel count goes up, quality dwindles. Same as when the feature set goes up. An octo pre is tough to beat at its performance price point. The Orion would be the 'full out' version of something like that.
I think pairing the conversion more with the transducers or preamps is a way to allow a sort of consistency for a longer period of time. Conversion affects the sound in some way, and I don't think I would necessarily want new sounds out of my already established favorites; I also don't like the idea of entire sets of conversion becoming useless.
I believe that even in digital, buying at the top lasts quite long, and for an average professional it's a smart choice, given the amount of high-performance operation you get during the course of that time. A type of performance otherwise unachievable, and a performance that stays relevant for longer.
Top-end converters from 10-15 years ago sound as good as or better than some new interfaces I've worked with that boast conversion as a feature and got the write-ups favoring that 'feature'. The old upper-tier unit is still as good or better.
There are also times when you buy something as it's being phased out, either as a professional standard, or just the end of a line, or a corporate move, and digital becomes a huge rip-off. That's why I believe looking at conversion as separate from the point of transduction(?) is playing into Moore's law and the massive planned obsolescence inherent in technology-based fields.
FWIW, I almost always track at 44.1/24...
There have been times that I have used higher SRs at the request of a client (usually 48k), or tracks come from a client who has recorded them at different SRs and they come to me for mixing, but generally I stick with 44.1 as my most common choice.
I've recorded at 96k, but about 90% of what I do for clients ends up on iTunes, so I'm not really convinced that your average listener, using ear buds or cheap PC speakers, would hear much difference - if any.
Sometimes I have to remind myself that I'm not mixing for my fellow audio engineers as the listening majority - which is instead made up mostly of people who don't have any way to hear those subtle nuances I so often sweat over - and even if they did have a high-quality monitoring system, I'm still not sure they'd be able to hear the finer details.
I'm treating this current project differently, (or maybe I should say that I am treating it differently now) but... I'm still tracking at 44/24.
Looking back, I probably should have used 96k ... I just didn't know - at the time that I started the project - that it would turn into what it actually has. Neither of us did. We thought we'd upload a few tracks to iTunes, make a few CD copies for his friends and family; it was just supposed to be a simple, fun little project, and give me a chance to work with a close friend whom I hadn't worked with in years. We just didn't foresee the project getting more serious, or the level of musicianship/performances being as good as they've turned out to be, nor did we ever think that we would end up pre-selling 500 copies, either ...( I know this is nothing compared to what others have done, but considering the original intention of this project, we're pretty happy with those numbers). ;)
Anyway, I don't see how we would benefit by switching to 96k now for the last two songs on the album. I dunno... maybe I should -?- I'm willing to be convinced - either way. LOL
audiokid, post: 429325, member: 1 wrote: What is rendering audio?
I think - and I could be wrong - that he's using the word "render" synonymously with PT's "bounce to disk", or in Samp, the "export audio" command. It's not a different process, all of these mentioned involve the computer/DAW program mixing the project down ITB to a final stereo or mono mix ( .wav, mp3, etc...).
"Render" is just a different term for ITB "mixdown".
Unless of course, I'm misunderstanding him.
Yes, rendering, in PT terminology, means "bouncing to disk". In some DAWs it's "export audio" or whatever. Sony Vegas is also a complete video editor so it uses "render". It's the normal, effective, convenient way to go from a multitrack project to a stereo file (or AVI etc.).
In the case of software I've been using for a decade it's always been what PT calls "offline" bouncing. Of course you could record your mix to a new track in real time but there never seemed to be any advantage.
A typical rock mix, given the usual track count and effects, renders in about half the playback time. A mastering project can take about 1/20th the playback time. A video project may take two or three times the playback time to render. There's no getting around waiting for the video, but I would need a strong reason to render audio projects in real time. So far I'm not convinced, but I can keep an open mind.
A rendered file has not gone through any conversions other than the DAW processing to produce a stereo digital stream which is saved as a file. This is what goes to mastering, a file that contains the maximum amount of original data. Reduction of the data should be the very last processes applied: convert sample rate then dither then truncate.
By going through two converters, coupled or not, you are adding noise and distortion. By giving the mastering engineer files at a lower sample rate or word length than the project you are handicapping him. Mixing to fewer bits raises the noise floor, which gets raised again if any gain is applied in mastering. It's possible all this sounds better, but it would be in spite of the objective facts, not because of them.
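The "convert sample rate, then dither, then truncate" order described above can be sketched as follows. This is an illustrative TPDF-dither example with made-up parameters, not a mastering-grade implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# A float "mix" to be reduced to 16 bits (here, a 1 kHz sine at -6 dBFS).
signal = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(44100) / 44100)

target_bits = 16
scale = 2 ** (target_bits - 1)

# TPDF dither: the sum of two uniform noises, spanning +/-1 LSB at the target depth.
dither = rng.uniform(-0.5, 0.5, signal.size) + rng.uniform(-0.5, 0.5, signal.size)

# Dither is added *before* quantizing; truncation to int16 is the final step.
quantized = np.round(signal * scale + dither).astype(np.int16)
print(quantized.dtype, quantized.min(), quantized.max())
```

Sample rate conversion, if needed, would happen before this step, so each output format sees exactly one SRC and one dither/truncate pass, as described above.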
bouldersound, post: 429335, member: 38959 wrote: By going through two converters, coupled or not, you are adding noise and distortion
I never notice this. Uncoupling or mixing down through my conversion paths always sounds excellent to me. In fact, I've done null tests, and although I do detect change, it's definitely not a bad thing. :)
bouldersound, post: 429335, member: 38959 wrote: By giving the mastering engineer files at a lower sample rate or word length than the project you are handicapping him
I would never give a file to anyone that wasn't already tracked at its destination SR. To my ears today, bouncing by anyone, including an ME, is a bad move.
Questions I'd be asking: does the ME bounce or not? Personally, I would never give a file to an ME that had to be bounced. I always capture or track at the SR it is destined for.
bouldersound, post: 429335, member: 38959 wrote: A rendered file has not gone through any conversions other than the DAW processing to produce a stereo digital stream which is saved as a file.
Hmmm....I don't know... I mean, I understand that that's what's supposed to happen...
But I can remember back when I was using Sonar PE 8, the "rendered" mixes always sounded slightly different to me than the original project file as it played in real time within the timeline... meaning, as I was mixing and listening within the project file, I could never seem to get Sonar to render it so it sounded exactly the same as what I was hearing while mixing.
And, I wasn't dithering, up or down sampling, or doing anything other than to simply output the mix to a .wav file of the same SR and Bit res as that of the project settings.
But that mixed/rendered file, in comparison to the project audio playing real-time, always had a sort of "smear" to it that I never could figure out. I even called Cakewalk tech support about it at the time, and they said they had received similar complaints about that particular issue and they were "looking into it", although they were leaning towards blaming Windows for it (no great shock there, everyone always likes to blame everyone else).
I don't know if they ever did end up doing anything about it or not - by then I'd had a chance to try Samplitude, and it worked flawlessly on my system... what I mixed is what I heard when I opened the mixed stereo file. There were no surprises.
So, I moved over to Samp permanently. It became a no-brainer choice, at least for me.
So yes, in theory there's not supposed to be "anything going on" during that process - other than summing and exporting the audio as your final mix/destination file of choice... but are we 100% positive that there isn't something else going on during that ITB process? And if so, couldn't the audio engine of the program be at fault? All audio engines/DAWs are not the same; they can be developed and coded differently from platform to platform.
I'm not trying to be a smart ass, or even contrary about this...
I'm just curious, based on past personal experience.
d.
audiokid, post: 429354, member: 1 wrote: I never notice this. Uncoupling or mixing down through my conversion paths always sounds excellent to me. In fact, I've done null tests, and although I do detect change, it's definitely not a bad thing. :)
And yet objectively it's a degradation of the audio. That's fine, but it would be better if the cause were quantified so it can be applied in a rational way, rather than going analog and back to get it.
audiokid, post: 429354, member: 1 wrote: I would never give a file to anyone that wasn't already tracked at its destination SR. To my ears today, bouncing by anyone, including an ME is a bad move.
Questions I'd be asking: Does the ME bounce or not. Personally, I would never give a file to an ME that had to be bounced. I always capture or track at the SR it is destined to be.
I would have more confidence in a good ME than I would in either you or myself. As I understand it, it's best to do all the processing with the most samples and the lowest noise floor, then convert to the final format. That is the ordinary method. What you're suggesting is extraordinary, and that requires extraordinary evidence. "It sounds better to me" is not evidence, no matter how many times or how loudly you say it.
As I said, I'll keep an open mind, but I'm not going to buy into the idea without evidence, especially when conventional methods are well proven and work spectacularly well for me.
a lot can get lost when we think too analytically. in theory, digital audio captured (A/D) once and rendered (D/A) once should be the best/purest representation. but digital is not perfect in itself. we've talked about this many times. analog is a perfect representation, but it is plagued with artifacts: head bump, noise, wow and flutter, modulation noise. digital is touted as a perfect representation, but the A/D - D/A process introduces artifacts: digital filtering at the top is used to prevent noise, the stair-stepping effect truncates tiny portions of audio with each bit sampled, and there was a mistake in the math in the white paper when 16/44.1 was specified by international engineering institutions... so with both analog and digital, it's close but not perfect in any sense. so in my pov, the idea that we need to keep conversions to a minimum is moot, or antiquated at best.
leaving behind the idea that digital is perfect, purer, or even better sounding, we should focus on what sounds good. i think summing outside of the box and recording the 2-mix to a different capture sounds better than doing it all itb. i wish itb was better... it sure would be more convenient, not to mention more affordable, to put a system together.
last, i see no reason why anyone would record @ 48 when 96 is available. 192 might be overkill but 96 absolutely has it's advantages.
This is a recurring discussion these days. The question I've always asked is: how different are live processing and offline processing in a DAW?
Maybe rendering (exporting) to a file is actually worse because it's done faster than real time? I'd like to challenge a DAW company on this ;)
DonnyThompson, post: 429356, member: 46114 wrote: Hmmm....I don't know... I mean, I understand that that's what's supposed to happen...
But I can remember back when I was using Sonar PE 8, that the "rendered" mixes always sounded slightly different to me than the original project file as it played in real time within the timeline... meaning, as I was mixing and listening within the project file, I could never seem to get Sonar to render it as sounding exactly the same as what I was hearing while mixing.
I remember PT users saying the same thing. It got me worried about Sony Vegas, but after listening carefully I couldn't say there was any difference between the preview and the rendered file. Maybe Sonar did that then, maybe Pro Tools did it, but I never heard it in Vegas. I've already asked if anyone had any evidence that there's any difference between the stream and the file and have gotten nothing so far. I hear no difference so without evidence to the contrary my conclusion is that there is no difference.
bouldersound, post: 429363, member: 38959 wrote: I remember PT users saying the same thing. It got me worried about Sony Vegas, but after listening carefully I couldn't say there was any difference between the preview and the rendered file. Maybe Sonar did that then, maybe Pro Tools did it, but I never heard it in Vegas. I've already asked if anyone had any evidence that there's any difference between the stream and the file and have gotten nothing so far. I hear no difference so without evidence to the contrary my conclusion is that there is no difference.
you're asking for empirical evidence. on paper, how does it compare? but the listening experience is anecdotal, subjective and different for each subject.
there are many times where i saw specs that indicated something would sound good, but when i heard it, it sounded like doodie. ya' gotta use yer ears, kids!
audiokid, post: 429290, member: 1 wrote: I should confirm, I don't always track at 44.1, but I definitely appreciate the sound of better converters' 44.1 conversion compared to others at the same lower SR. As an example, Lavry Blacks or Prism (two products I am familiar with and use) sound better @ 44.1 than an older RME FF800 at 44.1. So, I usually use these products tracking acoustic work on a laptop that I know runs better at lower SR. I use less CPU and HD space at 44.1. I trust my remote system better at 44.1 and love the sound of my conversion using good converters.
When at 96k, the difference between them is not so noticeable. So, I choose Lavry or Prism for sessions I am doing at 44.1. Better converters appear to sound better at lower SR.
As Bos has pointed out many times, better converters have better circuitry. Maybe he will chime in on this.
There are a lot of factors in play here. It's to be expected that higher-end (and therefore higher-priced) converters will sound better than low- or medium-end converters at any sampling rate, because the design, components and production quality are usually better. I've told the story several times of a contract audio interface design I spent the best part of a thousand hours designing, testing and getting to sound the way I wanted, only for the bean-counters at the production end of things to substitute cheaper components, circuit board and power supplies. As a result, the commercial versions sounded nothing like what I had painstakingly designed, and the irony was that about the only thing they did not change was the A-D converter chip. I disowned the final product and did not allow my name to be associated with it. It goes to show that it's not just the A-D chip but everything in the design that contributes to the sound.
Back to tracking rates... [url="http://recording.or…"]Here[/url]'s an RO thread about tracking and mixing using high sampling rates and only converting to the target rate (usually 44.1 if going to CD) when capturing the 2-bus mix. Note that it's all at 24-bit. The ME does the final dither to 16-bit after all the level adjustment, which is fine, but, like Chris, I rarely trust an ME to down-sample.
My theory about why many well-known studios and recording engineers opt to track at the destination rate rather than higher is that they have tried higher rates with a subsequent sample-rate converter (SRC) and found the result no better than tracking at the destination rate. This could well be because shortcomings in the software SRCs mask the improvements gained by tracking and mixing at a higher rate. Use of the two-box mix process bypasses the SRC and so maintains the advantage of tracking and mixing at the higher rates. In this method, the top octave is above the target frequency range, and so mix problems like high-frequency phase swirling are removed by the anti-aliasing filters in the 2-bus target-rate capture process.
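To make the SRC step concrete, here's a rough Python sketch (scipy's `resample_poly` assumed; the signals are purely illustrative) of what a software 96k-to-44.1k conversion does. The polyphase filter band-limits the top octave before decimating, which is the same job the anti-aliasing filter does in a target-rate 2-bus capture:

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_lo = 96000, 44100            # tracking rate -> target rate
t = np.arange(fs_hi) / fs_hi           # one second at the tracking rate
# 30 kHz content: representable at 96k, but above Nyquist for 44.1k
x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 30000 * t)

# 44100/96000 reduces to 147/320; resample_poly applies an
# anti-aliasing filter before decimating, so the 30 kHz component
# is removed rather than folded down into the audible band
y = resample_poly(x, 147, 320)

print(len(x), len(y))                  # 96000 -> 44100 samples
```

Without that filter, the 30 kHz component would fold down to 14.1 kHz in the 44.1k file, which is (roughly) the "phase swirling removed by the anti-aliasing filters" point above.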
Kurt Foster, post: 429365, member: 7836 wrote: you're asking for empirical evidence. on paper, how does it compare? but the listening experience is anecdotal, subjective and different for each subject.
there are many times where i saw specs that indicated something would sound good, but when i heard it, it sounded like doodie. ya' gotta use yer ears, kids!
I'm not asking for empirical evidence that it sounds better, I'm asking for empirical evidence that it's different at all, and which is objectively less degraded. Once that is provided I'll be more open to spending time listening.
If someone came on the forum and claimed that sacrificing a chicken and splattering the blood all over your gear made it sound better, would you try it right away or wait for scientific support for the claim? Real time "decoupled" rendering would substantially impede my process and I've seen not one bit of evidence supporting it, so I'll decline to try it for now.
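In case anyone wants to gather that evidence, the usual tool is a null test: invert one signal, sum with the other, and see what's left. Here's a Python sketch (numpy only; the "stream" and "render" here are synthetic stand-ins I made up, since in practice you'd load the rendered file and the captured stream, sample-align them, and compare):

```python
import numpy as np

def null_test(a, b, eps=1e-12):
    """Subtract one signal from the other and report the peak residual in dBFS.
    A value at the floor (eps) means the two are sample-identical."""
    n = min(len(a), len(b))                 # allow for trailing-silence padding
    residual = a[:n] - b[:n]
    peak = np.max(np.abs(residual))
    return 20 * np.log10(max(peak, eps))

# demo with synthetic signals standing in for the monitored stream
# and the exported render
fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
stream = 0.5 * np.sin(2 * np.pi * 440 * t)
render_same = stream.copy()                          # bit-identical render
render_off = stream + 1e-4 * rng.standard_normal(fs) # render with a tiny difference

print(f"identical render: {null_test(stream, render_same):6.1f} dBFS")
print(f"different render: {null_test(stream, render_off):6.1f} dBFS")
```

If the residual sits at the floor, the stream and the file are the same data and there's nothing left to argue about; if it doesn't, you have something measurable to listen to.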
I recorded at 24/44 for years, and it's only two years ago that I started to test 24/96. For the last year I've used 24/96 for every project and I'm very happy with the results.
I think that recording, mixing and mastering at 96 helps retain the quality of the audio and allows all the processing to be done at higher resolution.
Do my mixes sound better because I'm getting better or because of the 96kHz? One thing's for sure, I'm not taking any chances ;)
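One concrete, measurable advantage of processing at a higher rate: nonlinear processing (clipping, saturation, some compressors) generates harmonics above Nyquist, and anything above Nyquist folds back into the audible band as aliasing. A Python sketch (scipy assumed; the 10 kHz tone and the hard clip are just my illustrative stand-ins for a "plugin") comparing clipping at 44.1k to clipping at 4x that rate and band-limiting back down:

```python
import numpy as np
from scipy.signal import resample_poly

fs, f0 = 44100, 10000   # base rate; test tone whose 3rd harmonic (30 kHz) > Nyquist

def clipped_tone(rate):
    """One second of a hard-clipped f0 sine at the given rate."""
    t = np.arange(rate) / rate
    return np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)

def level_at(x, rate, freq):
    """Spectral magnitude at a given frequency (1 Hz bins for a 1 s signal)."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1.0 / rate)
    return spec[np.argmin(np.abs(f - freq))]

# clip at 44.1k: the 30 kHz harmonic folds down to 44100 - 30000 = 14100 Hz
naive = clipped_tone(fs)

# clip at 4x (176.4k), then filter and decimate back: the harmonic is
# removed by the anti-aliasing filter instead of folding into the mix
oversampled = resample_poly(clipped_tone(4 * fs), 1, 4)

fund = level_at(naive, fs, f0)
print("alias at 14.1 kHz, clipped at 44.1k: ",
      20 * np.log10(level_at(naive, fs, 14100) / fund), "dB")
print("alias at 14.1 kHz, clipped at 176.4k:",
      20 * np.log10(level_at(oversampled, fs, 14100) /
                    level_at(oversampled, fs, f0)), "dB")
```

The folded 14.1 kHz component is not harmonically related to the 10 kHz tone, which is why aliasing from in-the-box nonlinear processing can sound subtly wrong even when it measures low; running the nonlinear stages at 96k (or oversampling inside the plugin) pushes that problem out of the audible band.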