From researching the subject, mostly from information on this site, I had the notion that DAW latency was determined by the efficiency of the hardware's software drivers. In other words, latency was due to hardware limitations, and software had little if any effect on it.
Then I read this:
http://mixonline.co…
In particular:
"Let's look at native systems first. With the increased CPU power available today, mixing a project entirely with native plugins has come closer to being a reality. But soft synths, video playback, and applications such as Reason and GigaStudio all compete for the same processor cycles that plugins do. As more demands are placed on the system, even the fastest CPU will eventually buckle.
NATIVE SYSTEMS AND LATENCY
The biggest drawback to host-based systems is latency. Software companies seem to have conveniently missed this point. Every manufacturer I speak with tells me that latency is not a big issue, yet it is the Number One concern of musicians and producers. Even the lowest buffer settings yield 1.5ms to 3ms delays and are more of a fantasy than a reality when it comes to large files. The song that starts out with 16 tracks and a somewhat livable 3ms delay setting will die an ugly death when it reaches 64 tracks of audio, 128 plugins and 16 virtual instruments. In order to record a session with this file, all of the tracks would have to be bounced to disk. This is not the DAW dream. Although many interfaces feature “no latency” inputs, these are generally limited to a stereo pair, with the live inputs combined with the stereo mix from the sequencer and fed directly to the interface's output. When combined with an outboard mixer and reverbs, this setup can salvage a small overdub session, but this is definitely not an alternative for pro applications."
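For anyone wanting to check the article's numbers: buffer latency is just buffer size divided by sample rate. A quick sketch (Python; the 44.1 kHz rate is my assumption, not from the article):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# At 44.1 kHz, a 64-sample buffer is ~1.45 ms and a 128-sample buffer
# is ~2.9 ms, which matches the 1.5 ms to 3 ms range quoted above.
print(round(buffer_latency_ms(64, 44100), 2))
print(round(buffer_latency_ms(128, 44100), 2))
```

Note this is only the buffer's contribution; converter and driver overhead add a bit more in practice.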
Now it sounds as though the author is saying Pro Tools has a leg up on the competition in the area of latency. He goes on to say that hardware claiming to deliver zero-latency monitoring is limited to a stereo pair. I thought about that and wasn't concerned, because on my system what I monitor is a stereo pair coming out of Sonar. I imagine he is saying Pro Tools has the advantage if you were mixing externally? Or maybe if you have a few different mixes going to different places for monitoring?
Anyway, I was just wondering if I am reading this right, and what you guys think on the subject.
Comments
Originally posted by tundrkys:
One more thing, I want to see if I am reading this right. If he is saying Pro Tools gives the user an advantage because it has the extra processing power to run native plug-ins and effects as well as having dedicated hardware, how would a system of two computers compare, one running VSTis and such and the other dedicated to audio? I am thinking I could build ten systems for the cost of one HD system. But then I'd have the problem of synching them all together. Oh well, just a thought.
Latency is an over-rated problem.
- Computers and soundcards these days are plenty powerful to give you usable latency during production.
- The only time you'll run into the problem is when you are recording the MIDI. By the time you are piling on plugins during the mixing part of the production, you'll have most of the stuff you need to be 'live' taken care of.
- Software these days has tools that let you 'freeze' processing, i.e. render a VSTi track (and regular audio tracks too, in Logic Audio) to disk in a convenient manner. Then the track is no longer using the power it was, and you can bring down the latency.
- Yes, you can buy multiple computers and chain them together with sample-accurate sync. It's called System Link and it's a feature of Cubase. They even sell a cheaper 'add-on' program that just runs VSTis and VST effects, so you don't need to buy the full-blown sequencer for each PC.
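The 'freeze' idea in the list above boils down to rendering a track through its plugin chain once, then playing back the rendered file instead of running the plugins live. A minimal conceptual sketch (all names invented for illustration, not any DAW's actual API):

```python
def freeze_track(track, plugin_chain):
    """Render a track through its plugins once; afterwards playback just
    reads the rendered buffer, costing no plugin CPU per playback pass."""
    rendered = list(track)
    for plugin in plugin_chain:
        rendered = plugin(rendered)
    return rendered  # a real DAW writes this to disk

# Hypothetical plugin: a simple gain stage. Once frozen, the plugin
# chain can be bypassed and the CPU reclaimed for lower buffer sizes.
gain_half = lambda samples: [s * 0.5 for s in samples]
frozen = freeze_track([1.0, -1.0, 0.5], [gain_half])
print(frozen)
```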
I agree. I normally record vocals and midi at 512 samples buffer latency, and mix at 1024.
At 512, the vocalist hears a noticeable delay, but I find it helps him or her to hear him/herself, rather than hearing an instant signal exactly in sync with the resonances in his head caused by generating his voice. Or maybe I'm just nuts. But when I try to record at 1024, they complain of echo, so it works for me.
Once you're done recording, and you've lined up all out-of-sync audio (by removing the empty 10ms at the beginning of the file), latency is no longer a concern. It just manifests itself as a delay between hitting the play button and the song starting to play.
mitz
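Lining up the out-of-sync audio mitz describes amounts to dropping the leading silence equal to the recording latency. A minimal sketch with NumPy (the 512-sample buffer and 44.1 kHz rate come from the post; a 512-sample buffer at 44.1 kHz is roughly the "empty 10ms" mentioned):

```python
import numpy as np

def trim_latency(audio, latency_samples):
    """Drop the leading samples introduced by recording latency so the
    take lines up with the rest of the project."""
    return audio[latency_samples:]

recorded = np.zeros(44100)            # one second of recorded audio
aligned = trim_latency(recorded, 512)  # 512 samples ~= 11.6 ms at 44.1 kHz
print(len(aligned))
```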
Originally posted by mitzelplik:
I agree. I normally record vocals and midi at 512 samples buffer latency, and mix at 1024. At 512, the vocalist hears a noticeable delay, but I find it helps him or her to hear him/herself, rather than hearing an instant signal exactly in sync with the resonances in his head caused by generating his voice. Or maybe I'm just nuts. But when I try to record at 1024, they complain of echo, so it works for me.
Once you're done recording, and you've lined up all out-of-sync audio (by removing the empty 10ms at the beginning of the file), latency is no longer a concern. It just manifests itself as a delay between hitting the play button and the song starting to play.
mitz
A lot of my music is pretty CPU-intensive, and we are usually working on the song while we record it, muting and soloing tracks to make it easier to record the vocals. This means that sub-3ms latency, and also rendering out the song first, are both ruled out.
So, I found it's best if you don't send them audio that's been through the computer at all.
I picked up an outboard compressor to record vocals with, so now I have the Really Nice Preamp/Really Nice Compressor combo. I'm using only one channel for recording the vocals, and the send/return on the preamp is split: one side goes through the compressor and one is left dry. Everything is recorded at 24 bits.
I send the vocalist the compressed version, and don't monitor her at all through the pc. Voila, no echo and a nice compressed headphone mix, so she can hear herself over the music without turning up too loud. This also helps with the headphone bleed.
Mixing between the compressed and the clean channel usually gives me usable vocals too, so there's often no need to do additional processing. Though I've never let that stop me :>
Originally posted by pan:
Originally posted by Jonathan El-Bizri:
Latency is an over-rated problem.
I disagree, latency is a much underestimated problem in modern recording and mixing.
You always have to be aware of it.
n
Would you be more specific? How is it a problem if there are simple solutions to eliminate the issue?
How is latency a problem during the mixing stage at all?
Originally posted by Jonathan El-Bizri:
Originally posted by pan:
Originally posted by Jonathan El-Bizri:
Latency is an over-rated problem.
I disagree, latency is a much underestimated problem in modern recording and mixing.
You always have to be aware of it.
n
Would you be more specific? How is it a problem if there are simple solutions to eliminate the issue?
How is latency a problem during the mixing stage at all?

I'm on the side that says latency is a problem in that you have to be very aware of it, and you have to make sure you are doing what you can to keep it from becoming one.
If you are mixing analog and digital together on an analog board, large digital latencies will mean that the devices that feed the board via analog won't line up with the devices and/or digital tracks that come in from the digital side.
Agreed, but if you're totally digital, most sequencers at least partially do a lot of latency compensation for you. And in the cases where you have to deal with it manually, it's just a matter of shifting a few tracks either backward or forward in time.
Now jitter, OTOH, is a problem much harder to deal with.
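The manual fix described above, shifting a track backward or forward in time, is just a sample offset with silence padding. A hedged sketch with NumPy (the offsets are illustrative, not real measured latencies):

```python
import numpy as np

def shift_track(track, offset_samples):
    """Shift a track later (positive offset) or earlier (negative offset)
    in time, padding with silence so the overall length is preserved."""
    if offset_samples >= 0:
        padded = np.concatenate([np.zeros(offset_samples), track])
        return padded[:len(track)]
    cut = -offset_samples
    return np.concatenate([track[cut:], np.zeros(cut)])

track = np.arange(10, dtype=float)
print(shift_track(track, 2))   # delayed by two samples
print(shift_track(track, -2))  # advanced by two samples
```

In practice the offset would be the measured round-trip latency of whatever analog or digital path the track came in through.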
Originally posted by mitzelplik:
Agreed, but if you're totally digital, most sequencers at least partially do a lot of latency compensation for you. And in the cases where you have to deal with it manually, it's just a matter of shifting a few tracks either backward or forward in time.
Nuendo's latest update includes a real-time mixdown function. This mixes down all tracks and inputs with latency compensation, allowing you to 'render' through external analog gear.
I haven't tried it enough to work out how well it works, but you may want to check it out.