Hey,
If I'm running a session with a 64-sample buffer and I instantiate a plugin with a 64-sample latency spec on my live input, does that mean the latency I experience would be based on 128 samples? Or does the plugin latency get "absorbed" by the session buffer, so I would only experience the 64 samples of latency?
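To put rough numbers on it, here's a quick sketch of the two scenarios, assuming 44.1 kHz (the sample rate is just an example, not necessarily my actual session rate):

```python
# Convert the two possible latency figures from samples to milliseconds.
# 44.1 kHz is an assumed example rate; substitute the session's real rate.
SAMPLE_RATE = 44100  # Hz

def samples_to_ms(samples, rate=SAMPLE_RATE):
    return 1000.0 * samples / rate

print(f"buffer only (64 samples):    {samples_to_ms(64):.2f} ms")   # ~1.45 ms
print(f"buffer + plugin (128):       {samples_to_ms(128):.2f} ms")  # ~2.90 ms
```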

Thanks.

Comments

Boswell Mon, 08/10/2020 - 00:42

It depends on the DAW. Most have latency compensation on live inputs in terms of lining up with previously-recorded material, but not all can accommodate arbitrary amounts of additional plug-in delays on the live route.

Are you talking about temporal alignment with recorded tracks, or are you referring to monitor outputs?

kmetal Mon, 08/10/2020 - 11:12

bouldersound, post: 465173, member: 38959 wrote: I would bet it gets added.

This is what I always thought, but I never confirmed it was true.

Boswell, post: 465175, member: 29034 wrote:
Are you talking about temporal alignment with recorded tracks, or are you referring to monitor outputs?

I was referring to latency on the monitoring outputs. I'm trying to grasp how RTL is affected by plugins, and how it all adds up. I'm also unsure what ultimately determines the RTL when monitoring with effects: the interface driver? Or do plugins add a fixed number of milliseconds? Since different interfaces offer different latency specs at the same DAW buffer size, I'm not sure whether all interfaces would incur the same latency for a particular plugin.
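My working assumption (not something from any interface's documentation) is that the driver's buffers set a floor in samples, reported plugin latency stacks on top, and everything is scaled by the sample rate, with a driver-specific "safety" overhead explaining why interfaces differ at the same buffer size:

```python
# Sketch of how I assume RTL adds up. The safety-sample figure is
# hypothetical and driver-specific; it's why two interfaces at the same
# DAW buffer size can report different round-trip numbers.
def round_trip_ms(buffer_samples, plugin_samples, safety_samples, rate):
    total = (2 * buffer_samples   # input buffer + output buffer
             + safety_samples     # interface/driver overhead (varies)
             + plugin_samples)    # sum of reported plugin latencies
    return 1000.0 * total / rate

# Hypothetical numbers for illustration:
print(f"{round_trip_ms(64, 64, 32, 44100):.2f} ms")  # ~5.08 ms
```

If that model is right, a plugin that reports its latency in samples costs fewer milliseconds at higher sample rates, while the driver overhead is whatever the interface maker achieved.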

I'm also unsure how a DAW adds up the plugins. If I have 2 channels with 3 plugins each, is the latency/buffer the sum of all 6 instances, or just whatever channel has the largest buffer? And how does a parallel processing chain get tallied up?
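If the usual delay-compensation rule holds (a guess on my part: latencies in series add, and parallel paths get padded to the slowest one), the tally would look something like this:

```python
# Sketch of the PDC rule I *think* applies: plugins in series add;
# where parallel channels meet, the longest chain sets the figure and
# shorter paths get padded to match.
def chain_latency(plugin_latencies):
    return sum(plugin_latencies)                   # serial: latencies add

def merge_latency(chains):
    return max(chain_latency(c) for c in chains)   # parallel: longest wins

channel_a = [64, 128, 0]   # hypothetical plugin latencies, in samples
channel_b = [32, 32, 32]

print(chain_latency(channel_a))                # 192
print(merge_latency([channel_a, channel_b]))   # 192, not the sum of all 6
```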

The goal is to optimize latency for native real-time processing as much as possible on live input tracks. I'm just unclear how the computer arrives at the total RTL with real-time effects and various routing schemes.

Then I have to determine what's better: instantiating the live plugins on the channel on the main PC, or routing them through a VEP bus to/from the slave PC.

I *think* I've thought of a workaround for master-bus processing latency: making two "sub-masters" that feed the master bus on the main PC. One bus with no effects for the live tracks; one fed by the rest of the recorded tracks, with my typical master-bus chain instantiated on it. This way the latency penalty would be only the plugins applied to the live channel itself, while the rest would (hopefully) be handled by the delay compensation.
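To sanity-check that idea, here's a sketch of the two paths, assuming (and it is only an assumption) that the DAW pads only the recorded-track path. All plugin figures are hypothetical:

```python
# Two sub-masters feeding a clean master bus. The hope: PDC delays only
# the recorded material, and the live feed pays just its own channel's
# plugin latency. Numbers are hypothetical, in samples.
live_channel = [64]                  # effects I actually play through
recorded_master_chain = [512, 1024]  # typical master-bus processing

live_path = sum(live_channel)                 # 64 samples
recorded_path = sum(recorded_master_chain)    # 1536 samples

# If it works as hoped, the recorded path absorbs the difference and my
# monitor penalty is just the live path.
print(f"live penalty: {live_path} samples; "
      f"PDC offset: {recorded_path - live_path} samples")
```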

Boswell Mon, 08/10/2020 - 13:23

Different DAWs have their own ways of dealing with live monitoring. With the possible exception of Ableton, using a DAW with live effects is not usually recommended, principally because there is no easy design balance between making it bearable for live monitoring and yet comprehensive in time alignment if there is no live return route.

Annoyingly, I've grown over time to become increasingly sensitive to latency in monitor feeds, and it's one of the reasons I still run a large array of hardware processing boxes to keep the delay to a very few milliseconds. With stage performances, I knew that I could play happily 8-10 feet from a direct monitor speaker, so what's the problem with 8-10 milliseconds of latency when wearing headphones? It's a question that I've never heard a convincing answer to, possibly because it involves several different disciplines.
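For reference, the feet-to-milliseconds equivalence falls out of the speed of sound, roughly 1125 ft/s at room temperature, i.e. about a foot per millisecond:

```python
# Sound covers roughly one foot per millisecond, so monitor distance
# maps almost directly onto delay in milliseconds.
SPEED_OF_SOUND_FT_PER_S = 1125.0  # ~343 m/s at ~20 degrees C

def feet_to_ms(feet):
    return 1000.0 * feet / SPEED_OF_SOUND_FT_PER_S

print(f"8 ft  -> {feet_to_ms(8):.1f} ms")   # ~7.1 ms
print(f"10 ft -> {feet_to_ms(10):.1f} ms")  # ~8.9 ms
```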

kmetal Mon, 08/10/2020 - 13:46

Boswell, post: 465190, member: 29034 wrote: Different DAWs have their own ways of dealing with live monitoring. With the possible exception of Ableton, using a DAW with live effects is not usually recommended...

I'll have to inquire with Magix how it works. For now I plan on using zero-latency plugins whenever possible, I guess. My Eleven Rack allows sending a dry and a processed signal, so that's an option too. There are certain effects like VocalSynth that are integral to the sound, so in those cases I'll have to tolerate latency.

I'm really trying to avoid reliance on dedicated DSP hardware due to cost and obsolescence.

I'm fortunately not overly sensitive to latency, and thankfully VSTi monitoring only incurs the output/DA latency. I'll have to experiment with different routings to see what minimizes latency. I should also test the various DAWs I have to see the differences.

I dislike headphones, so any latency I incur is in addition to the time it takes for sound to travel between the speakers and my ears. Now that you bring it up, I'm wondering if maybe using speakers has affected my sensitivity (or lack thereof) to latency.

Do you know what makes Ableton unique as far as being decent for real-time effect monitoring?