kmetal pcrecord et al

Ok,

I figured I'd start this, hopefully in the right place. At this point we have totally confused poor Jonathon and cast doubts on his choice :).

Anyhow, as I said, I am NOT an expert. I do know some things I have discovered through hands-on learning, and from others.

Here is part of a discussion over on UAD Forums, for and against the comparison. Post 11 by slamthecrank covers the reality eloquently :).

https://uadforum.com/unrest-department/41754-dsp-versus-native-cpu-2.html

I will try to find a good explanation of using the virtual channels instead of monitoring direct. Suffice it to say you can run things in and out, and I do; I even run a SUM system from the 8 outs into 2 channels of the Unison preamps, giving you more or less a separate mastering system.

Console 2, using Thunderbolt with the latest version, even balances plugins across multiple DSPs if available. It's about being dedicated, stable and near zero latency when it really counts :). That's my take, FWIW.

Cheers,

Tony


kmetal Thu, 09/19/2019 - 10:12

Cool link, it's gonna take me a bit to read it all. I'm no expert either when it comes to EE and coding etc.

My biggest question is why VSTis aren't made (yet) for DSP. Or even a dedicated rack module that can run VSTis? Digitally controlled analog is finally arriving from companies like McDSP, who have a rack unit full of analog components controlled by a plugin interface. Units like the Bricasti, Eleven Rack and Eventide H9000 are all external digital processors, and I know the Eleven Rack and H9000 can be controlled from the computer.

It's this lack of VSTi support that, for me, makes current DSP only half of a solution, with arguably more music being made on virtual instruments than on "real" instruments, acoustic or electronic.

I'm guessing it has to do with processing power and coding..

It's crazy to me that a $30k Pro Tools HDX rig would still be in the same boat as a Behringer and a laptop when it comes to VSTi latency.

This is the main area where CPU outperforms DSP, if for no other reason than you can freeze tracks in native. While a $600 8-core slave with 64GB of memory is very powerful, it's still subject to the DAW latency buffers and interface drivers, so even there you're possibly freezing tracks. My solution is a powerful main computer which is essentially a tape machine, handling edits, tracks and levels but very little processing. It seems to me this is the best chance at keeping a low buffer through the entire project for the slaves, which handle effects and VSTis. To me it's the closest it gets to an analog workflow, albeit still requiring multiple computers and VEP... I'd love a VSTi PCIe card instead!
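For anyone curious, the buffer latency mentioned above is easy to estimate: one buffer's worth of delay is the buffer size divided by the sample rate. A minimal sketch in Python (the buffer size and rate are just example numbers, not measurements from any particular rig):

```python
# Estimate DAW buffer latency. Numbers here are examples, not measurements.
def buffer_latency_ms(buffer_size, sample_rate):
    """One buffer's worth of delay, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

one_way = buffer_latency_ms(64, 44100)      # a 64-sample buffer at 44.1 kHz
round_trip = 2 * one_way                    # input buffer + output buffer
print(round(one_way, 2))     # 1.45
print(round(round_trip, 2))  # 2.9
```

Real interfaces add AD/DA and driver overhead on top of this, which is why measured round trips come out higher than the raw buffer math.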

@Makzimia can you add UAD plugins to a VSTi while tracking it? I'm curious.

Maybe Boswell could chime in here to highlight some of the differences or complexity involved in native vs DSP coding, and the differences between the SHARC, ARM and FPGA chips which seem to be the prevailing chips used.

This is an intriguing topic.

pcrecord Thu, 09/19/2019 - 10:51

kmetal, post: 462199, member: 37533 wrote: VSTis aren't made (yet) for DSP?

Virtual instruments running off a chip already exist in hardware keyboards.. ;)
But the need for large sample storage is the limitation. With Native Instruments, one piano sound can take more than 1GB of HD space.. Putting that on a chip might be hard to do.
Well, if they start to build interfaces with large internal storage like SSDs, we may get there..
Just thinking out loud ;)

Boswell Thu, 09/19/2019 - 11:15

kmetal, post: 462199, member: 37533 wrote: Maybe Boswell could chime in here to highlight some of the differences or complexity involved in native vs DSP coding, and the differences between the SHARC, ARM and FPGA chips which seem to be the prevailing chips used. This is an intriguing topic.

The big difference between DSP devices (e.g. SHARC and TMS320xx) and CPU chips (ARM, Intel) is that, broadly speaking, DSP devices run multiple processes in parallel, whereas CPUs run sequentially. That means the number of tasks a DSP can run is well determined, but the tasks all take the same time. If you have more to do, you add extra DSPs. In contrast, a CPU can only run one process at a time, so in a given time it will run a certain number of processes. If you can be flexible about how long it takes, you can run more processes.

Most modern computer chips (Intel, AMD etc) have multiple CPU cores inside that can run at the same time (concurrently), but not necessarily in 1/Nth of the time for one core, as they usually have to queue for common resources such as memory and I/O.

The interesting case is the FPGA, where a lot of my design time goes. These can be programmed to run parts of a task in parallel and parts as sequential processes (like CPUs). Larger FPGAs often have multiple dedicated DSP blocks on the chip that the programmable part of the FPGA can make use of, so they make very flexible solutions to the multiple-processing problem. However, getting them to work efficiently in this way can be a big task.
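Boswell's parallel-versus-sequential point can be caricatured in a few lines of Python. This is a toy model only, not real scheduler or hardware behaviour; the slot count and per-task cost are invented for illustration:

```python
BLOCK_PERIOD_MS = 1.45  # one 64-sample block at 44.1 kHz

def dsp_block_time(num_tasks, slots_per_chip=8):
    # Parallel: per-block time is constant; more tasks just need more chips.
    chips_needed = -(-num_tasks // slots_per_chip)  # ceiling division
    return BLOCK_PERIOD_MS, chips_needed

def cpu_block_time(num_tasks, task_cost_ms=0.2):
    # Sequential: each extra task adds to the time needed per block.
    return round(num_tasks * task_cost_ms, 2)

print(dsp_block_time(12))  # (1.45, 2): fixed time, but a second chip is needed
print(cpu_block_time(12))  # 2.4: blows past the 1.45 ms block deadline
```

The DSP side never gets faster or slower as tasks are added; it just runs out of slots. The CPU side degrades gracefully but unpredictably, which is the trade-off the thread keeps circling.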

Tony Carpenter Thu, 09/19/2019 - 12:41

kmetal Kyle, you can send to a pair of virtual channels from the likes of Kontakt and most good standalone VSTis. With the correct routing into a return channel on the DAW, effects on the Console fader input should work, yes, using monitor outs or virtual outs and multiple physical outputs.

Boswell thank you! I had read something similar but couldn't explain it that well :).

pcrecord Marco, storage in SSDs is happening with the Korg Kronos too. I love having my really nice keyboard in the Roland Fantom X8 or my Montage 7 and pulling up a multi in Kontakt.

kmetal Thu, 09/19/2019 - 15:25

pcrecord, post: 462200, member: 46460 wrote: Virtual instruments running off a chip already exist in hardware keyboards.. ;)
But the need for large sample storage is the limitation. With Native Instruments, one piano sound can take more than 1GB of HD space.. Putting that on a chip might be hard to do.
Well, if they start to build interfaces with large internal storage like SSDs, we may get there..
Just thinking out loud ;)

Good point, hardware keyboards.

An interface with external or internal storage would be awesome for VSTis! You could even have a chassis with some RAM in it too, maybe?? Then maybe the DSP chip could load the VSTi GUI and controls just like any other plugin, leaving the drive and RAM to do the lifting that ROM used to do? From what I've read, a lot of sample players load samples into RAM; they don't generally stream from the drive, but a fast drive decreases load time.
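The load-into-RAM behaviour described above can be sketched roughly like this. The class name, preload size and sample length are all hypothetical, just to show the common idea of keeping the start of each sample resident in RAM so playback can begin instantly while the remainder streams from disk:

```python
PRELOAD_FRAMES = 32768  # hypothetical per-sample preload size

class SampleVoice:
    """Toy model of a disk-streaming sampler voice."""
    def __init__(self, sample_frames):
        self.preload = sample_frames[:PRELOAD_FRAMES]  # held in RAM
        self.rest = sample_frames[PRELOAD_FRAMES:]     # streamed on demand

    def ram_frames(self):
        return len(self.preload)

piano = list(range(100_000))  # stand-in for a long audio sample
voice = SampleVoice(piano)
print(voice.ram_frames())  # 32768 frames resident in RAM
print(len(voice.rest))     # 67232 frames left to stream from disk
```

This is also why a fast drive shortens load times without changing playback: the preload buffers fill faster, but once loaded the RAM does the real-time work.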

It's crazy how little I know about this aspect of stuff I use daily. Coders and EEs are like the modern era's craftsmen.

Boswell, post: 462201, member: 29034 wrote: The big difference between DSP devices (e.g. SHARC and TMS320xx) and CPU chips (ARM, Intel) is that, broadly speaking, DSP devices run multiple processes in parallel, whereas CPUs run sequentially. That means the number of tasks a DSP can run is well determined, but the tasks all take the same time. If you have more to do, you add extra DSPs. In contrast, a CPU can only run one process at a time, so in a given time it will run a certain number of processes. If you can be flexible about how long it takes, you can run more processes.

Most modern computer chips (Intel, AMD etc.) have multiple CPU cores inside that can run at the same time (concurrently), but not necessarily in 1/Nth of the time for one core, as they usually have to queue for common resources such as memory and I/O.

The interesting case is the FPGA, where a lot of my design time goes. These can be programmed to run parts of a task in parallel and parts as sequential processes (like CPUs). Larger FPGAs often have multiple dedicated DSP blocks on the chip that the programmable part of the FPGA can make use of, so they make very flexible solutions to the multiple-processing problem. However, getting them to work efficiently in this way can be a big task.

Thanks for the clear insight, this stuff is fascinating!

Makzimia, post: 462202, member: 48344 wrote: kmetal Kyle, you can send to a pair of virtual channels from the likes of Kontakt and most good standalone VSTis. With the correct routing into a return channel on the DAW, effects on the Console fader input should work, yes, using monitor outs or virtual outs and multiple physical outputs.

Boswell thank you! I had read something similar but couldn't explain it that well :).

pcrecord Marco, storage in SSDs is happening with the Korg Kronos too. I love having my really nice keyboard in the Roland Fantom X8 or my Montage 7 and pulling up a multi in Kontakt.

Sweet, glad UAD lets you do this with VSTis. Another question for ya... Can you mix and match UAD and native plugins, for instance on a vocal track, or use, like, an amp sim? I'm guessing the latency would be determined by the session buffer; just curious if it can be done like it can with a VSTi.

kmetal Thu, 09/19/2019 - 16:04

Another place I see DSP crashing the party is in studio monitoring. From Barefoot to KRK to Dynaudio, I see a lot of companies incorporating this. They seem to use it for room calibration, crossovers, driver alignment and design. I wonder if some are using it to improve gear designs or as a crutch. Maybe it's making low-cost stuff perform to a higher level than otherwise possible.

One negative is that it does (or can) add latency because the DSP is doing AD/DA; I read this in SOS. The other thing that makes me curious is what effect the conversion has on the audio if the speaker's DSP sample rate is, say, 96k and you're working at a different rate. Would you be hearing accurately if the speaker is up-sampling or down-sampling what's coming out of the DAW/interface?
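To show conceptually what sample-rate conversion inside a speaker (or anywhere in the path) is doing, here is a deliberately naive linear-interpolation resampler. Real SRC uses proper anti-imaging/anti-aliasing filters, so this is an illustration of the idea only, not of production-quality conversion:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive SRC by linear interpolation (illustration only)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

ramp = [0.0, 1.0, 2.0, 3.0]               # toy 4-sample signal at "48 kHz"
up = resample_linear(ramp, 48000, 96000)  # upsample to "96 kHz"
print(len(up))  # 8
print(up[1])    # 0.5 (halfway between the first two source samples)
```

The interpolated samples are guesses at values the original never captured, which is exactly why SRC quality varies between implementations, as discussed later in the thread.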

kmetal Thu, 09/19/2019 - 17:16

bouldersound, post: 462208, member: 38959 wrote: I'm not sure what you mean by DSP doing AD/DA, unless you're talking about digital speaker processing for live sound. I'm pretty sure the UAD type systems run at project settings and don't add any additional conversions.

As far as I know you are correct about UAD.

I was referring specifically to speakers that employ internal DSP for their crossovers. If I understand it, there is an internal AD/DA conversion that happens here. I could try to dig up the SOS article mentioning that the DSP in the speaker added a few ms of latency due to its AD/DA. This would be a speaker without a digital connection, no AES etc.

I know Dynaudio has a newish speaker out that uses internal DSP running at 96k. I am curious how, or if, this affects the audio running through it.

My lack of understanding is probably confusing things.

kmetal Thu, 09/19/2019 - 19:11

bouldersound, post: 462208, member: 38959 wrote: I'm pretty sure the UAD type systems run at project settings and don't add any additional conversions.

kmetal, post: 462210, member: 37533 wrote: As far as I know you are correct about UAD.

Well, after reading the rest of the thread that Tony linked, it was stated by someone (flippy floppy) that "most of the UA plugins are upsampled to 192 and running in realtime".

If this is true, it could be part of why the latency is so low?

It's also interesting to me, just as with the speaker DSP, any time there is SRC going on in the path. Eric Valentine has been putting out great vids lately. One of them has a part where he re-converted audio 20x through his AD/DA (Lynx, I believe) and compared it with the original. This did not include SRC. I could not hear a difference on my phone, which I'm pretty used to and have guessed well with on preamp shootouts etc.

I'm not here claiming the SRC is bad or even noticeable; rather, I'm curious why it takes place. Some SRC is better than others, as Boz and Chris have demonstrated with a separate capture system, which may or may not include summing. I think Chris has used it just for SRC, as do a lot of MEs using decoupled capture systems.

Another interesting thing from the thread about UAD vs native: UAD plugin instances use 100% of their allotted DSP when instantiated, while native plugins do not, at least in the case of Brainworx, which the poster was referring to. The native version's CPU usage increases as each control is used, so a flat, fresh instance uses much less CPU than a fully tweaked one.

Taking these posters at their word, these "facts" are interesting and show more ways that DSP and native differ, or can differ, with certain hardware and software.
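The allocation difference described above (taking the posters at their word) can be caricatured like this; the share and cost figures are invented, purely to contrast the two models:

```python
def dsp_load(instances, share_per_instance=0.125):
    # DSP model: each instance reserves its full allotment when instantiated.
    return instances * share_per_instance

def native_load(enabled_sections, cost_per_section=0.01):
    # Native model (as described for Brainworx): cost grows with what's in use.
    return len(enabled_sections) * cost_per_section

print(dsp_load(4))                  # 0.5: half a chip, tweaked or not
print(native_load([]))              # 0.0: a flat, fresh instance is nearly free
print(native_load(["eq", "comp"]))  # 0.02: only active sections add load
```

The up-front reservation is what makes DSP load predictable (you always know how many instances fit), while the native model trades that predictability for efficiency on lightly used instances.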

kmetal Thu, 09/19/2019 - 19:16

bouldersound, post: 462211, member: 38959 wrote: That would make sense. High sample rates have low latency, if that helps.

Right. My concern is what am I hearing when my session is 192 or 44.1 and the speaker DSP is up/down converting. Do I even hear my audio at 192k? I understand 192k is abnormally high at this point in time for regular day-to-day work, but when trying to decipher, say, the differences between two converters in a mastering or archive situation, the additional speaker DSP conversion could possibly mess with things?

Again, for practical use it might be a non-issue, but I'm interested in the technical and theoretical too.
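On the "high sample rates have low latency" point: a delay that is fixed in samples gets shorter in milliseconds as the sample rate rises, which is one plausible reason DSP systems run internally at 96k or 192k. A quick sketch (the 128-sample delay is an arbitrary example, not a figure from any real unit):

```python
def delay_ms(delay_samples, sample_rate):
    """Convert a delay measured in samples to milliseconds."""
    return delay_samples / sample_rate * 1000.0

# The same 128-sample internal delay at three rates:
for rate in (44100, 96000, 192000):
    print(rate, round(delay_ms(128, rate), 2))
# 44100 2.9
# 96000 1.33
# 192000 0.67
```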

bouldersound Thu, 09/19/2019 - 19:53

As I understand it, all modern plugins upsample, so it's no surprise that UAD plugins also do it. I think the main reason hosted plugins add less latency to input monitoring is that they avoid the round trip to the computer, and the DSP is optimized for them.

As for speaker DSP, if it's connected by analog it's getting converted, though not necessarily in the same way as a purely digital SRC. If it's a digital connection I'd have to look at the specs of the speaker in question.

That's interesting about hosted plugins using 100% of their allotted processing, but it may just be that 100% of what they might use is reserved for them while with native plugins the processing is allotted in smaller increments as needed.

KurtFoster Thu, 09/19/2019 - 19:59

bouldersound, post: 462215, member: 38959 wrote: As I understand it, all modern plugins upsample, so it's no surprise that UAD plugins also do it. I think the main reason hosted plugins add less latency to input monitoring is that they avoid the round trip to the computer, and the DSP is optimized for them.

Boswell, post: 462201, member: 29034 wrote: The big difference between DSP devices (e.g. SHARC and TMS320xx) and CPU chips (ARM, Intel) is that, broadly speaking, DSP devices run multiple processes in parallel, whereas CPUs run sequentially. That means the number of tasks a DSP can run is well determined, but the tasks all take the same time. If you have more to do, you add extra DSPs. In contrast, a CPU can only run one process at a time, so in a given time it will run a certain number of processes. If you can be flexible about how long it takes, you can run more processes.

re read Boswell's post .....

Tony Carpenter Fri, 09/20/2019 - 00:43

kmetal Hi Kyle,

A lot of what you're asking has been answered :). You just need to realise it's all about the choices you want and can make. There will be trade-offs as you go outside the UAD environment, of course.

My approach is to always use as close as possible to an analog chain during any recording. Anything else becomes post recording, production choices.

I've used and still use sampler keyboards (the Roland Fantom has 256MB RAM for samples), and owned a fully loaded Akai S6000 back in the day. I have an NI Maschine. But honestly, I'm mostly just a Kontakt user on top of my live keyboards.

With the advent of huge libraries and articulation control etc., the only limits are your imagination :).

Tony

kmetal Fri, 09/20/2019 - 16:18

Interesting stuff, everyone.

pcrecord, post: 462221, member: 46460 wrote: You know what, I didn't remember, but there are some hardware hosts for VSTis.
A quick search and I found this one: http://www.smproaudio.com/index.php/en/products/v-machines/v-machine

This is super cool. It's a great idea for anyone using a DSP-based system with VSTis who doesn't want to build a slave PC.