
Roland VS-1880 - transferring individual tracks onto another machine to be mixed and mastered

Discussion in 'Digital Recorders' started by Jonathan Larkin, Sep 9, 2019.

  1. Makzimia

    Makzimia The Minstrel Well-Known Member

    Joined:
    Aug 20, 2014
    Location:
    Hyde, England
    Marco,

    You're confusing him more! LOL. OK, let me explain something. DSP-based plugins are becoming more and more common, either physically on a smaller scale in a pre-made chip, or like the UAD processors, which are used depending on how demanding the physical modelling in a given plugin is at the time. Yes, you can record with the UAD device without a plugin, but why would you when it comes with a number of them free, AND they are superior to a lot of standard plugins? They also work while you're recording through the console with near-zero latency. A virtual instrument still has the ability to outdo a dedicated keyboard's samples or modelling (sound sample size). A DSP-based chipset for plugins is a different setup entirely. There are no shortcuts for audio processing, which is why they charge for it. This will get complex very fast for a learner, and is best in its own topic IMHO :).

    For example, a preamp or channel strip running in Unison on the mic input, or say an EQ and compressor. These can also be fed to an aux channel so the person recording can hear a reverb or delay or whatever, without those being recorded if they don't want them to be. Modern CPUs can handle a lot of grunt, BUT a dedicated modelled plugin can run better outside the CPU. There was a huge discussion about this over on the UAD forum at least once in recent years. I can think of a number of plugins that are native as well as modelled for the UAD devices. I am not an expert by ANY stretch of the imagination.. HOWEVER!! Personal experience with various DAWs and a powerful PC and/or Mac tells me the UAD cards and the UAD Apollos do their job VERY well.

    Regards,

    Tony
     
  2. pcrecord

    pcrecord Quality recording seeker ! Distinguished Member

    Joined:
    Feb 21, 2013
    Location:
    Quebec, Canada
    Home Page:
    Hey Makzimia,
    I first wrote this message because I felt we were pushing the OP to spend more money than he may need to.
    I also wanted to open the discussion to the pros and cons of DSPs, because frankly I know next to nothing about them.
    So I'm glad you stepped in to help me understand better.

    I'm getting there thanks.

    - So if I'm correct, the positive side is that you can make a realtime headphone mix with effects.
    This must be time-consuming if you have a band, but their recording experience would be a lot better.. Nice thing..
    On my side I always use a non-DSP realtime mixer (MixWizard from Focusrite and now TotalMix from RME); except for redirecting a hardware reverb to the headphones, the musicians only hear the raw preamps, volume and pans.

    - What about using the DSPs while mixing? I mean recording raw and doing a round trip to them to mix.
    Can this be done? How hard is it on the computer if you do so? Don't we fear some degradation, or is there none because it stays in the digital domain?

    You are right, this could go in its own thread.. sorry..
     
  3. kmetal

    kmetal Kyle P. Gushue Distinguished Member

    Joined:
    Jul 21, 2009
    Location:
    Boston, Massachusetts
    Home Page:
    You can use DSP effects during mixing and tracking. One advantage of DSP effects is that they don't add any processing load to your computer the way a (native) effect would in GarageBand.

    The other is low latency when tracking. Latency is a delay that happens while audio goes in and out of your computer. If you are tracking a vocal and want to add an effect, you can, but it creates a slight delay between the vocal you're singing and the vocal you hear in your headphones. When this delay is small enough you won't notice it. Both DSP and the (native) plugins in the DAW can run effects with the delay small enough to be negligible, or "realtime" as it's referred to.

    What happens is, as your session grows with more tracks and mix plugins, your computer and/or DSP works harder and harder, and eventually glitches start to happen or your DSP maxes out. To prevent glitches we raise a setting in the DAW called the buffer. This eases the amount of work the computer is doing, but it increases latency when you're tracking. When the latency/delay gets long enough it becomes distracting, and in your headphones it sounds like an echo.
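    To put rough numbers on it (my own ballpark, not anyone's official spec): the delay per buffer is just the buffer size divided by the sample rate, and you pay it roughly once on the way in and once on the way out. Quick Python sketch, ignoring converter and driver overhead, which add a bit more in the real world:

        # Rough sketch: how the DAW buffer setting translates into monitoring delay.
        # Real round-trip latency (RTL) also includes converter and driver overhead,
        # so treat these as best-case numbers.

        def buffer_ms(buffer_samples, sample_rate_hz=44_100):
            """One-way delay contributed by a single buffer, in milliseconds."""
            return buffer_samples / sample_rate_hz * 1000.0

        for buf in (64, 128, 256, 512, 1024):
            one_way = buffer_ms(buf)
            round_trip = 2 * one_way  # one buffer in, one buffer out, roughly
            print(f"{buf:5d} samples -> ~{one_way:4.1f} ms one way, ~{round_trip:5.1f} ms round trip")

    At 64 samples that's under 3 ms round trip, which you won't notice; up around 1024 samples it's pushing 50 ms, which is the echo I'm talking about.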

    There are workarounds to compensate for this in native (non-DSP) systems, but this is where DSP really becomes a useful feature. Since a DSP effect doesn't add any burden to your computer, you don't get any increase in latency no matter how many effects you are running in GarageBand. The caveat is that the DSP can be maxed out once you use enough effects in the mix and/or during tracking. So even though there's no delay you'll hear, there is a limit to how many effects can be used in total.

    Where DSP shines is when you're knee-deep into a project and want to add a new part like a backup vocal or a guitar solo, or maybe fix a flubbed note in your lead vocal. DSP will let you track these new parts with realtime effects that you can hear in your headphones and even record along with the track, like vocal reverb or an echo on your guitar. No matter how hard your computer is working on the tracks you've already recorded, which probably have a bunch of edits and mix plugins, your DSP lets you record with live, realtime effects and no troublesome latency. This is a key use for DSP, because otherwise you would have to record dry, with no effects, or you'd experience an annoying delay in your headphones.

    Modern computers can handle a lot of tracks and mix plugins before latency becomes an issue, and on smaller mixes it may never be an issue. If you have an iMac with an Intel i5 and 16GB of RAM, you can do at least a dozen tracks and a dozen moderate effects before latency would ever start to be a concern.

    A big trade-off with DSP on UA stuff is that you can't record while hearing both DSP effects and GarageBand effects at once on the track you're recording. You can only use your DSP or the GarageBand native effects in "realtime". So if GarageBand has a reverb you love and the Apollo has a preamp emulation you love, you can't use both at once on your live track. You can mix and match in the mix at any time, just not on the track you're recording live.

    -To the obsessed- UA has a fixed 2.2 ms latency for their realtime effects monitored through their Console app. Some plugins do add more latency. Their 2.2 ms fixed latency is lower than the 4 ms (or more) RTL you'd get when monitoring through the DAW. This is UA being lazy with their drivers and creating a key selling point for their DSP, since if you want maximum performance you have to use the Console app and input monitoring on the DAW side.
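    To put that 2.2 ms figure in context, here's a quick sketch of a fixed console-path latency versus DAW monitoring that grows with the buffer. Only the buffers are counted, and the 2.2 ms is just the figure quoted above, not something I've measured:

        # Fixed console-path monitoring vs buffer-dependent DAW monitoring (simplified).
        # The 2.2 ms figure is the one quoted above; the DAW numbers count buffers only,
        # and real RTL is higher once driver and converter overhead are added.

        SAMPLE_RATE = 44_100
        CONSOLE_MS = 2.2  # fixed latency of the hardware/Console monitoring path

        def daw_rtl_ms(buffer_samples):
            return 2 * buffer_samples / SAMPLE_RATE * 1000.0  # in-buffer + out-buffer

        for buf in (64, 128, 256, 512):
            print(f"buffer {buf:4d}: DAW monitoring ~{daw_rtl_ms(buf):5.1f} ms vs Console ~{CONSOLE_MS} ms fixed")

    The point being: the console path stays at the same small number no matter how big the DAW buffer gets, while the DAW monitoring path climbs with the buffer.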

    If I understand it correctly, you can run the DSP and native plugins at once, monitoring via the DAW, but the DSP is then subject to the DAW buffers and latency. Maybe @Makzimia can verify this for me.

    This is a real bummer, and it's why Pro Tools HDX systems really are the most well integrated with regard to realtime processing and ease of use, as well as having the lowest latency of any system.
     
    pcrecord likes this.
  4. kmetal

    kmetal Kyle P. Gushue Distinguished Member

    Joined:
    Jul 21, 2009
    Location:
    Boston, Massachusetts
    Home Page:
    Lol, this probably deserves its own thread. But.. I'll at least try to be brief.

    It's worth noting that if you use VSTis, none of them run on the DSP, and they are subject to the DAW buffer setting. Since it's only a DA conversion, the latency is half of what the RTL would be when monitoring a live input.
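    Rough sketch of that point: a virtual instrument only passes through the output (playback) buffer, so its monitoring delay is about half the round-trip figure you'd see on a live input. Driver/converter overhead is ignored and the 48 kHz session rate is just an assumption for the example:

        # A virtual instrument only goes through the output (DA) side,
        # so its monitoring delay is roughly half the round-trip latency of a live input.
        # Driver/converter overhead ignored; 48 kHz chosen purely for illustration.

        SAMPLE_RATE = 48_000

        def ms(samples):
            return samples / SAMPLE_RATE * 1000.0

        for buf in (64, 128, 256):
            live_rtl = 2 * ms(buf)   # AD buffer in + DA buffer out
            vsti_out = ms(buf)       # DA buffer out only
            print(f"buffer {buf:4d}: live input ~{live_rtl:4.1f} ms, VSTi ~{vsti_out:4.1f} ms")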

    It is also worth noting that UA's SHARC DSP chips are so mature they borderline on old, and are the same basic chip they have been using since around 2003. A lot of newer DSP, like Antelope for instance, uses ARM chips, which are noted to be easier to program and more modern (more powerful?). I suspect part of UA's use of 'old' chips is to avoid pissing users off by making their current UA plugins obsolete, and to save on R&D and manufacturing costs by using old parts and not having to recode their plugins. Though everyone is rarely happy; I have read people sneering at the new Apollo X hexa-core chips as sorta shabby, even though they increase performance by 50% compared to the older chips. I would suspect at some point UA will have to move on from SHARC chips..

    I would say that UA is as hit or miss as any other plugin maker. At least IMHO, not having heard every single plugin. The UA Massive Passive sounds sweeeeet.

    One reason not to use a UA plugin during tracking is so you don't have to fumble around between the Console and the DAW mixer for the same effect, i.e. just instantiate it in the DAW mixer and be done.

    Are you able to add UA plugins to a VSTi during tracking? Can I add a compressor to a virtual piano, for instance?

    Not hating UA, just exploring pros and cons.

    I think this really depends on the system. An Intel/Ryzen 8-core CPU can run hundreds of native plugins and/or thousands of voices of VSTi at 64 and 128 buffer settings. Never mind a master/slave setup.

    From what I've gathered on the forums, native performance has surpassed DSP performance for most common use cases and newer computers.

    To me its saving grace is last-minute overdubs where amp sims or effects are critical. Seems simpler than freezing a bunch of tracks and fiddling with buffers. Unless you can group freeze, so just a "freeze all" except the live inputs, in which case you'd most often be OK.

    I do hate the idea of having a mix full of DSP and not enough power left to run my favorite mic pre emulation. Especially if I'm punching in on a track way after the fact.

    You can save the settings at least, so it's as easy as any other template. You can also print the effects. Not sure if you can route it so you print both the dry and processed sound.

    What I like versus using an external effects unit is that you can save the effects settings within the DAW and modify them. I always hated the cue mix on Apogee because you had to instantiate a plugin in the DAW, so the cue and the DAW had completely different effects chains on them. It's annoying when you can't use the cue effects within the DAW. UA and Antelope allow this, but not all interface DSP does. Never mind the volume spikes when you have a compressor on the track and switch into input monitoring to use the DSP. Soo annoying. Singer comes in, you start tweaking, he decides to overdub, and now your signals have different effects and gain structure between tracking and playback. Like I said earlier, this is where Pro Tools HDX shines with its super tight integration. It's as close to mixing/tracking on a real console as it gets in a DAW right now.

    This can be done. If I understand correctly, the only latency is due to the plugin itself, not the trip through the DSP, since like you said it stays in the digital domain. The DAW's delay compensation handles the latency just like it would for a native plugin.

    However, I do not know if you can freeze tracks that use DSP in order to save DSP. I *think* you have to print the track if you run low on DSP. A "dump to CPU" type command would be nice, to let you move the DSP effect to your RAM/CPU in deep, plugin-heavy mixing. Lol, then they could sell you additional 'UA Native' versions of all the plugins you already own, à la Digidesign....

    So much for brevity... Who's starting the new thread!?
     
    pcrecord likes this.
