
Looks like Antelope is acknowledging the uses of digital trim. They're doing it through a reworking of their DSP chip, as opposed to, I'm guessing, a hardware relay or something on some other designs? Also, something that struck me was the guy being interviewed saying digital trim gets a bad rap because of headroom issues. This is the first time I've heard this, has anyone else heard anything about that?

4:18 in the vid is where they're talking about the Orion.
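For anyone else puzzling over the headroom comment: a digital trim is just a per-sample multiply, so any boost above unity can push already-hot material past digital full scale, where it hard-clips. Here's a minimal sketch of that idea in Python; the function and numbers are mine for illustration, not anything from Antelope's firmware:

```python
import numpy as np

FULL_SCALE = 1.0  # 0 dBFS, normalized

def digital_trim(samples, trim_db):
    """Apply a digital trim and hard-clip anything past full scale."""
    gain = 10 ** (trim_db / 20.0)  # dB -> linear multiplier
    return np.clip(samples * gain, -FULL_SCALE, FULL_SCALE)

# A 1 kHz tone peaking at -3 dBFS:
t = np.arange(48000) / 48000.0
sig = 10 ** (-3 / 20.0) * np.sin(2 * np.pi * 1000 * t)

ok = digital_trim(sig, 2.0)   # peaks at -1 dBFS: clean
bad = digital_trim(sig, 6.0)  # peaks would hit +3 dBFS: clipped

print("clipped samples at +2 dB:", np.sum(np.abs(ok) >= FULL_SCALE))
print("clipped samples at +6 dB:", np.sum(np.abs(bad) >= FULL_SCALE))
```

A cut (negative trim) never clips; the traditional complaint there is that fixed-point attenuation discards low-order bits, though with modern 32-bit internal processing that's mostly academic.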

Comments

kmetal Tue, 04/18/2017 - 14:05

Boswell, post: 449607, member: 29034 wrote: Yeah, I think he's talking about relatively small-scale gain trims to balance up a channel against the others, rather than the 50 - 60dB of gain adjustment that is fitted in microphone pre-amps.

Ok cool.

audiokid, post: 449608, member: 1 wrote: The Goliath looks really cool. These guys are always cutting edge. I sense Antelope Audio is going in the direction of the Apollo.
Hopefully they allow hard bypass to everything.

I had emailed them months back about the Orion; they said the DSP is taken out of the signal path completely when bypassed.

Personally, I like the idea of a slave computer or a dedicated DSP card or box, with conversion kept separate, but the amp sim idea is cool for jamming/recording in tight spaces.

bouldersound Tue, 04/18/2017 - 14:15

kmetal, post: 449609, member: 37533 wrote: Personally, I like the idea of a slave computer or a dedicated DSP card or box, with conversion kept separate, but the amp sim idea is cool for jamming/recording in tight spaces.

One of the reasons it's nice to have it in your interface is that you can use it (non-destructively) on live inputs. If the client wants reverb on his voice while tracking, no problem, and at very low latency.
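For a sense of scale on the latency point, the arithmetic is just buffer size over sample rate, once in and once out. A rough sketch (the buffer figures below are illustrative guesses, not UA or Antelope specs):

```python
def round_trip_ms(buffer_samples, sample_rate=48000):
    """Very rough round-trip monitoring latency: one buffer in, one out."""
    return 2 * buffer_samples / sample_rate * 1000.0

print(round_trip_ms(32))   # ~1.3 ms  - interface-DSP-style path, inaudible
print(round_trip_ms(512))  # ~21.3 ms - native DAW path at a mixing buffer
```

Real numbers also include converter and driver overhead, but the gap is the point: the interface's DSP path stays playable while the DAW session keeps its big, comfortable buffer.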

audiokid Tue, 04/18/2017 - 14:18

kmetal, post: 449609, member: 37533 wrote: I had emailed them months back about the Orion; they said the DSP is taken out of the signal path completely when bypassed.

right on

bouldersound, post: 449610, member: 38959 wrote: If the client wants reverb on his voice while tracking, no problem, and at very low latency.

That is a huge plus for me.

kmetal Wed, 04/19/2017 - 16:08

bouldersound, post: 449610, member: 38959 wrote: One of the reasons it's nice to have it in your interface is that you can use it (non-destructively) on live inputs. If the client wants reverb on his voice while tracking, no problem, and at very low latency.

I think it's relevant to have for last-minute overdubs, where the session buffer is set high.

I still would way rather just have it on a DSP card or DSP device, independent of the interface, even for last-minute fixes. I want to be able to give the artist a good reverb, one that can also be used in the session itself; it's more convenient and time-efficient. Then there's also avoiding the 'what happened to my voice?' moment when you take the (perhaps questionable) cue verb off and replace it with something else.

Not exactly sure how the Antelope system works, but Apogee and MOTU had the built-in DSP and then the DAW ran in zero-latency mode, so any plugins you may have had on the track are disabled, and it screws up the mix levels.
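The level problem is easy to picture: if a track leans on plugin makeup gain, the moment the DAW disables plugins for zero-latency mode, that track drops by the makeup amount relative to everything else. A toy example (the numbers are made up):

```python
# Two tracks; the vocal relies on +6 dB of plugin makeup gain.
tracks = {
    "vox": {"fader_db": 0.0, "plugin_makeup_db": 6.0},
    "gtr": {"fader_db": 0.0, "plugin_makeup_db": 0.0},
}

def monitor_level_db(track, plugins_active):
    """Track level as heard, with or without its plugin chain."""
    makeup = track["plugin_makeup_db"] if plugins_active else 0.0
    return track["fader_db"] + makeup

for name, track in tracks.items():
    print(name,
          "normal:", monitor_level_db(track, True), "dB |",
          "zero-latency mode:", monitor_level_db(track, False), "dB")
# The vox falls 6 dB against the gtr as soon as its plugins are bypassed.
```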

If you can/could open up Antelope's processors within the DAW, I'd be more of a fan of built-in DSP for interfaces.

kmetal Thu, 04/20/2017 - 14:15

bouldersound, post: 449653, member: 38959 wrote: I have some experience with the Apollo 16. The plugins are the same whether used for tracking or in the session. It's got enough processing power to run a ton of them without breaking a sweat.

I've spent a couple of hours with my cousin's original Apollo, when I installed it, so I'm familiar with them but not fluent.

Doesn't it require you to run your DAW in zero-latency mode and use their monitoring mixer to monitor the effects in the UA mixer window, and then you have a separate instance open in the DAW for mixing? So while it's an improvement to be able to use the same plugins and presets for mixing and monitoring, it's still a hindrance relative to a native system or a card/USB-type DSP processor. That is, if there's not an easier way to do it.

Basically, I guess I just don't like the idea of DSP unless it opens up and saves within the DAW session. Again, I'm not super experienced with the Apollo or Antelope stuff.

bouldersound Thu, 04/20/2017 - 15:33

You're going to be using low latency mode for tracking regardless. I'm pretty sure the Apollo can run its plugins on live inputs just fine in low latency mode, though probably a separate instance via the Console (UAD's input monitoring DSP). Who cares? Reverb in headphones is just a convenience, to replace the natural space you lose by covering your ears. Our brains are used to hearing reflections from the room we're in, so replacing it artificially can be beneficial. Giving a general impression of a produced sound is sometimes nice, but I don't feel the need to replicate a finished mix during tracking. Most of the time I don't use any effects on live inputs unless there seems to be a particular need.

kmetal Fri, 04/21/2017 - 12:35

bouldersound, post: 449665, member: 38959 wrote: You're going to be using low latency mode for tracking regardless. I'm pretty sure the Apollo can run its plugins on live inputs just fine in low latency mode, though probably a separate instance via the Console (UAD's input monitoring DSP). Who cares? Reverb in headphones is just a convenience, to replace the natural space you lose by covering your ears. Our brains are used to hearing reflections from the room we're in, so replacing it artificially can be beneficial. Giving a general impression of a produced sound is sometimes nice, but I don't feel the need to replicate a finished mix during tracking. Most of the time I don't use any effects on live inputs unless there seems to be a particular need.

What was fun about the old Mackie D8B setup down at one of the studios was its built-in DSP cards on the mixer. The mixes were done 24 I/O through the mixer to the DAW/hard disk recorder. The DAW/HDR was basically a digital tape machine.

Not only was it almost as fast as analog, the settings were recallable instantly, and the DSP had no audible latency or degradation. So what happens is, while you're tracking you're doing some work on the tracks, comfort verbs, HPFs, stuff like that, and you end up with a decent, intuitive rough mix at the end of the day, since you're knocking little things out along the way.

Having that first day's mix recallable mattered; a few times the rough mix was better than the final mix, and it was great to be able to revisit it with two clicks.

What your artist is hearing in the phones is what they're reacting to on a performance level, so it's important to have something inspiring, and it's helpful to be able to revisit it quickly for reference if effects and settings have evolved since tracking. I want to preserve as much of day 1 as possible, if for no other reason than just to hear it one day.

In the case of the D8B, it had its own CPU/PSU unit, so the settings were saved to that, not the DAW.

So when the board broke, there went the mixes along with it. I then spent months trying to replicate a full project's worth of mixes, which were just about done, in DP.

That's probably why I don't like effects that can't be brought up in the session itself. It's one thing to swap in a new PCIe card or TB device; it's another when your processing is tied to a proprietary format, or lives on a device that's serving another purpose too.

Not trying to be a jerk or anything, just blabbing on about this topic. I think system/gear integration is a challenging aspect of recording rigs, since we are very much in a modular, to-each-his-own era of rigs.