
Hey all,

I'm doing the planning for my new system.
I have Magix Samplitude Pro X, and I'm considering Pro Tools HD 12.

Magix would be the main capture/compose/edit system due to its high track count and clean coding.

PTHD would be primarily for mixing (mainly volumes and panning). Since it does 10 video tracks and 7.1 surround, it's the unfortunate (expensive) choice.

Basically I'd like to pipe the edited audio from Samplitude into PTHD via the digital outs: RME Babyface into Focusrite Scarlett 18i20.

I've been told in the past "once it's digital, it's digital," but after reading up I've seen there's room for encoding issues and error rates.

I'm just curious if this is a 'safe way' to move essentially finished tracks into the mix daw. PTHD does 64 audio tracks/10 video tracks at 192k. This is where I'll combine the audio and video.

I also will have Magix Movie Edit Pro Premium, which handles 4 camera angles.

So I'll be piping audio and video from the magix to PTHD.

Eventually I'll be able to afford Sequoia, which does many things, particularly on the broadcasting side, that I'd like. But I'm about 3 years away from that.

Basically, is there a better way to pipe audio over than re-recording via the digital outs? Is a simple drag and drop from my NAS drive better?
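
For what it's worth, here's the rough back-of-the-envelope I did on the raw data involved, assuming 24-bit files at the full 64-track/192k spec (the numbers are just illustrative):

    # Rough sustained data rate for a worst-case PTHD session
    # (assumption: 24-bit PCM, every track playing at once)
    tracks = 64
    sample_rate = 192_000      # samples per second, per track
    bytes_per_sample = 3       # 24-bit = 3 bytes

    bytes_per_sec = tracks * sample_rate * bytes_per_sample
    print(f"{bytes_per_sec / 1e6:.1f} MB/s sustained")        # ~36.9 MB/s
    print(f"{bytes_per_sec * 3600 / 1e9:.0f} GB per hour")    # ~133 GB/hour

Even that worst case is well under what a gigabit link or a decent drive can sustain, which is part of why I'm wondering whether a straight file copy beats re-recording through converters.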

Is there a better software combo? A different method to do what I'm describing? I'm open to any ideas.

If PTHD isn't needed I'll get the regular version to open my old projects. It's limited to 1 video track, however.

Comments

DonnyThompson Tue, 11/01/2016 - 06:46

Brother Junk, post: 442870, member: 49944 wrote: What does this term mean?

In studio lingo, the word "tracking" is synonymous with "recording". ;)

"Poor tracking" would include things like using substandard mics, or bad mic placement, or a bad sounding room, or clipping the inputs of your audio capture device, or not having sufficient gain available for lower output mics to perform at their optimum...
It would also include electronic noise, ground problems, RFI, bad room reflection, etc.

It's basically anything that will degrade the sonic quality of the tracking (recording) process.

Brother Junk Tue, 11/01/2016 - 08:25

DonnyThompson, post: 442872, member: 46114 wrote: In studio lingo, the word "tracking" is synonymous with "recording". ;)

"Poor tracking" would include things like using substandard mics, or bad mic placement, or a bad sounding room, or clipping the inputs of your audio capture device, or not having sufficient gain available for lower output mics to perform at their optimum...
It would also include electronic noise, ground problems, RFI, bad room reflection, etc.

It's basically anything that will degrade the sonic quality of the tracking (recording) process.

Thanks. That's what I thought but wanted to be certain.

Kurt Foster, post: 442873, member: 7836 wrote: just kill me.

I'm not sure if you're annoyed by the stupidity of my question (maybe it's unrelated), but if it is about my question, there's a reason for it. I've been doing a lot of back reading on the site and I'll see things like "poor tracking", "less than adequate tracking", "tracking problems", "tracking DAW", etc.

I just wanted to make sure that I was understanding it correctly. The statement was something like, "don't use a plug-in to mitigate poor tracking" (paraphrasing). I just wanted to make sure I understood the intention of that advice.

kmetal Tue, 11/01/2016 - 14:43

DonnyThompson, post: 442754, member: 46114 wrote: Again, I'm not being rhetorical... my question(s) really are sincere; I'd like to know what my peers think... ;)

I don't blame the engineer as much as the bands. Bands often just aren't as committed or tour-seasoned. Look at the SOS article. The engineer was cobbling songs together from the singer's iPhone voice memo recordings.

The singer lived in California; the bandmates and the studio were in England.

Now granted, it's Coldplay and they're talented, like 'em or not, but still.

I blame the bands for much of it. Laziness, selfishness, whatever.

All I know is I engineered dozens of bands in the same room with the same gear and house drums, similar setups. When the pros were playing I was a better engineer. The results were faster, easier, and better sounding. In short, the better names made me a rockstar engineer.

Literally the same drum kit. All the EQ and compression in the world couldn't do what a good drummer did to the kit.

Brother Junk, post: 442756, member: 49944 wrote: I wonder why some studios only use externals? Or maybe they are connecting it with SATA 3... I never examined them from the back.

If they're 'only' using externals, it's either a mistake on their part, they're using them for backup, or possibly for samples.

Internal drives are better for the purpose of audio. With SSDs now, SATA 3 is 'adequate' or 'slow' depending on the situation, both by the specs and in practice.

Brother Junk, post: 442756, member: 49944 wrote: To be completely honest, I still don't fully comprehend how to use compressors. I mean, I can make it work for me, but I've seen people who hear a track, take a second, and then set knee, ratio, threshold etc....just bam, bam, bam.

Compression is arguably the most difficult thing in audio engineering to understand. Took me over ten years to really know what I was doing.

Part of it is because compression in general isn't something that's obvious to hear. Most compression is done to be fairly transparent. EQ you hear; 3 dB of gain reduction you probably won't. In other words, more often than not, good compression technique is synonymous with subtlety.

I highly recommend you grab The Mixing Engineer's Handbook ASAP. It's an excellent, comprehensive, easy-to-follow book. It has step-by-step techniques for EQ and compression and tips for helping your ears hear these things better. Plus great tips for mixing common instruments, again with some quick steps.

It's a great read and I kept it next to the console for years when I started working at a pro studio.

https://www.amazon.com/dp/128542087X/?tag=r06fa-20

And The Recording Engineer's Handbook if you're tracking live instruments in any way.

https://www.amazon.com/dp/159863867X/?tag=r06fa-20

Brother Junk, post: 442756, member: 49944 wrote: Just out of curiosity, have any of you ever compared hw to the plug-in that imitates it? If so, what did you find?

I've used a hardware Urei 1176 and many software versions. They're not really close. They're similar in the sense that jeans are jeans and t-shirts are t-shirts, not sweatshirts.

But with compression in particular, there is an element of live interaction with the performance that the hardware has. Also, the way analog overdrives is much more pleasing/different than the plugin versions. They're similar in basic tendency, like say a 'punchy' compressor, but in tone and everything else they're not really that close.

I've also compared the API EQ with the Waves version, and again they're not really similar in sound. Digital artifacts aside, the plugin was far more exaggerated than the hardware, much more of an audible effect. I think a lot of emulations do this; they exaggerate the tendency of the piece they're emulating.

It's not fair to expect the same setting to sound the same when comparing hardware to software, but trying my best to subjectively match similar levels of boost and cut, both the plugin and the hardware sounded unique to themselves.

Boswell, post: 442759, member: 29034 wrote: Gigabit ethernet gets its name from the propagation rate of the measured unit. "Giga" = 10^9 and "bit" = bit, not byte. So the rate on the ethernet cable is 10^9 bits/sec or 1,000,000,000 b/s. This corresponds to 125,000,000 bytes/sec or 125MB/s. Note the capital B when referring to bytes and the lower case b when referring to bits.

This is the rate that the bits within a packet of information would travel. Given that there will be multiple layers of wrappers round each packet and also gaps between packets, the end-to-end data rate of the payload could well be less than half the maximum bit rate of the transmission medium.

One of the difficulties in using ethernet as a digital audio transmission medium in a multipoint network is that the underlying hardware offers no guarantee (a) of the end-to-end transmission time, (b) packets will arrive in the order in which they were sent, due to being routed on a per-packet basis, (c) a packet will arrive at all and (d) a packet will arrive uncorrupted. Because of issues (b) - (d), one of the higher protocol layers takes care that a long message can be assembled correctly from shorter packets, often involving re-transmission of lost or corrupted packets. All this bodes badly for real-time audio, but is fine for transmission of audio data files. These problems do not apply to point-to-point ethernet links where there is no other traffic.

Can I have an autographed copy of your book!? Excellent breakdown.
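
To put rough numbers on the payload point (purely illustrative; assuming 24-bit audio and the "less than half the line rate" derating you describe):

    # Illustrative: channel capacity of a conservatively derated gigabit link
    # (assumptions: 24-bit PCM, usable payload taken as ~50% of the raw line rate)
    line_rate_bps = 1_000_000_000          # 1 Gb/s, bits per second
    usable_payload_bps = line_rate_bps * 0.5

    bits_per_channel = 96_000 * 24         # one channel of 24-bit / 96 kHz audio
    channels = usable_payload_bps / bits_per_channel
    print(f"~{channels:.0f} channels of 24/96")   # roughly 217 channels

So raw bandwidth isn't the problem even with heavy derating; like you say, it's the lack of timing and delivery guarantees that makes real-time audio over a shared network tricky.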

dvdhawk, post: 442816, member: 36047 wrote: DonnyThompson

This could probably be a separate thread too. But to give my answer to your question, I think you and I have similar views on this.

If someone is getting good results, and having some success using a particular approach, I'm all for it; whatever works for you. The SOS guy probably acquired one widget at a time and applied them on top of what (one would hope) was a pretty high-quality recording to begin with, given the level of gear and expertise. Each new plug-in probably gives it something he finds 0.1% more pleasing to his ear. I would hope he doesn't need them for grand sweeping adjustments, or to compensate for poor tracking.

I try to use plug-ins very sparingly, but like a lot of you I usually have a pretty clear vision of where the mix is going to end up when I'm tracking - so I don't hesitate to print EQ, or even modest compression if I know that's going to stick. We all know that you can have your kick, snare, hi-hat, and bass guitar forming the absolute perfect pocket in the mix, but if you solo'ed any one of them they might (as Kurt Foster would say) 'sound like ass'. For me, it's always better and more efficient in the end, to spend an hour trying different mics and find the sweet spot to aim them, versus fighting the mix every hour after that. Most of the tracks, I might not need any EQ on them unless it's for a specific effect in a specific song. Better signal in -> better signal out. Garbage in -> plug-ins -> filtered garbage out. (no matter how many times the folks on the ISS filter the water…. they're still drinking urine).

That being said I do routinely use plug-ins as needed, primarily for EQ, compression, delay, and reverb. I'm always mindful that there's going to be a trade-off when algorithms are involved. Computational error, even if it's usually not noticeable, is sure to leave a cumulative pile of artifacts if you overdo it.

As far as the plug-ins themselves, I'm under no illusion that a $50 - $300 plug-in can perfectly emulate every nuance of a $30,000 piece of hardware, but that doesn't mean they're of no value. And as it's been said before, no two pieces of hardware are truly identical either. I've never had my hands on a Fairchild or Pultec, so how would I know? All I know for sure is that I like what a BF LA-2A plug-in sounds like and use it more than the stock compressor. I like the Pultec EQ plug-in that I have, and I use it in certain situations, but less often than the stock parametric in StudioOne.

I've personally been doing a version of the decoupled DAW thing for a long time when a project merits it. I have a buddy with some upscale hardware, and I do the editing / mixing ITB, and we pass that stereo mix in realtime through his rack hardware and record the resulting 2-track on a separate DAW at 44.1kHz. The capture DAW will usually have a limiter on the inputs, but basically we're setting levels as if we were going to DAT, or any other 2-track recorder. Ideally, we won't need to nudge any levels once it's been captured into the second DAW.

The core piece of hardware in that process is my buddy's Avalon 747. I haven't found anything yet that doesn't sound noticeably better just by virtue of passing through it, even before you engage any of its functionality. If it's from a cold start, you do have to let it warm up for 30 minutes or so, but then it's rock-steady after that. The tubes give the sound instant gravity, and the tube compression circuitry is great for what we do. It's not overly dense or dark, but I can see where some might not like it for classical music. Luckily, we're not recording the Frogtown Philharmonic. If you haven't used a 747 you might not believe the icing on the cake is the 6-band graphic EQ. The center frequencies, the Q, and the amount of cut/boost of each band have been carefully tailored individually (by someone with exquisite taste) so that each band is perfect and incredibly musical. You can sweeten the track, you can completely change the character of the track with radical settings, but you cannot ruin a track (even if you're trying to, for the sake of experiment) with the stupidest comb-filtery looking 3-up / 3-down EQ settings you can think of. The character will change, but the mix will not come undone on anything we've tried.

Well said. Judicious use of effects, analog or plugins, is so essential.

bouldersound, post: 442817, member: 38959 wrote: A decent compressor plugin is way better than a run-of-the-mill analog compressor. Really high-end hardware compressors do things that can be hard to emulate digitally. Actually, all compressors do things that are hard to emulate, but what normal compressors do isn't worth emulating.

Agree for mixing. For tracking I've found dbx in particular to be quite good on some things. Either in the instrument/amp chain, or mic signal chain.

The PreSonus Eureka channel strip's compressor is also very useful and very transparent.

KurtFoster Tue, 11/01/2016 - 15:00

kmetal, post: 442892, member: 37533 wrote: All I know is I engineered dozens of bands in the
same room w the same gear and house drums, similar setups. When the pros were playing I was a better engineer. The results were faster, easier, and better sounding. In short the better names made me a rockstar engineer.

+1(y)

bouldersound Sat, 11/05/2016 - 08:35

kmetal, post: 442892, member: 37533 wrote: Agree for mixing. For tracking I've found dbx in particular to be quite good on some things. Either in the instrument/amp chain, or mic signal chain.

I admit to having a few channels of mundane dbx compression on my front end. But they do get bypassed fairly often. There's also a Pro VLA and a Drawmer 1960 to track through.

kmetal Sat, 11/05/2016 - 19:38

dbx has a compressor with just one slider on it. No other controls. That thing has a ton of sack on bass guitar. Love that little thing.

The cool thing about dbx is it does have a sound, for better or worse. The 160 is moderately priced and you see them all the time in commercial studio racks. Some cheaper gear has a certain charm. Again, for some things it works, for others it doesn't.

I sold my 166XL recently in my gear purge. I gave the one with the one slider to the studio. I think I sold the 166 for as much as I paid, and the other one was given to me, so no loss.

JayTee4303 Mon, 12/26/2016 - 07:18

What a great thread! Every once in a while, other forums will touch on some of what's here, but the response is mostly "huh wut?", so there's not much back and forth on these topics. Four-room facility here, 11 or 12 PCs last time I counted; we use up to five in "DAW-farm" configurations. Usually not all at once, it just depends on what's going on. The control room hosts a pair of i7 4790s or 4970s; memory is the first thing to go. One video, one audio.

Audio utilizes a MOTU PCIe 424, with three 2408s and an HD-192. Not exactly Myteks, but the massive ADAT routing capability serves some very useful purposes, and we get 120 dB dynamic range, in and out, from 10 channels on the HD-192, plus AES which is near permanently populated with the Bricasti M7. Video has an M-Audio C600, just for monitoring, we pull a SPDIF stereo feed out to the Audio box and monitor from there. We will move into X.1 audio for video, if and when revenue supports it, but haven't looked into better audio so far. Instead audio arrives via file transfer, after mixdown and mastering.

On networking, in addition to QoS concerns with Dante and AVB, make sure you create a non-blocking backplane architecture with your routers and switches. 8 or 24 gigabit ports seems great, but if your 24-port gigabit switch or router's backplane can't handle 48 Gb/s of full-duplex datastreams at once, you might as well be running Fast Ethernet. Or worse... somewhere around 55-60% saturation, ACKs and resends will bury your network anyway. Ethernet, like H2O, is an incompressible medium!
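
If you want to sanity-check a switch spec against that, the arithmetic is simple (a rough sketch; the backplane figure is hypothetical and the 60% ceiling is just our rule of thumb):

    # Quick non-blocking check for a gigabit switch
    ports = 24
    full_duplex_load_gbps = ports * 1.0 * 2   # every port sending and receiving at once
    print(f"non-blocking backplane needed: {full_duplex_load_gbps:.0f} Gb/s")

    spec_backplane_gbps = 32.0                # hypothetical spec-sheet number
    if spec_backplane_gbps < full_duplex_load_gbps:
        print("blocking architecture -- ports will contend under load")

    # where ACKs and resends start to bury you (our ~60% rule of thumb)
    print(f"plan around ~{1.0 * 0.6 * 1000:.0f} Mb/s usable per gigabit link")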

We use a flexible networking schema to keep core computers unexposed to the net, while maintaining update capability when necessary, with internetworking always available. Basically, the frequently net-connected boxes (we call them DMZ machines; not strictly accurate, but illustrative) run DHCP from a router/server, with IP addresses limited to the x.y.z.3-100 range. Dot-1 is the gateway of last resort, obviously, with dot-2 and dot-3 reserved for wireless access points. Core machines, mostly isolated from the net, run hard-coded IPs on the exact same x.y.z schema, but limited to dot-101 through dot-255. Anytime a single Cat-5 cable connects the two networks, the IP schemas seamlessly merge, or, well... they COULD if we put in the necessary hours on the router table configs; hey, it's on the list, we'll get to it! In practice we usually have to do some pinging from a shell window to get everybody on the same page after merging the Core and DMZ networks.
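
In code terms the split looks something like this (a sketch; the 192.168.1.x prefix is just a stand-in for our actual x.y.z subnet):

    # Sketch of the DMZ / Core address split described above
    # (assumption: example prefix 192.168.1.x -- substitute your own x.y.z)
    def classify(last_octet: int) -> str:
        if last_octet == 1:
            return "gateway of last resort"
        if last_octet in (2, 3):              # dot-2 and dot-3 reserved for APs
            return "wireless access point"
        if 3 <= last_octet <= 100:            # DHCP pool per the post; .3 already caught above
            return "DMZ machine (DHCP pool)"
        if 101 <= last_octet <= 255:
            return "core machine (hard-coded IP)"
        return "unassigned"

    for octet in (1, 2, 50, 150):
        print(f"192.168.1.{octet}: {classify(octet)}")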

For digital routing, we use a Z-Systems Optipatch Plus. 30 Toslink ports fully leverage the 2408s' and 424's significant ADAT I/O. One sample of throughput latency, and it reclocks the signal, extending Toslink's inherent 6-meter limitation with ADAT to 12 meters over plastic fiber. We are beginning to find the limits of the device as all 30 ports are now populated, but each 2408's triple ADAT ports offer "sidechain" ADAT routings for dedicated processing pipes where we don't need the Optipatch's routing flexibility.

The Tracking Room DAW, originally the sole computer, is an older i7 2600, with a Profire 2626 as the gateway to 8 analog and dual ADAT ports. The whole room is set up to be operated by an engineer/artist standing up, wearing a guitar or bass. Good for when a couple of other players are over and there's no designated engineer. For the most part these days, it handles VSTi hosting for guitar or keys, or both, once we get options pared down to choices.

The Live Room DAW is an older Core Duo under very light duty, simply passing audio through its pair of 2408s. There's a subsystem for each major instrument, which we streamlined for "right now" composing work, with patchbay options to simply swap out our gear for client gear while keeping the facility routing setups intact. Our V-Drum kit comes in via a "sidechain" (not through the Optipatch) ADAT pipe from an OctoPre Dynamic. Acoustic drums use the patchbay to access the OctoPre. The Core Duo is also where we capture MIDI from the V-Drums or triggers, one of the usual times we'll sync up multiple DAWs. We've long seen the utility in being able to do drum replacement right here, or even drum VSTi hosting, and since the Core Duo doesn't begin to cut it for either app, it is next on the block for replacement. Maybe a mid-grade i7, direct swap, but we are beginning to need more expansion slots on the video box in the CR, so we might go dual Xeon/Titan there and move the current vid box to the LR; we'll see.

There's a Core Duo in the vox booth with ADAT connectivity, but we house our better pres in there too, so we backed that up with an XLR snake into the CR for operations with zero fan noise. The idea is that songwriters can write in iso there, then pipe offerings out system-wide, but it hasn't found much use to date. The rest of the PCs are single cores or i5 laptops; the singles are support (internet, playback, realtime outboard control via MIDI, and MIDI synth programming), and the i5 laptops are used with the live rig.

Long enough for an initial "hey!" in this great thread; you can check out pix and more info on our webpage: www.e4mm.com.

OK, one more thing, a question, sort of... The Bricasti lives on AES in the CR. We are firm believers in the un-synced dual DAW approach, stemming back to the legendary SOS article "Does Your System Need A Master Clock?" Master clock is Master Clock, while Chase Sync is always chasing. Reaching into a gray area here, bear with me. Using the M7 as a hardware insert, it loses a LOT of its brilliance, brilliance heard while monitoring. We don't get that brilliance back unless we tap the M7's AES outs as mono inputs, re-recorded live into Sonar. One more time, because I know I'm not explaining well.

Vox, in the CR, cans on: I monitor the mic > pre > HD-192 > M7 via DSP. My cans hear the AES input from the M7, and whatever I'm playing back from Sonar. Sounds Bricasti brilliant.

Recorded vox, run thru AES to the Bri and back into Sonar as a hardware insert, NOT rendered, sounds dull and lifeless.

Recorded vox, out thru AES into the Bri, re-recorded onto two mono tracks, hardpanned L and R, the brilliance is back.

What gives? Is this a latency comp issue? Or is it Chase Sync chasing? The Bri processes at 96, according to Casey, and being a video house we run 24/48.

Or is it a combination of these, or something else entirely?

Hard facts, AND wild speculation appreciated.

:-)
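
P.S. To put some rough numbers on the latency-comp theory (a sketch; the millisecond figures are made up for illustration, not measured off the M7's AES round trip):

    # How much un-reported delay it takes to smear a 24/48 session
    session_rate = 48_000   # Hz

    for round_trip_ms in (0.5, 1.5, 3.0):
        samples = round_trip_ms / 1000 * session_rate
        print(f"{round_trip_ms} ms of unreported delay = {samples:.0f} samples at 48 kHz")

If Sonar's hardware-insert compensation misses even a fraction of a millisecond of the AES round trip, plus whatever conversion the M7 does internally at 96, and the wet return ends up summed against the dry vocal anywhere downstream, you get comb filtering, which tends to read as "dull and lifeless" rather than as an obvious echo.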