Monitoring with Magix Samplitude Pro X
In this video I show how I do my monitoring setups with RME TotalMix FX, and also using Samplitude alone for those who don't have a virtual mixer for their interface.
One of my YouTube viewers bought a PreSonus Q2626; it seems to not have a virtual mixer and is aimed to be driven by Studio One. I hope this will work with Samplitude as well.
Let me know what you think. How do you manage monitoring outputs, with or without multiple signals for headphones?
Thanks for Sharing K,
I had to deal with many situations too, from doing live sound with many monitor mixes on stage while also doing FOH, to recording in the studio. Usually, even when recording live, I try to give separate headphone mixes. Doing it with TotalMix FX isn't like having a mixer, but I manage to please the musicians.
I too mix as I go and build the mix while tracks are being recorded, but I sometimes wonder if I shouldn't leave everything raw (levels and panning only) and then mix once tracking is complete. If I'm ever allowed to have a band in the studio again, I might try this approach. ;)
In the studio I work in most, there's a mid-sized analog console as the front end of the recording system. The studio is a combination rehearsal, performance and recording space, so the way I have things set up is slightly unusual. I have four channels of aux sends feeding stage monitors, and one stereo aux send as the control room mix. (Most of the aux sends are post-fader, so I mostly keep the faders at 0.)
There's a rack of hardware compressors, reverbs etc., and a patchbay, that I use while tracking. I have the compressors patched into the record path of a number of channels. The reverbs can be heard in headphones, but they're not recorded. I do use the channel faders on the reverb returns because I want that reflected in all the aux sends.
The control room mix also feeds the headphone amp, which sends signal to panels around the recording room. When tracking, I hear what the musicians hear. The nice thing is that the control room mix for the headphones is not affected by soloing, so I can solo inputs in the control room while the musicians get uninterrupted audio. I do start mixing during tracking to some degree, but I'll accommodate the needs of the musician when required.
I have two stereo outputs from the interface to the mixer. One goes to a pair of channels so I can route it to any aux send, including the stage monitors, control room and headphones. I route the DAW mix to this output during tracking. The other stereo output from the interface goes to an aux input on the console, which avoids a lot of unnecessary circuitry like EQ, and that's the one I use during mixdown.
Good vid. ITB I've traditionally used hardware monitoring, and use the mix balance from the faders. In 21 years I've never had to set up multiple mixes via a DAW, which is kind of crazy to say. I've either tracked one person at a time, with them on speakers or headphones, or everyone was live in the room with no headphones.
At the studios I used either the Apogee monitoring app or the D8B console. I liked the console because it has built-in DSP and two separate headphone mixes via auxes. It was fast, and I could use "fader flip" to control the auxes with the faders. Best of all, I could change the control room mix without changing their aux mixes, by using a feature that "locked" their mixes. So I could solo stuff, EQ it, whatever, while they were tracking and they didn't hear it. Super useful for dialing in mixes while tracking, or zooming in on problems.
I really disliked the Apogee monitoring because it relied on hardware monitoring mode and the Apogee DSP. So if you thought you were done tracking and added plugins to tracks, and then the artist wanted to overdub, the switch back to hardware monitoring caused level and tone changes because the plugins were now bypassed. Plus, the DSP in the Apogee didn't have a plugin version, so it was just for monitoring. Which, to me, was wasted work: you dial in delay and reverb that the artist responds to, and then have to recreate it all again using native plugins.
These experiences led me to the conclusion that either a low-latency native system, or DSP that has matching DAW plugins, was really the best way to go (for me). Or mixing with a console, but that's not in the cards right now.
There's enough rework going on in mixing and editing; I really don't want to have to recreate the monitor mix too. I like the idea of working off what inspired you, and dialing in sounds as you go.
Since not all native plugins have a zero-latency mode, this "mix as you go" workflow does have some compromises. With either DSP or native, you're limited to a certain selection of plugins.