
Most stereo synths and samplers, in both keyboard and rack-module versions, seem to have an inherent problem.

They have left and right outputs, and usually the left functions fine as a mono output if the right socket is left empty. However, if you take a stereo feed to your recording or PA system and for whatever reason sum the two channels together, either by routing or simply by centring both pan controls, odd things happen. I've had this happen on a Roland, two Korgs, and now a Kurzweil. Pianos suddenly acquire a very distinctive slow tremolo, and other sounds get a slow chorus. It's not really phasing, but a slow pulsing.

Last night I noticed it with an expensive stage keyboard when working through a P16 personal mixer. As the keyboard went in via BSS DIs, which have a polarity reversal switch, I assumed this would cure it, but it made no difference whatsoever. The internal 'monoing' worked fine, and the solution was simply to use that and scrap the stereo width totally.

I've never seen this mentioned on forums, but having now experienced it quite a few times, I'm surprised. Producing that horrible sound must involve an element of pitch shifting, which then causes the weird effect when the channels are summed - but surely a piano shouldn't have any chorus-style pitch shifting in the first place?

What is going on? How does the internal summing NOT do it?

Comments

audiokid Thu, 08/31/2017 - 09:10

Hey Paul, spot on. I too have noticed this, so I avoid stereo patches that don't sum well. Unfortunately, most stereo patches sound great on their own but never mix right, never translate.

I find two areas interesting here:

  1. stereo effects are a lot of what makes a patch impressive but rarely translate into a mix with the same impact.

  2. Most patches sound surprisingly the same without stereo reverb/effects.
    The most valuable lessons I've learned from synths and effects are how important mono is, how misleading effects are with the new and improved keyboards/samples, and how powerful good reverb is when applied methodically on a two-bus.

Boswell Thu, 08/31/2017 - 09:27

This is fairly common with keyboards, and I've usually put it down to intentional amplitude and phase modulation between the L and R outputs to create a swirling effect. There is a class of keyboard that detects when no plug is inserted in the R output, and these not only do a reasonable job of making a usable mono signal on the L side, but also inhibit the phase modulation, leaving only the amplitude modulation (if any).

Having both L and R jack plugs inserted but using only the R output as mono sometimes works OK, and has more of the swirly character of the stereo out, but has the limitation that some of the sounds may not be complete, as you are hearing only one half of the spatial field.

BTW, I really do mean continuous phase modulation and not polarity change.
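
This is easy to demonstrate numerically. Below is a minimal Python/NumPy sketch (my own illustration, not taken from any particular keyboard) that puts the same test tone on both outputs with a slow phase offset applied to the right channel only. Each channel on its own holds a steady level, but the mono sum pulses at the modulation rate - exactly the slow tremolo described above. The tone frequency, modulation depth and rate are arbitrary assumptions.

    import numpy as np

    sr = 48000                                   # sample rate
    t = np.arange(sr * 4) / sr                   # 4 seconds
    f = 440.0                                    # test tone

    # Slow phase offset applied to the right channel only (chorus-style spread)
    phase_offset = 0.9 * np.pi * np.sin(2 * np.pi * 0.5 * t)

    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * t + phase_offset)
    mono = 0.5 * (left + right)                  # what happens when both pans are centred

    def block_rms_db(x, n=4800):                 # RMS level in 100 ms blocks
        blocks = x[:len(x) // n * n].reshape(-1, n)
        return 20 * np.log10(np.sqrt(np.mean(blocks ** 2, axis=1)))

    print("left (dB):", np.round(block_rms_db(left), 1)[:8])   # steady
    print("mono (dB):", np.round(block_rms_db(mono), 1)[:8])   # pulses at the modulation rate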

Tony Carpenter Thu, 08/31/2017 - 09:28

I'm definitely into turning effects off on my keys now. Chris pointed this out while I was working on the minstrel and Marie. The great thing is that with my Motu, if I really need a reverb for the vibe, I can add it via the control panel while recording, without it being printed to the track.

Of course you can always run effects in software monitoring as well. Ultimately the choice is a bit like my guitar sound with effects of the sort I want... is that really what I want?

Sometimes the next step is to use a stereo image plugin and narrow things down - that is, if you really want those effects on the keys. But yep, it's amazing how out-of-phase or non-mono-friendly certain patches are.

paulears Thu, 08/31/2017 - 10:55

Yep - that's the problem. With the left and right speakers maybe 20 m apart, stereo that wobbles from speaker to speaker is horrible. The strange thing is that it's only now I'm using personal on-stage mixers that people are complaining, and telling the MD that his expensive synth is the culprit and not the PA doesn't go down very well. It does seem strange that the need for mono, or limited-width stereo, has been forgotten. Glad it's not just me imagining it!

audiokid Thu, 08/31/2017 - 11:43

Kurt Foster, post: 452397, member: 7836 wrote: how important is mono compatibility? on how many places/ devices would someone listen in mono? i can't think of any.

I personally avoid unnatural stereo keyboard effects because more often than not they distract from the main focus of a mix. After years of mixing my stuff and other people's stuff, there is nothing more frustrating than trying to mix unreal and real-world sources in the same mix. This is also the reason I will degrade awesome samples being used with lo-fi home-recorded tracks. The idea, to me, is to find the weakest link in a mix and degrade everything to that level, rather than trying to mix weak tracks up to well-produced samples - what a big time-waster that is, and it never ends up sounding glued. The fastest way is to get all the tracks somewhat equal and then start mixing.

My general rule for people asking me this very question is this:
Just because it sounds cool doesn't mean your audience will hear it the same way, including... will it serve well with other tracks of lesser quality or punch, per se? IMHO, mono tracks summed well into a common stereo field sound louder, fatter and more glued through more sound systems.

That being said, if you have great tracking gear, big fat synth sounds blend in better so you have more options as well.
imho

Davedog Thu, 08/31/2017 - 15:17

My regular keyboardist uses a Korg Kronos as his main board for most things, and we noticed this after some tracks were put down and I was going into the mix. It wasn't with all settings, only particular ones. We tracked two mono tracks from his L/R and this solved some of the problems. Mind you, the Kronos has fewer problems with this than most. At mix it was completely solved by flipping the polarity of the R-side track before the two mono tracks were combined into a stereo track. It was very noticeable on some of the grand piano settings.
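
Just to illustrate what that polarity flip does (a sketch of my own, not the actual Kronos session - the signal content here is invented): if a patch leaves the right channel largely out of polarity with the left, a straight mono sum collapses, while inverting R before summing restores the level. A correlation meter hovering towards -1 on a stereo patch is the quickest warning that this is about to happen.

    import numpy as np

    sr = 48000
    t = np.arange(sr) / sr

    # Invented patch: a shared tone in both channels, with R inverted and
    # carrying a little independent "width" content.
    tone = np.sin(2 * np.pi * 220 * t)
    width = 0.2 * np.sin(2 * np.pi * 661 * t)
    left = tone + width
    right = -(tone - width)                       # out of polarity with the left

    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

    print("L/R correlation:         ", round(float(np.corrcoef(left, right)[0, 1]), 2))  # near -1
    print("mono sum as tracked (dB):", round(rms_db(0.5 * (left + right)), 1))           # collapses
    print("mono sum, R flipped (dB):", round(rms_db(0.5 * (left - right)), 1))           # healthy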

audiokid Thu, 08/31/2017 - 15:44

Davedog, post: 452403, member: 4495 wrote: My regular keyboardist uses a Korg Kronos as his main board for most things, and we noticed this after some tracks were put down and I was going into the mix. It wasn't with all settings, only particular ones. We tracked two mono tracks from his L/R and this solved some of the problems. Mind you, the Kronos has fewer problems with this than most. At mix it was completely solved by flipping the polarity of the R-side track before the two mono tracks were combined into a stereo track. It was very noticeable on some of the grand piano settings.

I can relate, Dave. I have a Kronos X 88. They are stellar keyboards, one of the best I've used, but they too have this issue. The issue, however, isn't a problem with the keyboard itself; it's simply what happens from effect processing - what Bos mentions about wave manipulation gone wild.

Something I periodically talk about relates to this as well. I chuckle when I read about the latest and greatest violin, orchestra and similar patches. To me, since the day we could sample a violin and lay it across a keyboard, the raw sample has always sounded like a cold and boring violin. What's improved (aside from sample rate) is the duration of sample we can use, the processing functions, and the reverberation processing we can add to give it the wow factor. Get a Bricasti and those old 12-bit samples sound pretty excellent.
This is another reason why a Bricasti is so amazing: it's actually a stereo processor that sums well to mono.

I suspect one day we will have a Bricasti reverb processor equivalent in everything.

bouldersound Thu, 08/31/2017 - 16:07

Kurt Foster, post: 452397, member: 7836 wrote: how important is mono compatibility? on how many places/ devices would someone listen in mono? i can't think of any.

I find that mixes that are mono compatible just sound better.

My smartphone speaker is mono, so if I want to share something with someone (usually a video), I'll play it with the phone's internal speaker. The local NPR station on FM is mono. It's mostly talk but sometimes they review music and play samples. In fringe areas the receiver may switch to mono or low pass the difference channel. PA systems are often mono (though I ran mine stereo), and don't expect to get stereo monitors unless it's a big production. IEM systems are often mono. Distributed audio systems may be mono.

kmetal Thu, 08/31/2017 - 17:29

Does this mono/stereo artifact occur within VSTis as well, in general? If so, is it the same situation, with the processing causing the artifact? Could this come into play both with sampled instruments (artifacts from the hardware making their way into the sample) and with VSTis that model the hardware to create the sound...?

As far as hardware goes, this convo is making a good case for recording the MIDI data along with the audio output, which I usually did in case I wanted to change the sounds later - but it never occurred to me to think about the processing the keyboardist might have on during tracking.

audiokid Thu, 08/31/2017 - 18:53

kmetal, post: 452406, member: 37533 wrote: Does this mono/stereo artifact occur within VSTis as well, in general? If so, is it the same situation, with the processing causing the artifact? Could this come into play both with sampled instruments (artifacts from the hardware making their way into the sample) and with VSTis that model the hardware to create the sound...?

As far as hardware goes, this convo is making a good case for recording the MIDI data along with the audio output, which I usually did in case I wanted to change the sounds later - but it never occurred to me to think about the processing the keyboardist might have on during tracking.

imho, this issue occurs with anything artificially processed. You have to be smart about the effects used. VSTis are no different from hardware keyboards. The key, to me, is to be aware of what the processors do and then think about how it all glues with its counterparts.
My rule of thumb: if it sounds better than the norm, there is a reason for that, and it comes at a price.

Over the years of being in this game, mono tracks mixed well, with one common process on the 2-bus, have sounded better more often than individual tracks with layers and stacked effects. You can't go wrong if you think like a live musician - meaning, how many rooms is the audience really sitting in when they listen to a performance? To me it's always just one.

JayTerrance Sun, 09/03/2017 - 10:59

I usually don't care whether the keyboard sample sums well to mono, as long as it sounds good in stereo. The reason is that I mostly re-amp/re-mic keyboards and synths in stereo and then depend on my mic configuration for whatever width (narrow to wide) I'm looking for that fits the mix properly. Of course this is the necessary method for keyboard/synth recording, but for live PA sound I can see where your problem of mono compatibility comes into play, as each individual channel (left/right) is quite boring compared to the composite stereo sound.

JayTerrance Sun, 09/03/2017 - 21:33

I send the L/R outputs of my keyboard(s) to two Radial (and other brands) DIs. Those go into the mic inputs on my preamp, then into my interface, and then out to two active monitors in a live room. Then I use a mic configuration to capture the desired width, depth and pan.

The main reason I do this is that capturing just the left (or just the right) mono signal out of a keyboard or synth typically leaves you with a bland representation of the natural stereo keyboard sound. Some of those keyboard/synth stereo samples are near impossible to recreate in your mix by capturing just a mono signal (whether L or R). Plus, by re-micing (re-amping might not be the best term, because I don't usually have an amp involved for keys/synths... although you could), I can really hone in on capturing the width/depth/pan that best fits each particular mix - and also add a hint of natural air to the capture.

kmetal Sun, 09/03/2017 - 22:50

JayTerrance, post: 452482, member: 49019 wrote: I send the L/R outputs of my keyboard(s) to two Radial (and other brands) DIs. Those go into the mic inputs on my preamp, then into my interface, and then out to two active monitors in a live room. Then I use a mic configuration to capture the desired width, depth and pan.

The main reason I do this is that capturing just the left (or just the right) mono signal out of a keyboard or synth typically leaves you with a bland representation of the natural stereo keyboard sound. Some of those keyboard/synth stereo samples are near impossible to recreate in your mix by capturing just a mono signal (whether L or R). Plus, by re-micing (re-amping might not be the best term, because I don't usually have an amp involved for keys/synths... although you could), I can really hone in on capturing the width/depth/pan that best fits each particular mix - and also add a hint of natural air to the capture.

That's pretty cool man, gonna try that someday. Especially since my stuff is all vsti, it's nice to move some air around. Good stuff.

paulears Mon, 09/04/2017 - 01:18

There are so many ways of doing live sound now. Most don't cause the player any grief, because frankly what the FOH engineer does with it is never heard by the player - but players need certain things in order to play properly, and the same issues apply in the studio.

Many players will use keyboard splits to have two or maybe three different sounds spread out, and while sometimes it's a stereo mix, as in each one spread to various points left to right, other times it could be one sound left, one right and then maybe a pad sound spread - whatever the musician needs. If the player is splitting lots of left and right differences, then they need a stereo monitor system - which would often be two identical combo amps, or two amps/two cabs and now maybe two active wedges. DI feeds to the FOH or recording people. Where players have multiple keyboards it's common live to use a small mixer to blend them all together into a left and right - and then this goes to FOH/Recording, rather than individual feeds - although some use the direct outs to let somebody else do the blending and eq.

My band is a tribute band and we play mainly theatres and festival-type shows; the theatre shows are done with our own PA, from which we record live material. Our problems (and solutions) came when we added personal monitoring for the theatre shows, yet had to use the supplied PA at festivals, where despite the huge stage you get very little time to sound check. Often the keys player would provide L and R to the PA company's DIs, and then the only monitoring would be what came back through the wedges. This is where the mono compatibility rears its head: some patches would seem very quiet in the monitors, others much louder. Our keys player uses a volume pedal before the DIs, and has added a feed from the left channel to a stand-mounted hotspot monitor, so if the FOH people ignore his request to send only the left channel to the wedge and NOT both, he still has all the sounds.

In the studio you can usually record with stereo monitoring, so you don't notice the mono incompatibility, and I suppose that as everyone listens in stereo nowadays, this defect is very rarely noticed.

As we do a lot of live sound for other people, I stopped doing hard left and right panning on stereo sources, because the trend seems to be some very wide panning on the keys when they do splits - and clearly a sax sound from only the left and a synth from only the right, in a room that is very wide, sounds very odd, as the people on the wrong side lose some of the sounds. I probably noticed it most with a band who did 'Final Countdown': the starting bass synth sound came only from the left - which isn't that obvious with bass - but when that brassy sound appeared it was only on the other side, which was weird. Since then, I start panning keys at about 10 o'clock left and 2 o'clock right. This occasionally produced what I thought was just a poor balance, but shunting both pans to 12 o'clock sometimes made a sound totally vanish - or just leave a swishy, reverby sound.
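
To put rough numbers on that 10 o'clock / 2 o'clock habit, here is a small sketch using a standard constant-power (sin/cos) pan law; the mapping from clock positions to pan values is my own rough assumption. Hard panning puts essentially nothing in the opposite speaker, while the narrower positions keep the "wrong side" only about 6 dB down, so nobody in the room loses a sound entirely.

    import math

    def to_db(g):
        return 20 * math.log10(max(g, 1e-6))

    def pan_gains(pan):
        # pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right -> (L, R) gains
        angle = (pan + 1.0) * math.pi / 4.0       # 0 .. pi/2
        return math.cos(angle), math.sin(angle)

    positions = {
        "hard left":          -1.0,
        "hard right":          1.0,
        "10 o'clock (left)":  -0.33,
        "2 o'clock (right)":   0.33,
    }

    for name, pan in positions.items():
        l, r = pan_gains(pan)
        print(f"{name:18s}  L {to_db(l):7.1f} dB   R {to_db(r):7.1f} dB")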

As few people ever listen in mono, maybe the manufacturers just gave up on mono compatibility because stereo could sound so good?

DonnyThompson Tue, 09/05/2017 - 13:50

audiokid, post: 452402, member: 1 wrote: This is also the reason I will degrade awesome samples being used with lo-fi home-recorded tracks. The idea, to me, is to find the weakest link in a mix and degrade everything to that level, rather than trying to mix weak tracks up to well-produced samples - what a big time-waster that is, and it never ends up sounding glued. The fastest way is to get all the tracks somewhat equal and then start mixing.

I recently worked with a sampled piano that was played against live tracks - drums, bass, guitars, etc.
I found that narrowing the sampled piano's width, and then running it through a saturator plug with just a hint of wet in the mix ratio, resulted in it sounding more cohesive and "natural" with the other tracks. I think it's also important to choose the right piano sample. For rock/blues, as opposed to using a 14" Bosendorfer or Steinway sample, you might want to consider a smaller piano, perhaps even an upright. While the "big" piano samples sound good on their own, and definitely have a certain wow factor, they're not necessarily the right choice for a rock track, as the wow factor can work against you and distract the listener from the whole mix.
IMHO of course.
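
For what it's worth, here is a rough sketch of that kind of width narrowing done as simple mid/side scaling - an assumption about the technique, since the actual plugin used may work differently. Decode L/R to mid and side, scale the side down, and re-encode; width=1.0 leaves the track alone, width=0.0 is full mono.

    import numpy as np

    def narrow_width(left, right, width=0.5):
        # width in [0, 1]: 1.0 = unchanged, 0.0 = full mono
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right) * width
        return mid + side, mid - side

    # Arbitrary wide test signal: shared 220 Hz content plus out-of-polarity 555 Hz content
    sr = 48000
    t = np.arange(sr) / sr
    left = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 555 * t)
    right = np.sin(2 * np.pi * 220 * t) - 0.5 * np.sin(2 * np.pi * 555 * t)

    narrow_l, narrow_r = narrow_width(left, right, width=0.4)
    print("original L/R correlation:", round(float(np.corrcoef(left, right)[0, 1]), 2))        # wider
    print("narrowed L/R correlation:", round(float(np.corrcoef(narrow_l, narrow_r)[0, 1]), 2)) # closer to 1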

kmetal Tue, 09/05/2017 - 17:30

For organ chord stabs in a Sublime/Jack Johnson-style tune, we played the keyboard through a Fender amp and mic'd it close and ambient. It gave a nice edge to the sound. I can't say it sounded 'more real', but it did sound more like a performance in the room than just a DI. Goodness knows what sort of cancellations occurred, but I'm much more aware of that stuff now than I was 5 years ago.

We got the sounds while the song was playing, and I think that was a big help in the mix, since we had a decent rough in.
