
I have read enough about oversampling to know that it is a process for improving audio quality whereby converters sample at a higher rate than the base frequency, which facilitates the use of filters with a gentler slope (courtesy of the recording.org audio terms glossary). Given the vast amount of practical things one needs to learn about recording, that is enough of an understanding of oversampling for me, for now.

What I am still vague on is how oversampling is best used in my existing setup. I am recording at 24 bit / 44.1 kHz. I have demo plugins (on my short list of acquisitions) that can oversample at 1, 2, 4, and 8x. I trust that excessive oversampling is just a waste over and above my current settings, but what rate is optimal for my settings? I would surmise it's 2x, but please don't ask about the convoluted logic I used to derive this value. :tongue:

Comments

TheJackAttack Wed, 01/12/2011 - 22:30

Plugin oversampling is different from recording at higher sample rates. Plugins often oversample to process the effect at higher resolution and to push digital errors above the audible range. Those of us who record at 88.2k/96k do so for better resolution during the mixing stage and also for the plugins. Obviously, the higher the sample rate, the more resources a computer needs, especially as the tracks pile up. Plenty of great recordings have been made at 16 bit/44.1k, so obviously it can be done. The important thing is to use multiples of your final format. If you are going to have multiple destination formats, it is often better to redigitize an analog signal: for example, playing a finalized stereo mix (88.2k) at 44.1k out of a Fireface 800 (DAC) and redigitizing it with an HD24XR (ADC) at 48k for DVD use.

IIRs Fri, 01/14/2011 - 02:55

You need to get your head around the concept of aliasing.

Nyquist theory states that the highest frequency a digital sampling system can represent is half the sample rate, so this upper limit is known as the Nyquist frequency. The problem is that if you try to represent signals higher than Nyquist they don't just disappear: they end up reflected back down below Nyquist, where they become a particular type of distortion known as aliasing.
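As a quick sanity check of the reflection idea in Python (a minimal sketch of the maths only): at a 40 kHz sample rate, a 24 kHz tone produces exactly the same sample values as a 16 kHz tone, so once sampled the two cannot be told apart.

```python
# Minimal sketch: at fs = 40 kHz a 24 kHz tone yields exactly the same
# samples as its 16 kHz reflection, so they are indistinguishable.
import numpy as np

fs = 40_000                                    # sample rate; Nyquist = 20 kHz
n = np.arange(64)                              # sample indices
above = np.cos(2 * np.pi * 24_000 * n / fs)    # 4 kHz above Nyquist
mirror = np.cos(2 * np.pi * 16_000 * n / fs)   # its alias, 4 kHz below

print(np.allclose(above, mirror))              # True
```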

Aliasing is the reason your AD converters need filters: everything above Nyquist must be removed before conversion, as there is no way to separate aliasing from the wanted signal after it has occurred.

Aliasing can also occur when processing digital signals, however: any process that adds extra harmonics risks inadvertently adding components above Nyquist.

As an example, let's take a sine wave at 6 kHz, and let's simplify the maths by setting a sample rate of exactly 40 kHz, so Nyquist is exactly 20 kHz. I'm going to saturate the sine wave with a distortion effect to generate extra harmonics, which should appear at the following frequencies:

Harmonic:   2nd   3rd   4th   5th   6th
Frequency:  12K   18K   24K   30K   36K

However, components above Nyquist are reflected back down as aliasing, so we would actually get the following (aliased components marked with *):

Harmonic:   2nd   3rd   4th    5th    6th
Frequency:  12K   18K   16K*   10K*   4K*

If we oversample by 2x, Nyquist is now 40 kHz, so we can go much further:

Harmonic:   2nd   3rd   4th   5th   6th   7th    8th    9th    10th
Frequency:  12K   18K   24K   30K   36K   38K*   32K*   26K*   20K*

Notice that not only do we get another octave of headroom before aliasing occurs, we also get another octave above that where the resulting aliasing components are higher than the target Nyquist, and so will be filtered out when downsampling back to 40 kHz. That is, the aliased components at 38K, 32K, and 26K will not be audible, so unlike the 1x version, which aliases audibly at the 4th harmonic, the 2x version can go up to the 10th harmonic before aliasing becomes a problem.
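If you want to reproduce the tables above yourself, the folding rule takes a few lines of Python (a sketch of the arithmetic only, assuming an ideal system):

```python
# Sketch: fold any harmonic frequency back into the 0..Nyquist range.
def alias(f, fs):
    """Fold frequency f (Hz) into [0, fs/2] by mirroring around Nyquist."""
    f = f % fs                     # aliasing is periodic in the sample rate
    return fs - f if f > fs / 2 else f

fundamental = 6_000
for fs in (40_000, 80_000):        # 1x, then 2x oversampled
    print(fs, [alias(k * fundamental, fs) for k in range(2, 11)])

# 40000 [12000, 18000, 16000, 10000, 4000, 2000, 8000, 14000, 20000]
# 80000 [12000, 18000, 24000, 30000, 36000, 38000, 32000, 26000, 20000]
```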

The amount of oversampling you need therefore depends on two things: how much harmonic distortion is generated by the process, and the highest frequencies present in the signal you are processing.

Compressors will add subtle harmonics, especially with fast attack or release times, but if you are processing a bass guitar (for example) the harmonics generated will probably not go high enough to cause aliasing. But if you are compressing a drum overhead with lots of HF cymbal crashes, you may find you need 2x or 4x oversampling to avoid a brittle, harsh quality creeping in from the aliasing distortion.

If you are distorting the drum overheads however, perhaps with a saturation effect, or with a compressor or EQ that models analog non-linearities, you may well need 8x oversampling or higher. (The latest version of The Glue (https://cytomic.com/products) provides an offline rendering mode with 256x oversampling! There is no way the compression would ever require this much oversampling, but the "Peak Clip" option uses clipping rather than limiting, i.e. distortion.)

Obviously, the downside to oversampling is extra CPU load and extra latency: not only do you have to process twice as many samples per second (or 4 times, or 8 times, or whatever), you also need up- and down-sampling filters. If these filters are simplified too far they can significantly colour the sound, or smear the phase at the high end, but 'perfect' linear phase filtering is costly in terms of CPU, and will add a significant amount of latency. Some plugs therefore allow you to specify that they run at the project sample rate during realtime playback, but oversample when rendering (Auto mode in the Voxengo plugs). The Glue even allows you to specify different oversampling amounts for playback and rendering, e.g. 2x for playback but 8x (or 256x!) for rendering.
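For what it's worth, the internal recipe is roughly "upsample, process, downsample". Here's a minimal Python sketch of that wrapper using a tanh saturator as the non-linearity (my illustration only, not how The Glue or the Voxengo plugs are actually implemented):

```python
# Sketch of an oversampling wrapper: upsample, run the non-linear
# process at the higher rate, then band-limit and downsample again.
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, factor=4):
    """Apply tanh saturation at `factor` times the original sample rate."""
    up = resample_poly(x, factor, 1)       # upsample (includes the filter)
    up = np.tanh(4.0 * up)                 # non-linearity adds harmonics
    return resample_poly(up, 1, factor)    # filter + downsample back
```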

I'm a little wary of that, to be honest: when I render I like to get exactly what I was hearing when I mixed, and I don't like to assume that the host will always realise that the latency has changed. (Reaper seems to cope OK, but why introduce an extra variable?)

IIRs Fri, 01/14/2011 - 08:12

I seem to be in heavy work-avoidance mode today, so I've made some examples (I've been meaning to test the new extreme oversampling options in The Glue anyway).

aliasing examples: http://soundcloud.c…

Warning: turn down your monitors! The files peak at about -20dBFS, but this will still be painful at high volume!

I started by generating a 5 second sine wave sweep from 1 kHz to 10 kHz, at 44.1 kHz. I then ran this through The Glue, with the Peak Clip option turned on and the makeup gain maxed at +40 dB; in other words, distorting the sine wave almost into a square wave. Then I dropped the channel fader to -20 dB to leave some headroom in the renders.
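If anyone wants to recreate something similar, here's a rough Python approximation of that test signal (gain staging copied from the description above; it's not my actual session):

```python
# Rough approximation of the test signal: 5 s sweep, 1 kHz -> 10 kHz,
# driven +40 dB into a hard clipper, then pulled down 20 dB.
import numpy as np
from scipy.signal import chirp

fs = 44_100
t = np.arange(int(5 * fs)) / fs
sweep = chirp(t, f0=1_000, t1=5.0, f1=10_000)  # linear sine sweep
driven = sweep * 10 ** (40 / 20)               # +40 dB makeup gain
clipped = np.clip(driven, -1.0, 1.0)           # hard clip: near square wave
clipped *= 10 ** (-20 / 20)                    # -20 dB fader for headroom
```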

I rendered 4 files:

1: Bypass. The control, with The Glue bypassed. This is just a pure sine sweep as expected, and a wavelet transform looks like this: [wavelet transform image]

2: Distorted x1. No oversampling. Sounds awful, and looks a mess as well: [wavelet transform image]

3: Distorted x8. 8x oversampling: sounds pretty good, but (in this extreme case) some chirpy nasties start to creep in towards the end: [wavelet transform image]

4: Distorted x256. The maximum 256x oversampling, which is only available for offline renders. This is essentially perfect: all the harmonics up to Nyquist, and nothing else. However, note that this 5 second file took approximately 50 seconds to render, while the rest took basically no time at all. [wavelet transform image]
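If you want to inspect renders like these yourself, a spectrogram works almost as well as the wavelet transform I used. A minimal Python sketch (the file name is hypothetical):

```python
# Sketch: plot a spectrogram of a render; aliasing shows up as extra
# sweeps mirroring off the Nyquist frequency.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("distorted_x1.wav")          # hypothetical render
f, t, sxx = spectrogram(x.astype(float), fs, nperseg=2048)
plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12))  # dB scale
plt.xlabel("time (s)"); plt.ylabel("frequency (Hz)")
plt.show()
```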

anonymous Fri, 01/14/2011 - 16:35

More very interesting information. There's nothing quite like audio clips complete with graphics to tie ideas together. Thanks for providing such a convincing demonstration. :smile:

The thing I am now curious about is how aliasing affects a complex signal. I would imagine that aliasing is readily identified in your examples because you are using a pure sine wave. Would it be fair to say that aliasing is less of a problem, or less detectable, when the signal is quite complex? If that sweep were instead a decidedly synthesized sound, would the aliasing be masked to some extent? Bear in mind I am not remotely trying to minimize the problem of aliasing, and I fully appreciate the power of a simple sine wave to clearly demonstrate the phenomenon, but I would like to better understand its importance in the recording of complex sounds and mixes.

Also, is this the Glue to which you refer: https://cytomic.com/products

IIRs Sat, 01/15/2011 - 01:35

jmm22, post: 361393 wrote: Would it be fair to say that aliasing is less of a problem or less detectable when the signal is quite complex?

No, not really. Remember, according to Fourier theory, every complex signal can be broken down into constituent sine waves, each of which will act just like the single sine wave example above.

The problem is that aliasing distortion is not related to the wanted signal in any musical sense, so even a tiny amount of it can be audible. The Distorted x1 example actually gives quite a good idea of what to listen for: the purity of the sine wave has vanished, and lots of unrelated rubbish has appeared. Complex signals do much the same thing, so they end up sounding harsh, brittle, and more 'digital'.
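You can see the per-component behaviour numerically too. A quick sketch (my own, just to illustrate the point): clip a two-tone signal and the spectrum fills with components that are not harmonically related to either tone.

```python
# Sketch: distort a 5 kHz + 6 kHz two-tone signal at fs = 40 kHz and
# list every spectral line; many land at folded, inharmonic frequencies.
import numpy as np

fs = 40_000
n = np.arange(fs)                              # exactly 1 second: 1 Hz bins
x = np.sin(2 * np.pi * 5_000 * n / fs) + np.sin(2 * np.pi * 6_000 * n / fs)
y = np.clip(3.0 * x, -1.0, 1.0)                # distortion adds harmonics
spectrum = np.abs(np.fft.rfft(y))
lines = np.flatnonzero(spectrum > 0.01 * spectrum.max())
print(lines)                                   # bin index == frequency in Hz
```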

jmm22, post: 361393 wrote: Also, is this the Glue to which you refer: https://cytomic.com/products

Yes.

BobRogers Sat, 01/15/2011 - 06:15

IIRs - A really good primer on aliasing. I'll probably use it in my partial differential equations class this semester. I usually do a short segment on sampling and discretization when I teach Fourier transforms.

The thing I have a hard time understanding with this is exactly what is going on at the digital-to-digital processing level. Your examples make complete sense if you are talking about a clipped 6 kHz analog signal that is left unfiltered. What is interesting is that the same thing happens with a 6 kHz signal that is digitized at 40 kHz and then clipped in the digital domain. Obviously the digitally processed signal is no longer the transform of a band-limited signal. But I'd be interested in its relationship to the clipped analog signal. I would assume that's known.
