
What sample rates are people using to record? I want to use more than 44.1, probably 96 kHz. What is the audible difference between 96 and 192, and when, if ever, is 192 kHz recording necessary?

Later I will be recording audio for video, and if 48 kHz is the standard for DVDs, should I keep it there for syncing purposes? Will there be any advantage to recording at the higher rates for video?

cheers,

a

Comments

pcrecord Mon, 08/17/2015 - 18:49

I guess one would use 192kHz when the best resolution is needed. Maybe in mastering work.
In the past, going to a higher resolution to record and mix was needed to get better plugin processing. But nowadays, most DAW software and plugins are able to upsample their work internally.

I'm recording, mixing and mastering at 24-bit/96kHz. I found it sounds the best for me and the workflow I use.

An interesting point is that some converters sound better at 48kHz than others do at 96kHz. (That doesn't mean they don't both sound better at 96.)
I understand it like the capture unit in a DSLR: some 18-megapixel point-and-shoot cameras don't do nearly as well as some 12-megapixel pro cameras...

I think you should decide for yourself by experimenting with the difference, then conclude whether the extra computer resources and disk space are worth it to you ;)

kmetal Tue, 08/18/2015 - 08:50

Recording at high sample rates has advantages if your setup can handle the workload.

Higher sample rates mean less latency with VSTis and, I believe, in general. Also, when you're at the maximum sample rate, you're future-proofing your work (more important in commercial situations, where re-issues, remasters, and numerous formats of the same song are required).
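To put a quick number on the latency point: one-way buffer latency in milliseconds is just the buffer size in samples divided by the sample rate. A minimal Python sketch, with purely illustrative buffer sizes:

```python
# One-way buffer latency (ms) = buffer size (samples) / sample rate (samples/s) * 1000.
# The buffer sizes here are illustrative values, not anyone's actual settings.
for rate in (44_100, 48_000, 96_000, 192_000):
    for buffer_samples in (64, 128, 256):
        latency_ms = buffer_samples / rate * 1000
        print(f"{rate:>6} Hz, {buffer_samples:>3}-sample buffer: {latency_ms:5.2f} ms")
```

Holding the buffer at 128 samples, that is roughly 2.9 ms at 44.1 kHz versus about 1.3 ms at 96 kHz; the catch is that the CPU has to fill that buffer more than twice as often.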

It's my feeling that the differences we hear between tape and PCM recording have to do with linear vs non-linear. This is part of what contributes to a 'grainy' sound. I don't have any real proof, it's just a theory. But in between all those samples is essentially nothing, no data.

The more samples in the data, I believe, the better. I think a lot of the extremely precise digital EQs and compressors are highlighting some of the 'no data', or 0's, in the coded audio.

I record at the highest sample rates I can. Usually it's 24/44.1 at the studio, because most destinations are CDs or MP3s.

When you get into out-of-the-box SRC, you eliminate some of the 'odd math' artifacts from, say, going from 96 down to 48. I've never really 'heard the difference' when I tested this on my old M-Audio/Mackie HR setup. But I'm an impure purist of sorts, and if I can capture more samples of the moment, I will.
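A minimal Python sketch of that kind of SRC listening/measurement test, assuming scipy's resample_poly as a stand-in for whatever converter your DAW or batch tool uses, and a simple test tone rather than a real mix:

```python
import numpy as np
from scipy.signal import resample_poly

fs_src, fs_dst = 96_000, 48_000
t = np.arange(fs_src) / fs_src                 # one second of audio at 96 kHz
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone, far below either Nyquist

# 96 kHz -> 48 kHz is a clean 2:1 ratio; resample_poly low-pass filters and decimates in one step.
converted = resample_poly(tone, up=1, down=2)

# The same tone generated natively at 48 kHz, for comparison.
t_dst = np.arange(fs_dst) / fs_dst
reference = 0.5 * np.sin(2 * np.pi * 1000 * t_dst)

# Ignore the first/last chunk, where the resampling filter's edge transients live.
trim = 1000
diff = converted[trim:-trim] - reference[trim:-trim]
print(f"peak conversion error: {20 * np.log10(np.max(np.abs(diff)) + 1e-12):.1f} dBFS")
# The interesting question is how far below the -6 dBFS tone that error sits.
```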

DonnyThompson Wed, 08/19/2015 - 03:13

Here are some excerpts from an online article I found: "The Science Of Sample Rates" by Justin Colletti.
I found them interesting, and thought they were appropriate to share on this thread.

source: http://www.trustmei…

Improvements at 44.1: Fixing the Filters

The earliest digital converters lacked well-designed anti-aliasing filters, which are used to remove inaudible super-sonic frequencies and keep them from mucking up the signals that we can hear. Anti-aliasing filters are a basic necessity that was predicted by the Nyquist Theorem decades ago. Go without them and you are dealing with a signal that is not bandwidth limited, which Nyquist clearly shows cannot be rendered properly. Start them too low and you lose a little bit of the extreme high-end of your frequency response. Make them too steep and you introduce ringing artifacts into the audible spectrum.

It’s a series of trade-offs, but even at 44.1, we can deal with this challenge. Designers can over-sample signals at the input stage of a converter and improve the response of filters at that point. When this is done properly, it’s been proven again and again that even 44.1kHz ( http://www.aes.org/… ) can be completely transparent in all sorts of unbiased listening tests. But that doesn’t mean that all converter companies keep up with what’s possible. Sometimes different sampling rates can and do sound significantly different within the same converter. But this is usually because of design flaws – purposeful or accidental – at one sampling rate or another.
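A rough illustration of that filter trade-off, sketched in Python with scipy (the tap counts and cutoffs are made up for the example, not anything a real converter uses): a short anti-aliasing filter cannot be both flat at 20 kHz and quiet by 22.05 kHz, while a long, steep one manages both at the cost of a much longer, ringing impulse response.

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 44_100            # CD rate; Nyquist is 22.05 kHz

# Two hypothetical anti-aliasing filters:
gentle = firwin(31,  20_000, fs=fs)   # short: little ringing, but a wide, shallow transition band
steep  = firwin(511, 21_000, fs=fs)   # long: brick-wall-ish, but an ~11.6 ms impulse response that rings

for name, taps in (("gentle", gentle), ("steep", steep)):
    w, h = freqz(taps, worN=16384, fs=fs)
    db = 20 * np.log10(np.abs(h) + 1e-12)
    at_20k = db[np.argmin(np.abs(w - 20_000))]   # how much audible top end is lost
    at_nyq = db[np.argmin(np.abs(w - 22_050))]   # how well content at Nyquist is suppressed
    print(f"{name:6s}: {len(taps):3d} taps ({len(taps) / fs * 1e3:5.2f} ms), "
          f"{at_20k:6.1f} dB at 20 kHz, {at_nyq:6.1f} dB at Nyquist")
```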

When More is Better: Making The Filters Even Better

With all that said, there are a few places where higher sample rates can be a definite benefit. In theory, rates around 44.1kHz or 48kHz should be near-perfect for recording and playing back music. Unless the Nyquist Theorem is ever disproved, it stands that any increase in sample rates cannot increase fidelity within the audible spectrum. At all. Extra data points yield no improvement.

In practice, trade-offs necessitated by anti-aliasing might cause you to lose a few dB of top-end between 17kHz and 20kHz – the very upper reaches of the audible spectrum. Few adults over the age of 35 or so can even hear these frequencies, and there is currently no evidence to suggest that even younger people are influenced by frequencies above the audible range.

When properly designed, a slightly higher sample rate may allow us to smooth out our super-high frequency filters and keep them from introducing audible roll-off or ringing which may be perceived by younger listeners (if they’re paying any attention).
But, be careful of designers who go for super-sonic sampling rates and set their filters too high. If you include too much super-sonic information in the signal it becomes likely that you will introduce super-high frequency “intermodulation distortion” on playback. It turns out that in many cases, we can hear the sound of higher sample rates not because they are more transparent, but because they are less so. They can actually introduce unintended distortion in the audible spectrum, and this is something that can be heard in listening tests.
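That intermodulation point is easy to demonstrate numerically. A small Python sketch, with made-up tone frequencies and a crude second-order term standing in for an imperfect playback chain: two purely ultrasonic tones come out of the nonlinearity carrying an audible difference tone.

```python
import numpy as np

fs = 96_000                                   # high enough to carry 24 kHz and 27 kHz content
t = np.arange(fs) / fs

# Two ultrasonic tones: inaudible on their own, and absent from a 44.1 kHz recording anyway.
ultra = 0.4 * np.sin(2 * np.pi * 24_000 * t) + 0.4 * np.sin(2 * np.pi * 27_000 * t)

# A crude model of a slightly nonlinear playback chain (amp/tweeter): y = x + 0.2 x^2.
played = ultra + 0.2 * ultra**2

# The second-order term creates a difference tone at 27 kHz - 24 kHz = 3 kHz, squarely audible.
spectrum = np.abs(np.fft.rfft(played)) / len(played)
freqs = np.fft.rfftfreq(len(played), 1 / fs)
level_3k = 20 * np.log10(spectrum[np.argmin(np.abs(freqs - 3_000))] + 1e-12)
print(f"intermodulation product at 3 kHz: {level_3k:.1f} dB")
```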

When More is Better: Oversampling for DSP

When you go beyond the mere recording and playback of sound and into the world of digital signal processing, it becomes clear that higher sampling rates actually can help. But the solution might be a different one than you’d expect.
When it comes to some non-linear audio processors like a super-fast compressor, a saturator, a super-high-frequency EQ, or a vintage synthesizer emulation, oversampling can be a major benefit. This in and of itself might seem like a great excuse to immediately jump up to 88.2 kHz or higher.

But not so fast: most plugin designers, knowing this full well, have written oversampling into their code. Even in a 44.1kHz session, plugins that benefit from oversampling automatically increase their internal sampling rate. To gain the full benefits of this, it’s important to note that the audio doesn’t have to be recorded at the higher sample rate; it’s just the processing that must happen there. So, unless you are using older or inferior plugins that have taken shortcuts and neglected to include oversampling in their code, converting an entire audio session to a higher rate would make your mix take up more processing power without adding any sonic benefit.
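A minimal sketch of what that internal oversampling buys, in Python (hard clipping stands in for whatever non-linear process a plugin applies; the tone and clip level are arbitrary): clipping at the base rate folds ultrasonic harmonics back into the audible band, while clipping at 4x and filtering back down largely does not.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
tone = 0.9 * np.sin(2 * np.pi * 15_000 * t)        # a loud 15 kHz tone

def clip(x):                                       # a deliberately nasty non-linear process
    return np.clip(x, -0.5, 0.5)

# 1) Clip at the base rate: the clipped tone's 45 kHz harmonic folds back to |45k - 44.1k| = 900 Hz.
native = clip(tone)

# 2) Oversample 4x, clip, band-limit back down: the ultrasonic harmonics are filtered out instead of folding.
oversampled = resample_poly(clip(resample_poly(tone, 4, 1)), 1, 4)

def level_at(x, freq_hz):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - freq_hz))] + 1e-12)

print(f"alias at 900 Hz, clipped at 44.1 kHz : {level_at(native, 900):.1f} dB")
print(f"alias at 900 Hz, clipped oversampled : {level_at(oversampled, 900):.1f} dB")
```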

But don’t take my word for this – Try it yourself. Up-sample an entire mix and then try a null test with your original file.

In my experience, the only things that will fail to null are A) processors that have a random time element – like modulation effects – and cannot null; B) plugins that have different delay amounts and will not null until you compensate for the delay; and C) processors that neglect to include oversampling when they should. Very few of the latter still exist. And thankfully so, because oversampling has led to huge improvements in the quality of digital processing. Finally, after decades of people trying, there are actually some software compressors that I like. A lot.
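For anyone who wants to try the mechanics of a null test without leaving Python, here is a minimal sketch. It uses a simple up/down round trip rather than a fully re-processed mix, the filename is hypothetical, and soundfile plus scipy handle the I/O and resampling:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

# Hypothetical 44.1 kHz bounce; substitute your own file.
mix, fs = sf.read("my_mix_44k1.wav")

# Round trip 44.1 kHz -> 88.2 kHz -> 44.1 kHz. The 2:1 ratios keep timing aligned,
# so no manual delay compensation is needed before subtracting.
round_trip = resample_poly(resample_poly(mix, 2, 1, axis=0), 1, 2, axis=0)

residual = round_trip[:len(mix)] - mix
peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)
print(f"null-test residual peak: {peak_db:.1f} dBFS")
# Whatever refuses to null is either content right at the band edge (which the resampling
# filters touch) or, in a real re-processed mix, one of the three cases listed above.
```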

When More is Better: Converter Design

Dan Lavry is one of the most respected designers of audio converters in the world, and a die-hard opponent of ultra-high sample rates like 192 kHz. But even he would be among the first to admit that some slight increase in sampling rates can make designing great-sounding converters easier and less expensive. In an influential and now-famous white paper, he writes:

“The notion that more is better may appeal to one’s common sense. Presented with analogies such as more pixels for better video, or faster clock to speed computers, one may be misled to believe that faster sampling will yield better resolution and detail. The analogies are wrong. The great value offered by Nyquist’s theorem is the realization that we have ALL the information with 100% of the detail, and no distortions, without the burden of “extra fast” sampling.

“Nyquist pointed out that the sampling rate needs only to exceed twice the signal bandwidth. What is the audio bandwidth? Research shows that musical instruments may produce energy above 20 KHz, but there is little sound energy at above 40KHz. Most microphones do not pick up sound at much over 20KHz. Human hearing rarely exceeds 20KHz, and certainly does not reach 40KHz.

“The above suggests that [even] 88.2 or 96KHz would be overkill. In fact all the objections regarding audio sampling at 44.1KHz … are long gone by increasing sampling to about 60KHz.”

To him, the issue is not about whether 44.1kHz is the last stop. It’s clear that it rests on the cusp of the point of diminishing returns, and that by the time you’ve reached 60 kHz you’ve exhausted all the theoretical benefits you could ever add. The real benefits to be had are the ones that come from improving implementation, not from ever-increasing sample rates. With that said, if you think it’s better to overshoot your mark and waste some power and stability rather than undershoot it and potentially leave some theoretical audio quality on the table, then switching from 44.1kHz to 88.2kHz seems like a valid argument. Properly designed, 88.2kHz shouldn’t be a huge improvement, but it can make good design easier and less expensive, and it shouldn’t hurt either. But beyond that, things start to get a little sketchy.

You can read more here: http://www.trustmei…

Boswell Wed, 08/19/2015 - 09:26

The old articles based on steady-state hearing models still make interesting reading. It's not until you extend the work to deal with transients that you realise that the human hearing mechanism is not a simple system that can't hear anything above a certain number of Hz.

From a 2012 post:

The thing about hearing tests showing that you can hear only to a certain number of KHz is that they are performed under pseudo steady-state conditions using sinewaves, and don't take any account of the hearing system's response to transients and non-sinusoidal waveforms. I have a theory that these last two not only go up to higher frequencies but also fall off less rapidly with age. My last audiometric tests gave a figure for my ears of around 15KHz, yet I can easily tell the difference between a 7KHz sinewave and a 7KHz squarewave, where all the differences between these two are at 21KHz and higher. I can also tell the difference between the same transient waveform sampled at 44.1KHz and 96KHz, particularly where the source is something like a pair of Tingsha bells. Spectral analysis of the 96KHz waveform of this type of bell shows energy going up to 30KHz, with the limit probably being due to the microphone and not the bell. If I can hear these differences on a single demonstration sound source, I argue that they must also be present when my ears are presented with a complex dynamic source such as an orchestra.
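Boswell's 7 kHz example is easy to reason about with a few lines of Python: a square wave differs from a sine only at its odd harmonics, so the question is simply which of those partials fit under each rate's Nyquist limit.

```python
f0 = 7_000                                 # fundamental of the sine/square comparison
for fs in (44_100, 96_000):
    nyquist = fs / 2
    # Odd harmonics of a 7 kHz square wave that a recording at this rate can carry:
    partials = [n * f0 for n in range(1, 20, 2) if n * f0 < nyquist]
    print(f"{fs} Hz sampling carries partials at {partials} Hz")
```

At 44.1 kHz the only thing separating the two waveforms is the single partial at 21 kHz; at 96 kHz the 35 kHz partial survives as well, which is consistent with the Tingsha-bell spectra showing energy up to about 30 kHz.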

audiokid Tue, 09/08/2015 - 20:25

Boswell, post: 431730, member: 29034 wrote: The old articles based on steady-state hearing models still make interesting reading. It's not until you extend the work to deal with transients that you realise that the human hearing mechanism is not a simple system that can't hear anything above a certain number of Hz.

From a 2012 post:

I remember that post, Bos (y)

