just a quick question:

sampling rates - will a higher rate give you higher quality? Or is this just a measurement of how fast the interface syncs with the clock? I am a little confused about this.

Right now I am running my Firestudio project at 92kHz into my pc, but it can run at almost any sampling rate. Is running it this high necessary or should I tone it down to 48kHz or something?

Also, what is the deal with buffer size? What is a good level to run this at? The FSP can go from like 100-5000 and I'm really not sure what to use.
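For what it's worth, that buffer setting is counted in samples, and the latency it adds is roughly buffer size divided by sample rate. A quick back-of-the-envelope sketch (the numbers are illustrative, not FireStudio-specific):

```python
# Rough latency math for an audio interface buffer.
# Buffer sizes here are illustrative examples, not FSP presets.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a buffer of the given size."""
    return buffer_samples / sample_rate_hz * 1000

# Small buffer: low latency, but the CPU must service it more often.
print(round(buffer_latency_ms(128, 48_000), 2))   # ~2.67 ms
# Large buffer: easier on the CPU, but noticeable delay while tracking.
print(round(buffer_latency_ms(2048, 48_000), 2))  # ~42.67 ms
```

The usual rule of thumb: small buffer while tracking (so you can monitor without echo), large buffer while mixing (so plugins don't choke the CPU).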

Comments

RemyRAD Thu, 01/15/2009 - 23:28

92kHz? I'm hoping those are typos?

44.1kHz/48kHz/88.2kHz/96kHz/192kHz are what you commonly find these days.

The analogy is analog tape speed. Faster is better, but not always the most practical. I'm an old-school practical engineer. When I recorded on analog tape professionally, most work was at 15 IPS. If we wanted a greater, more urgent-sounding presence and the budget permitted, we'd go 30 IPS. Because they sound different, we'd make an educated decision about which one put our best foot forward. But once you know how digital works and how your recording is generally going to be delivered, it's sometimes more practical to work within those constraints: 16-bit/44.1kHz, which is required for CD release and most MP3 downloads.

Because of that, it really comes down to your choice of microphones, preamps, mixers, EQ, your software, your plug-ins, your engineering chops. So you choose the sound, not the specs.

Maybe one day I'll be able to counter a tenor?
Ms. Remy Ann David

BobRogers Fri, 01/16/2009 - 05:11

Another way to say the same thing as Remy: Think of the graph of a sound wave as a function of time. Voltage (or sound pressure or something related) on the vertical axis as a function of time on the horizontal axis. You digitize that two-dimensional picture by chopping it up and converting the chopped-up bits to ones and zeros. The sample rate determines how finely you chop up the horizontal axis. The higher the rate, the finer you chop it up. The bit depth (16 vs. 24) determines how finely you chop up the vertical axis. The bigger the number, the finer you chop it up.

The finer you chop, the better the digital representation of the analog signal in terms of sound quality. However, the finer you chop, the more data your computer has to handle and the more data you have to store.
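The chopping picture can be sketched in a few lines of Python (a toy illustration of the idea, not real converter code):

```python
import math

def digitize(signal, duration_s, sample_rate_hz, bit_depth):
    """Chop the horizontal axis (time) at the sample rate and the
    vertical axis (amplitude) into 2**bit_depth integer levels."""
    n_samples = int(duration_s * sample_rate_hz)
    levels = 2 ** (bit_depth - 1)  # signed range, e.g. +/-32768 at 16-bit
    out = []
    for n in range(n_samples):
        t = n / sample_rate_hz               # horizontal chop: sample instant
        v = signal(t)                        # "analog" value in [-1.0, 1.0]
        out.append(round(v * (levels - 1)))  # vertical chop: nearest level
    return out

# One millisecond of a 1 kHz sine standing in for the analog source:
sine = lambda t: math.sin(2 * math.pi * 1000 * t)
coarse = digitize(sine, 0.001, 8_000, 16)   # 8 samples in that millisecond
fine = digitize(sine, 0.001, 48_000, 16)    # 48 samples: a much finer chop
print(len(coarse), len(fine))  # 8 48
```

Raising the sample rate multiplies the number of samples; raising the bit depth multiplies the number of amplitude levels each sample can land on.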

I like to record everything at 24 bits, since I can record at a lower volume (and give myself more headroom) with the same or better quality than 16 bits. I record at 44.1 kHz for pop and jazz when I am recording a lot of tracks and usually playing myself, so I don't have to worry about overtaxing the CPU; everything runs smoothly. I record at 88.2 kHz if I am recording a few tracks (e.g. singer/songwriter or classical) and I am not doing any playing, so I can deal with any computer problems that come up.
Of course, all of this gets bounced down to 16-bit/44.1 kHz when put on a CD.
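The storage cost of chopping finer is easy to put numbers on. For uncompressed PCM, bytes = sample rate x (bit depth / 8) x channels x seconds:

```python
# Back-of-the-envelope uncompressed PCM storage cost.
def mb_per_minute(sample_rate_hz: int, bit_depth: int, channels: int = 1) -> float:
    """Megabytes per minute of uncompressed audio."""
    return sample_rate_hz * (bit_depth // 8) * channels * 60 / 1_000_000

print(round(mb_per_minute(44_100, 16), 1))  # ~5.3 MB per mono minute
print(round(mb_per_minute(88_200, 24), 1))  # ~15.9 MB per mono minute
```

So stepping up from 16/44.1 to 24/88.2 roughly triples your disk usage per track.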

You should try to do some tests to see how big the quality difference is between the various sample rates. I've found the differences to be very small after they are bounced down to a CD. I recorded a demo of an electric keyboard so I'd have something reproducible. (This is not the best test since my "analog" source is converted from a digital sample. If you have something truly analog like a music box, that would be better.)

Codemonkey Fri, 01/16/2009 - 19:52

music293...

BobRogers wrote "However, the finer you chop, the more data your computer has to handle and the more data you have to store."

That's the reason you don't *necessarily* want to.
To save space, conversion hassles and system load.

Go on, you make a Pentium 4 run 8 tracks at 96kHz.
Add a plugin or six. Try twelve? CPU melted yet?
Now make it 44.1kHz. Boom, roughly half the load.
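The rough numbers behind that: per-track sample throughput scales linearly with sample rate, and (very roughly, plugin by plugin) so does the DSP work for the same plugin chain:

```python
# Dropping 96 kHz to 44.1 kHz cuts per-track sample throughput by:
reduction = 1 - 44_100 / 96_000
print(f"{reduction:.0%}")  # 54%
```

Real-world CPU load isn't perfectly linear in sample rate, but "about half" is the right ballpark.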

Cucco Fri, 01/16/2009 - 19:56

music293 wrote: I might be wrong here, but wouldn't it be best to record at the highest sample rate that your interface allows for, no matter what format you intend your tracks to wind up as?

No - not at all.

Sometimes, it's not just about what you can do but why you're doing it. If you're constantly recording at 192kHz/32 bit, then you're going to be burning up hard drives REALLY quickly.

Also, while there may be a mild difference in quality from going up in resolution (even after dithering/SRC down to 16/44.1kHz), the difference is negligible at best, and moving a single mic 1" will make a bigger difference any day of the week.

I would also submit that not all music benefits from higher resolutions. Not to say that you could just go lowering it willy-nilly, but certainly 16/44.1 would suffice.

I'd say it's a balance between the mild improvement versus the extra processing time and waste of resources.

Granted, for higher-end clients in great venues, I'll crank up the DSD or 192kHz rigs, but for 90% or more of my clients, it's 88.2 or 44.1.

Just some thoughts -
J

BobRogers Sat, 01/17/2009 - 06:12

And of course Murphy's law says that your CPU will always overload in the middle of the best performance of your most important client. And it's not Murphy but physics that says that your CPU will overload when you are recording the maximum number of tracks at once - wasting the time of the maximum number of people.

Of course, a big point of expensive systems with lots of processors is that it allows you to take the "highest rate available" approach. You just need to weigh costs and benefits.