I recorded a string quartet concert that lasted about an hour, capturing .aiff files to a laptop. I want to burn the files to CD, but even after cutting much of the unwanted banter between tracks the project is just over 900MB, too large for a standard CD-R. Honestly, I thought I recorded at 24/96 but now I'm not sure. The original files were much larger than that 900MB, and the versions I have now are at 16/48, which is what puts them at 900MB. I figured I could just switch to 44.1, but when I do, each track stretches out by a couple of seconds, completely changing the dynamics of the sound. Much slower, and obviously unwanted.
I'm still learning all of this, so I want to know whether I can convert the files from 48 to 44.1 without changing the 'sound' of each file, and hopefully lower the overall project size enough to burn onto a CD.
I have Logic Pro but am still learning that program, so I recorded using Audacity. Can anyone recommend a way to make this change while keeping the timing of each track the same? Does anyone suspect anything else about these files and why they're taking up so much space?
Also, going forward, is it even possible to burn an hour, or even a two-hour concert, of 24/96 files to consumer CD-Rs?
Thanks,
A
Comments
I don't know if Audacity can do it; I didn't see an obvious way, though I'd be a little surprised if it couldn't. Logic should be able to do it, but I don't have that software. As indicated above, it's most likely in the export/render/bounce options. You should be able to import your 16/48 stereo files and export them as 16/44.1.
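If you want to sanity-check the conversion outside either DAW, the same idea can be scripted. Here's a minimal sketch in Python (assuming the soundfile and scipy packages are installed; the file names are just placeholders), showing that a proper resample changes the sample count rather than the duration or pitch:

```python
# Proper sample-rate conversion 48 kHz -> 44.1 kHz: the sample count and
# the sample rate change by the same ratio, so timing and pitch are preserved.
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("track01_48k.aiff")      # placeholder file name
assert rate == 48000

# 44100/48000 reduces to 147/160, so resample by that rational factor.
converted = resample_poly(data, up=147, down=160, axis=0)

# Write 16-bit PCM at 44.1 kHz, ready for CD burning.
sf.write("track01_44k1.aiff", converted, 44100, subtype="PCM_16")
```

Whatever tool you end up using, the giveaway that it's doing a real conversion is that the track length stays exactly the same.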
I'm not familiar with audacity either, but one would assume that it would have the capability of actually re-sampling the file; either by saving it as an actual 44.1 file, exporting as such, or using a sample rate conversion plug of some kind.
Here's an instructional vid on how to accomplish this in Logic:
Regarding your first point about fitting your recordings on a CD, I think you have got confused between the recordings as data files (e.g. .aiff or .wav) and the Red Book CD-DA format used for playable CDs. They are not interchangeable, although most DAWs and other burning programs can do some sort of a job of converting one to the other. The data files will usually be significantly bigger than the resulting CD image; in particular, 24-bit data files will be at least 50% bigger than 16-bit ones purely due to the wordlength.
48kHz recordings have about 9% more data than a 44.1kHz version of the same performance, so a 700MB recording at the lower rate would come out at about 762MB at the higher rate, everything else (wordlength etc.) being the same.
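If it helps to see where those figures come from, uncompressed PCM size is just sample rate × wordlength × channels × duration. A quick sketch of the arithmetic for a 60-minute stereo recording (Python, purely illustrative):

```python
# Uncompressed PCM size = sample rate x (bits/8) x channels x seconds.
def pcm_megabytes(rate_hz, bits, channels=2, minutes=60):
    return rate_hz * (bits / 8) * channels * minutes * 60 / 1_000_000

for rate, bits in [(44100, 16), (48000, 16), (96000, 16), (96000, 24)]:
    print(f"{bits}-bit / {rate / 1000:g} kHz: {pcm_megabytes(rate, bits):.0f} MB")

# 16-bit / 44.1 kHz:  635 MB
# 16-bit / 48 kHz:    691 MB
# 16-bit / 96 kHz:   1382 MB
# 24-bit / 96 kHz:   2074 MB
```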
In the version of Audacity I use, there is no direct way of down-converting 48kHz recordings to 44.1kHz and saving them as files. Any method of doing this uses a digital sample-rate converter (SRC), and, as regular readers of these forums will know, I mistrust the sonics of SRCs. You may be able to use an Audacity Time Track and get acceptable conversion that way, but, in my experience, the results are not good.
I brought the tracks into Logic and bounced each one, converting to 44.1, and was able to make the CD. The originals were recorded at 24/96, and I thought I had converted them to 16/48, but they were actually at 16/96. And yes, originally I was just changing the sample-rate flag on the file for playback instead of doing an actual re-sample.
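For anyone else who hits the same "stretched out" symptom: relabelling 48 kHz samples as 44.1 kHz just makes the player step through them more slowly. A quick illustration of the arithmetic (Python, example figures only):

```python
# Relabelling the rate (no resample) plays the same samples back more slowly,
# so duration grows and pitch drops by the ratio of the two rates.
n_samples = 48000 * 240            # a 4-minute track recorded at 48 kHz
original   = n_samples / 48000     # 240.0 s
relabelled = n_samples / 44100     # ~261.2 s: about 8.8% longer and lower in pitch
print(original, relabelled, relabelled / original)
```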
Are consumer CD players able to read files at 16/48, or do they need to be at 44.1? And what media are record companies using to release 24/96 tracks?
Lastly, imagine 44.1 Hz and 48 Hz? I think I was a few K short on that one ;)
Thanks for all of the ideas everyone and thanks for the video Donny. I'm going to checkout that series.
cheers,
A
An article from SOS explaining CD format types for anyone reading this thread and making discs
http://www.soundonsound.com/sos/jan98/articles/cdformats.htm
While I agree that resampling degrades the audio I feel it's a relatively (extremely, actually) small loss compared to many other factors. I wouldn't worry about it until you've really nailed other parts of the process.
I notice you went to 16/96 then to 44.1. It's considered best to apply dithering/truncating (down to 16 bit) as the very last process.
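If it helps to see what that last dither/truncate step actually does, here's a bare-bones sketch of TPDF dither before truncating to 16 bits (not any particular mastering tool's algorithm, just the general idea, assuming float samples in the -1 to 1 range):

```python
import numpy as np

def dither_to_16bit(x, rng=np.random.default_rng(0)):
    """Apply TPDF dither (about +/-1 LSB at 16-bit) then truncate to int16.

    A bare-bones illustration of why dithering is done as the very last
    step: it randomises the rounding error left by the truncation.
    """
    lsb = 1.0 / 32768.0
    tpdf = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    dithered = np.clip(x + tpdf, -1.0, 1.0 - lsb)
    return np.round(dithered * 32768.0).astype(np.int16)
```

The point is that the dither noise masks the quantisation error, which is also why you only want to do it once, at the very end of the chain.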
My pattern is to record at 24/48 and export pre-masters at that setting. When I master them for CD I'll do all the layout (timing, fades etc.) and audio processing (eq, limiting etc.) at that setting, then export the whole thing as one file at 24/44.1. Then I'll import that file (to different software) and add track markers, EAN, text etc., dither/truncate and generate a DDP 2.0 file ready for replication. If a video is being done I'll do a simpler process that omits much of the layout, the resampling and the DDP file so I end up with a 16/48 WAV file. For video 48k is the standard. For demos and more casual projects I am not as rigorous and nobody seems to notice, but then I mostly record rock.
Other forms of music may benefit from other ways of working, like 44.1 start to finish or "decoupled" mixdown. But at this point I would just concentrate on the basics.
Some of this is new to me and I've made notes on the vocabulary and methodology here. I did know 48k was standard for video audio given the 24 frame rate. Are you making music videos? In a somewhat related topic, what are you using to shoot video, and what are you using to record audio? I've considered the Fostex (link removed) but the price has gone up a questionable $400.00 within the past year and there are still limitations although it's great quality for (DSLR) video audio.
Concentrating on the basics is always a great recommendation. Thanks.
Aaron, post: 435555, member: 48792 wrote: Some of this is new to me and I've made notes on the vocabulary and methodology here. I did know 48k was standard for video audio given the 24 frame rate.
The 48kHz sample rate generally applies whether it's 24 fps film, 25 fps PAL video or 29.97 fps NTSC video.
Aaron, post: 435555, member: 48792 wrote: Are you making music videos? In a somewhat related topic, what are you using to shoot video, and what are you using to record audio? I've considered the Fostex www.bhphotovideo.com/c/search?atclk=Brand_Fostex&ci=14934&N=3992462091+4291439885 but the price has gone up a questionable $400.00 within the past year and there are still limitations although it's great quality for (DSLR) video audio.
I have mostly done live videos, everything from smart phone audio/video to multi-camera with multitrack sound. My YouTube channel is https://www.youtube.com/user/bouldersoundguy.
I record straight to the video camera (@ 48kHz) through an XLR interface whenever I have a good shot at getting a good 2-track mix. It's not necessarily always in stereo; sometimes I decide that two distinct mono audio tracks I can mix and sweeten directly in the NLE are more advantageous. If it has to be stereo, I'll still send the best stereo mix I can to a stationary camera that's usually doing a wide-shot. Putting a little effort into getting a good, live, realtime mix saves a ton of time later. Remixing the audio and syncing it to the video in post is always a possibility, but if it's a live performance I feel like it should be live audio/video, warts and all. It's not like most people would go back and redo a solo, and not have the video match up. Sometimes a little preparation can eliminate 2-3 very time-consuming steps in the process.
Even if I have to multi-track to refine the mix later, in addition to whatever I find to be the best recording medium(s) for the multi-track, I'll do as good a stereo mix as possible to the nearest camera. Then I'll let the other camera(s) capture crowd-noise via their on-board mics. Then fine-tune the multi-track mix export at 48k and drop it in the NLE, and mix in crowd-noise where appropriate. If you're like me and can't afford any kind of SMPTE or Genlock system, I would recommend that once you start the camera(s), don't stop until the end of the set, or the end of the entire performance. Even if you're walking to another location, and shooting footage of your shoes, keep the camera rolling. It saves a ton of time over manually syncing up a bunch of video fragments. I cannot stand out-of-sync audio/video, and on occasions when I've handed my audio to someone else who shot the video I've been disappointed that they didn't have the sync nailed down. The videos end up looking like they're lip-synced, or over-dubbed when I know it's a young 4-piece band 100% live.
'">In Memory of Elizabeth Reed - Video
FWIW
Nice work on the videos and thanks again for all your ideas. I've posted a link to this before, but Plural Eyes is software that syncs video to externally recorded audio, by matching camera audio to external audio. I haven't used it yet, but have heard nothing but great things about it. https://www.redgiant.com/products/pluraleyes/
This seems like a good time to segue to something that I'll post as another topic soon, but maybe you all can send some initial recommendations.
I've linked up with members of a band who'd like to collaborate on recording studio audio and later creating a concert video, similar to what I've seen from your links. (With my 2-track limit it obviously wouldn't be exquisite live multi-tracking, so the concert audio would be questionable unless I'm able to get a nice feed from the board and send it in-camera.) Anyway, they'd like me to record individual electric instruments, which I haven't done yet. With only the 2 tracks, what methods do you all recommend for separately recording guitar, bass, and keys, for example? Depending on the quality of their amps, should I mic the amps, or would recording directly to the computer sound better? I would assume the chain would be instrument -> D.I. box -> audio interface -> computer. Correct me if I'm wrong. So I want to know about D.I. boxes and varieties you would recommend checking out: active or passive, different price points, and recommendations at those various prices.
With all this being said, some eye and ear candy for the day ->
cheers
dvdhawk, post: 435561, member: 36047 wrote: I record straight to the video camera (@ 48kHz) through an XLR interface whenever I have a good shot at getting a good 2-track mix. It's not necessarily always in stereo; sometimes I decide that two distinct mono audio tracks I can mix and sweeten directly in the NLE are more advantageous. If it has to be stereo, I'll still send the best stereo mix I can to a stationary camera that's usually doing a wide-shot. Putting a little effort into getting a good, live, realtime mix saves a ton of time later. Remixing the audio and syncing it to the video in post is always a possibility, but if it's a live performance I feel like it should be live audio/video, warts and all. It's not like most people would go back and redo a solo, and not have the video match up. Sometimes a little preparation can eliminate 2-3 very time-consuming steps in the process.
When I'm the guy doing the house mix and sometimes operating a camera and there are amps and PA in the room I'm not really in any position to reliably generate a good 2-track mix. I've done it but it's not something I want to depend on.
dvdhawk, post: 435561, member: 36047 wrote: Even if I have to multi-track to refine the mix later, in addition to whatever I find to be the best recording medium(s) for the multi-track, I'll do as good a stereo mix as possible to the nearest camera. Then I'll let the other camera(s) capture crowd-noise via their on-board mics. Then fine-tune the multi-track mix export at 48k and drop it in the NLE, and mix in crowd-noise where appropriate. If you're like me and can't afford any kind of SMPTE or Genlock system, I would recommend that once you start the camera(s), don't stop until the end of the set, or the end of the entire performance. Even if you're walking to another location, and shooting footage of your shoes, keep the camera rolling. It saves a ton of time over manually syncing up a bunch of video fragments. I cannot stand out-of-sync audio/video, and on occasions when I've handed my audio to someone else who shot the video I've been disappointed that they didn't have the sync nailed down. The videos end up looking like they're lip-synced, or over-dubbed when I know it's a young 4-piece band 100% live.
'">In Memory of Elizabeth Reed - Video
FWIW
I agree about sync. The couple of times I've given my mixes to people making the video I've noticed less precise sync.
I've used SMPTE and RC Time Code in a linear video editing setup, but for live music and non-linear editing the audio tracks work fine. I've been manually syncing video so long I can't see why I'd bother with automated syncing. Even when I match the final audio to the camera audio I find I often have to nudge it slightly to look right, and I also find it's generally better to have the audio slightly behind the video. I think the brain expects a certain amount of acoustic delay when the source is perceived to be at a distance. In any case it's far better than the audio leading the video.
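To put a rough number on that acoustic delay: sound travels at only about 343 m/s, so a source that looks ten metres away in the shot would naturally arrive roughly a frame late. A back-of-the-envelope sketch (figures are illustrative only):

```python
# Acoustic delay for a source at various distances, expressed in 30 fps frames.
speed_of_sound = 343.0   # m/s at roughly room temperature
for metres in (3, 10, 20):
    delay_ms = metres / speed_of_sound * 1000
    print(f"{metres} m: {delay_ms:.0f} ms ({delay_ms / 33.3:.1f} frames at 30 fps)")
# 3 m: 9 ms (0.3 frames), 10 m: 29 ms (0.9 frames), 20 m: 58 ms (1.8 frames)
```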
For sure, the environment and how many things one can realistically multi-task are big factors.
Sometimes if the audio from an external source won't sync up perfectly, you have to go back to the DAW and trim a fraction of a second off the front of the audio track and re-lay it into the NLE. The NLE's resolution at 30 frames per second (or 29.97) is relatively coarse, compared to the DAW's 48,000 samples per second.
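To put numbers on how coarse that is: one video frame spans well over a thousand audio samples, so a sync error the NLE timeline can't even resolve can still be worth nudging in the DAW. A quick sketch of the arithmetic:

```python
# One video frame versus audio samples at 48 kHz, showing why sub-frame
# nudges sometimes have to happen in the DAW rather than the NLE.
audio_rate = 48000          # samples per second
for fps in (29.97, 30.0):
    frame_ms = 1000.0 / fps
    samples_per_frame = audio_rate / fps
    print(f"{fps} fps: one frame = {frame_ms:.1f} ms = {samples_per_frame:.0f} samples")
# 29.97 fps: one frame = 33.4 ms = 1602 samples
# 30.0 fps:  one frame = 33.3 ms = 1600 samples
```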
Last question first: No. 80 minutes is the absolute limit of a standard audio CD. A consumer CD HAS to be formatted at 16-bit 44.1kHz if you want it to play in an audio CD player. Some audio CD players will play mp3 format files, many will not. And why degrade the audio?
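For the capacity arithmetic: an 80-minute disc holds 360,000 sectors either way, but audio (Red Book) sectors carry 2,352 bytes of payload versus 2,048 bytes for Mode 1 data sectors, which is why the same "700 MB" disc holds roughly 847 MB of 16/44.1 stereo audio and still tops out at 80 minutes. A quick sketch:

```python
# Capacity arithmetic for an 80-minute CD.
sectors = 80 * 60 * 75                      # 75 sectors per second
audio_bytes = sectors * 2352                # Red Book CD-DA payload per sector
data_bytes  = sectors * 2048                # Mode 1 data payload per sector
print(audio_bytes / 1e6)                    # ~846.7 MB of 16/44.1 stereo audio
print(data_bytes / 1e6)                     # ~737.3 MB (the advertised "700 MB" ~= 703 MiB)
```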
There are enhanced audio format discs, but they're a bastardized DVD format to accommodate the larger file size. But again, not a standard 'consumer CD-R' that everyone will be able to play.
I'm not familiar enough with Audacity or Logic Pro to tell you exactly how to do it right, but it seems you are converting the sample-rate in the wrong place. It should not alter the length or sound of the music. In other programs it's either called "Bounce" or "Export" and it takes your DAW tracks and gives you the option to take your multi-track mix to stereo properly formatted for a typical consumer CD player. (2-track / 16-bit / 44.1kHz)
Hopefully someone else who uses Audacity or LogicPro can tell you exactly what to look for.