Hey guys, I'm wondering whether saving my mixes as .wav files multiple times results in any quality reduction. I'm talking about minor changes like opening the file in audio software (Adobe Audition 3), cutting off the track's long start, and then saving the changes.
The file isn't encoded over and over again this way, right? So will I lose any (even slightly audible) quality with this practice?
I export my mixes as 16-bit, 44.1 kHz WAV files, without dithering.
Sorry for my rather lame question and thanks for answers!
Comments
Well, maybe I should add that I use Audition only for post-production. So in fact, for "creating" the songs I use Reason, save the mixdown as a WAV file and only THEN open the file in Audition and do the final tweaking, like adding a small amount of limiting for a little more volume, normalizing, etc., and then I save the file (1st edit). Often, after this, I find that the song's start isn't right, so I open the file again in Audition, cut the start and save the file again (2nd edit to the file).
I'm working with a .wav file all along. This is the multiple saving I'm asking about: whether it could possibly reduce the quality or not.
When you open a file in any DAW, it is not changed in any way just by opening it. If you trim time off the beginning or end, you still aren't changing anything in between; you're just shortening the file, so that doesn't alter the original audio either. Opening it again and trimming a little more off still isn't changing any of the music, speech or whatever.
Now, if you do process the track, say with a compressor/limiter or some other plugin, that will alter the original track. Provided you use quality plugins, this processing should not be undesirable. If the plugins are low quality, it is possible to degrade the track, but you can always hit Ctrl+Z and undo the processing. The editing does not become destructive (written into the file) until you save the file after processing.
Always make a copy of the audio file you want to work on, no matter how good your gear and programs are.
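As a rough illustration of the trimming point, here is a minimal Python sketch using the standard wave module; the file names and the trim length are made-up placeholders. It cuts some frames off the start of a 16-bit PCM WAV and checks that the frames that remain come out byte-for-byte identical.

```python
# Sketch: trim frames from the start of a PCM WAV and verify the remaining
# frames are bit-identical to the original. Names are placeholders.
import wave

TRIM_FRAMES = 44100  # e.g. cut one second at 44.1 kHz

with wave.open("mix.wav", "rb") as src:
    params = src.getparams()
    original = src.readframes(src.getnframes())

frame_size = params.sampwidth * params.nchannels
trimmed = original[TRIM_FRAMES * frame_size:]

with wave.open("mix_trimmed.wav", "wb") as dst:
    dst.setparams(params)          # same format: no resampling, no dithering
    dst.writeframes(trimmed)

with wave.open("mix_trimmed.wav", "rb") as chk:
    assert chk.readframes(chk.getnframes()) == trimmed  # bytes unchanged
```

Writing to a new file name, as here, also keeps the original safe, which is the point of the "always make a copy" advice.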
OK, I was only wondering whether the actual re-saving of the file itself degrades the quality because of some sort of encoding. I came to this question through my experience with editing and re-saving an mp3 file: the more you re-save it, the less clear the sound becomes. I presume the lossy encoding is entirely responsible for that reduced quality, and so it won't happen when I re-save a WAV file... Does that make sense?
lukas - are you saving the .wav file back to the same name each time, thus overwriting the previous file? If so, and you are dithering to 16 bits every time, you are certainly losing quality on each edit.
What you should do is save the 24-bit .wav from Reason, and not subsequently overwrite it. When you need to make edits in Audition, read in the original file, do the edit and then render the result as a 16-bit dithered .wav file under a different name. If you think you may want to perform further edits and are not sure you can reproduce the previous ones, save each edit as a 24-bit fixed-point or 32-bit floating point version of the project in Audition, and start from there for the next edit. Only after you are happy with the final result do you delete the intermediate files, keeping the raw Reason output.
The basic rule is: don't edit a rendered result.
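A rough numerical sketch of why this matters, assuming simple TPDF dither at the 16-bit step (Audition's own dither will differ in detail): quantising to dithered 16-bit once adds a tiny, benign amount of noise, but doing it on every save lets that noise accumulate.

```python
# Sketch: repeated dithered 16-bit saves accumulate noise; dithering once,
# at the final render, does not. The signal and pass count are made up.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44100, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)            # stand-in for a mix (float)

def dither_to_16bit(x):
    """Quantise to 16-bit integers with +/-1 LSB triangular (TPDF) dither."""
    noise = (rng.random(x.shape) - rng.random(x.shape)) / 32768.0
    return np.round((x + noise) * 32767).astype(np.int16)

once = dither_to_16bit(signal) / 32767.0               # dithered one time only

multi = signal
for _ in range(6):                                     # six save/reopen cycles
    multi = dither_to_16bit(multi) / 32767.0

print("RMS error, 1 pass  :", np.sqrt(np.mean((once - signal) ** 2)))
print("RMS error, 6 passes:", np.sqrt(np.mean((multi - signal) ** 2)))  # larger
```

The error stays tiny in absolute terms, but there is no reason to let it grow when keeping a high-resolution intermediate costs nothing but disk space.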
Hi,
sorry I'm a bit late on the matter...
So, a .wav file is an uncompressed waveform description of the signal. It's a long sequence of n-bit "chunks" (samples), where "n" is the bit depth with which it was originally encoded. The ".wav" format includes a header (a non-audio part) where the signal format is described, together with optional metadata that I personally suggest not adding. IF you read/write/read/write... it any number of times, always in the exact same format (bit depth, sampling frequency, and byte order, i.e. LSB-first or MSB-first), you will NOT add any artifacts beyond the normal random read/write errors of the digital equipment (the computer), which are unavoidable but also, in general, inaudible...
IF, on the other hand, you resample it in any way, then YES, you will alter the signal. Note that this can be useful in some cases, for example if you apply some kind of FFT-based filtering and want it done at a higher bit depth. But your signal will never be the same again, even if you don't "touch" anything between reading and writing.
Note that cutting the beginning or the end of a ".wav" file simply removes some sample sequences, that's all, so the signal itself is not altered in any way. Remember the theory of errors: you will likely have random errors during the read/write process.
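For the curious, here is a small Python sketch of that header; the file name is a placeholder and odd-sized chunk padding is ignored for brevity. It walks the RIFF chunks and prints the format fields from the "fmt " chunk; the "data" chunk that follows holds the raw sample sequence.

```python
# Sketch: read the WAV header (the non-audio part) that describes the
# signal format; the "data" chunk holds the raw PCM samples.
import struct

with open("mix.wav", "rb") as f:                       # placeholder file name
    riff, _, wave_id = struct.unpack("<4sI4s", f.read(12))
    assert riff == b"RIFF" and wave_id == b"WAVE"
    while True:
        chunk_id, chunk_size = struct.unpack("<4sI", f.read(8))
        if chunk_id == b"fmt ":
            fmt, ch, rate, _, _, bits = struct.unpack("<HHIIHH", f.read(16))
            print(f"PCM: {fmt == 1}, channels: {ch}, rate: {rate} Hz, bits: {bits}")
            break
        f.seek(chunk_size, 1)                          # skip other (metadata) chunks
```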
Best of all is operating with pure PCM waveform files (the extensions vary; ".pcm" is common, ".cda" may be another), but then you have to tell your DAW yourself which signal format it is.
Hope it can be useful...
Regards!
Thanks for your detailed description, cloche! That all makes sense.
Now I know that I should post a new thread but maybe you would know something about this too:
I've been preparing the songs for my indie album for some time now and would like to deliver the finished album at decent quality, so I've done the mastering to the extent I'm able with my current skills: I limited the tracks to a 0 dB ceiling, normalized them, etc.
So last week I was checking the overall sound and how it translates on various systems, such as a home stereo, car CD player, small portable players, even a laptop and an iPod. I'm quite satisfied with the results and noticed no significant problems, but one thing keeps crossing my mind:
When I boosted the high frequencies on Windows Media Player's EQ or used "boost highs" on the iPod EQ, two tracks had this kind of "essing" on certain parts of the vocals, even though I used de-essing back when I was mixing. Fancier equalizers, such as the ones on the stereo or the car system, don't do this.
Do you think this is something I should be worried about? I don't think I can bring the tracks' volume down any further because of today's loudness standards (even though they're not screaming crazy now).
What do you think?
If you're boosting stuff on the listening system, I wouldn't be too concerned.
Why would you do that anyway? I like to listen to my systems flat, unless something needs a little help.
Regarding the actual issue:
You need to learn how to use a limiter properly, first.
I'll admit, I don't know how to use one proper myself.
Mastering and limiting to a 0 dB ceiling is generally not a great idea.
Especially if you're using cheaper software/plug-ins to do the job - that's probably where your "ess" artifacts are coming from.
If you really want good, mastered tracks...
I strongly suggest you lay off the limiter (just kiss it) and send your lower-level mixes to someone who gets paid to master.
I've done my own "mastering" w/ pretty decent software, and I'll tell you there's no comparison.
Do your job, let them do theirs.
Personally I never really use the normalize function at all. I know some really good studio techs who use normalize on individual tracks at mixing time, and I've also noticed that most of the people I've seen use normalize are not into tweaking compression combined with limiters. But I suggest getting rid of the normalizing habit and experimenting with your ears, using compression tools to create the same (or a more desirable) dynamic balance. This might open a can of worms, but you end up in control of your dynamic range, and it's very liberating.
Also, Soapfloats is right on the money about the "ess" sounds being overloads caused by the 0.0 dB ceiling; use a ceiling of -0.2 or -0.3 dB on the limiter instead. The problem is that cheaper-grade converters need all the headroom they can get to avoid errors (overs). Also, as you approach the ceiling, artifacts in the original that you hadn't noticed show up once mastering is done on the track, making minor sounds jump out in volume more easily.
Also, my personal belief about .wav files is that if the engineer is using bad/hacked software, the rendered files can end up corrupted. Meaning that when someone uses pirated software, the .wav files can be lower quality, and more errors will be found in the rendered, processed files. So be careful who you get your software from these days. And even legit software has bugs and, worse, sometimes creates corrupted files... Yay! Fun.
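As a quick numerical aside on those ceiling figures (a plain-Python sketch, numbers only for illustration): converting the suggested ceilings from dBFS to linear sample values shows how little margin 0.0 dBFS leaves.

```python
# Sketch: what the limiter ceilings mentioned above mean in sample values.
FULL_SCALE = 32767                      # largest positive 16-bit sample
for ceiling_db in (0.0, -0.2, -0.3):
    peak = 10 ** (ceiling_db / 20)      # dBFS -> linear (1.0 = full scale)
    print(f"ceiling {ceiling_db:+.1f} dBFS -> peak {peak:.4f} "
          f"({round(peak * FULL_SCALE)} of {FULL_SCALE})")
# A 0.0 dBFS ceiling leaves no margin at all: any later boost or converter
# overshoot pushes samples past full scale and they clip ("overs").
```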
Belated afterthought:
Also, normalizing to 0.0 dB might be part of the problem here too...
cloche, post: 345633 wrote: So, a .wav file is an uncompressed waveform description of the signal [...]
Some points:
(1) Writing back to an audio file using dither will always change the bit patterns in the file, degrading the audio, even if you have made no specific edits.
(2) Specific edits such as applying a fade-in/fade-out can cause re-computation of the whole file when rendering. If this is applied to a previously-rendered file, the audio will degrade.
(3) If there are "normal random read/write errors of the digital equipment", your equipment is faulty.
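A tiny NumPy sketch of point (1), assuming a simple +/-1 LSB dither (the exact dither a given editor applies will differ): a straight byte-for-byte re-save is transparent, but passing the same samples through a dithered 16-bit output stage changes the bit patterns.

```python
# Sketch: plain re-save vs. dithered re-write of the same 16-bit samples.
import numpy as np

rng = np.random.default_rng(1)
samples = (16000 * np.sin(np.linspace(0, 200, 44100))).astype(np.int16)

plain_copy = samples.copy()                             # straight re-save
print("plain copy identical    :", np.array_equal(plain_copy, samples))   # True

tpdf = np.round(rng.random(samples.shape) - rng.random(samples.shape))
redithered = np.clip(samples + tpdf.astype(np.int16), -32768, 32767).astype(np.int16)
print("dithered write identical:", np.array_equal(redithered, samples))   # False
```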
Some signs of saturation...
Hi,
if you normalized and, before that, mixed up to 0 dB, then applying equalization will have saturated (clipped) the tracks you mention.
More evolved programs/devices let you lower the dry signal before sending it to the equalization stage, but doing so AFTER your mastering obviously ruins all the care you took up to that point to keep the signal very clean...
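A one-line bit of arithmetic makes the point (the +3 dB figure is just an example): a peak already normalised to full scale cannot take any further boost without clipping.

```python
# Sketch: a peak at 0 dBFS plus any playback EQ boost has to clip.
peak = 1.0                               # sample normalised to full scale
boost_db = 3.0                           # e.g. a "boost highs" preset
boosted = peak * 10 ** (boost_db / 20)   # ~1.41 x full scale requested
clipped = max(min(boosted, 1.0), -1.0)   # what the output can actually deliver
print(f"requested {boosted:.2f}, delivered {clipped:.2f} -> audible distortion")
```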
Regards
@ Boswell:
I'm sorry, I don't agree at all:
3) Random in/out errors are NOT a sign of faulty hardware: they are part of the theory of errors. Sorry, it's a technical/electronic matter of fact. There does exist hardware of such low quality that in/out errors are routine rather than occasional, though.
1) I was talking about a simple retranscription of a waveform. If the program you use applies dithering without you asking for it, then I suggest you change your program.
2) I never said otherwise. A fade does modify the waveform and hence requires rendering. Simply truncating part(s) of the original waveform doesn't cause any rendering, at least if your program is halfway decent.
Regards
:smile:... an afterthought: The "problem" with hadling an audio
:smile:... an afterthought:
The "problem" with hadling an audio signal in the informatic world (computers and more in general appliances made of hardware / software combination) is that, back in the time when standards were defined (the Green, Orange and Red Books for audio-CD), the hardware was only able to deal with 2-channels 16-bits depth and 44.1 kHz sampling (well, not really, professional converters being able to do oversampling, but I simplify a little). That became the standard of the Red Book and all what was called "Hi-Fi" was based upon it, together with the algorithm used to sample, the Pulse-Code Modulation schema with Reed-Solomon error-correction code. You may note that there is not any mention to MSB or LSB, because this problem came after: when computer-based audio processing raised in importance: if I remember well, the standard in Wintel world is LSB while MAC world is standardized on MSB (it might be the opposite, the important thing is that one is the opposite of the other :smile: !).
It has to be remembered that Reed-Solomon algorithm for error-correction can only correct errors of 1 (one) BIT per sample per channel, i.e. if more than 2/44100=4.53e-5 errors/second occur, then the signal will stay (partially-) corrupted. In order to avoid this kind of corrupting, in informatics the data is greatly redundant. This doesn't happen in pure-audio. Pure-audio signal is extremely sensitive to random errors. You can have an idea of how much, with respect to informatics world, by knowing that a PCM CD-audio "sector" is big 2048 bytes, whether the exact same amount of useful bits is coded in a data-CD as a "sector" of 2356 bytes. If it is a mere matter of data preservation, it's better to save a project in any sort of "data-form" instead of transcripting it under any form corresponding to the Red Book.
Nowadays, the birth of new standards has remastered all the matter, but more or less the concepts above tend to keep valid. As long as you are working on a project, you should stick on the bit-depth and on the sampling frequency you chose AT START. All your chain (recording / editing) must stick on the exact same format.
Generally, the engines of all nowadays DAWs are much more powerful than any audio-reproduction device: many are able to deal with 64-bits depth and unlimited sampling frequency (well, the limit does exist: the power of the underlying computer to generate the samples in real-time...): none of these two characteristics are possible with any D/A or A/D converters. In addition, their file-format does is redundant to some extent, in addition to the computer's filesystem (especially NTFS) being redundant on its own.
Hope I wasn't too boring...
Regards
I've not really messed with Audition much, but here's my understanding of this sort of thing with any DAW... When you make edits, whether cutting something out, changing levels, adding effects, etc., you are not changing the source recording in any way. You are only telling the computer what you would like IT to do to the source recording. This is why we have the joy of "non-destructive" editing. Ultimately, when you render the finished recording, the computer applies all of the commands you gave it to the source recording (usually a wav file) and gives you a stereo wav mixdown file. The original recording is still left untouched. Does this make sense? Maybe those with bigger brains and better explanatory skills can jump in... Andy
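A toy sketch of the idea above (the names and the edit operations are invented for illustration): the edits live in a list of instructions, the source samples are never overwritten, and everything is applied only when the mixdown is rendered.

```python
# Sketch: "non-destructive" editing as a list of instructions applied at render.
import numpy as np

source = np.sin(np.linspace(0, 100, 44100)).astype(np.float32)  # stand-in for a recorded track

edit_list = [                      # stored as data, not applied to `source`
    ("trim_start", 2000),          # drop the first 2000 samples
    ("gain_db", -3.0),             # turn the clip down by 3 dB
]

def render(src, edits):
    out = src.copy()               # work on a copy; the source stays untouched
    for kind, value in edits:
        if kind == "trim_start":
            out = out[int(value):]
        elif kind == "gain_db":
            out = out * (10 ** (value / 20))
    return out

mixdown = render(source, edit_list)
print(len(source), "source samples remain intact;", len(mixdown), "samples rendered")
```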