ok, so I've always converted my sh*t to constant bitrate 320kbps 44kHz
variable just didn't sound like it would be better because I associate the word with being inconsistent.
but I decided to look into it today for the hell of it and read that variable is actually better quality, and that you should only choose constant bitrate if you want to keep the file size smaller.
so I tried both with the same song, and I think the variable one does sound better. but when I compared the file sizes, the variable mp3 was smaller? how is this so? smaller leads me to believe it can't be higher quality.
if anyone could let me know what's up here I'd appreciate it
Comments
okay so variable is the way to go then, if I get better quality and apparently even a lower file size to boot.......thank you very much for the help, Mises!
I guess it also confused me to see that the variable mp3 was smaller, because I had just finished reading that they are bigger in file size than constant mp3s and that you should only go with constant if you want a smaller file size
sampsoniter wrote: okay so variable is the way to go.......thank you very much for the help, Mises!
I guess it also confused me to see that the variable mp3 was smaller, because I had just finished reading that they are bigger in file size than constant mp3s and that you should only go with constant if you want a smaller file size
Just to make two comments.
First: one problem I believe with variable bitrates is that some audio players have trouble "fast forwarding" and "rewinding" when you're playing a variable bitrate song. I am not sure if all players do this... but I know many players have this problem, because variable bitrates are a more complicated thing and I think it's harder to program a fast-forward/seek function for variable bitrate files.
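A rough way to see why seeking is harder (just a sketch of the idea, not how any particular player actually implements it; real players parse MP3 frame headers or a Xing/VBRI seek table, and the numbers below are made up):

# Sketch: why seeking by time is trivial for CBR but not for VBR.

def cbr_seek_byte(seconds, bitrate_kbps=320):
    """With a constant bitrate, the byte offset is simple arithmetic."""
    return int(seconds * bitrate_kbps * 1000 / 8)

def vbr_seek_byte(seconds, frame_index):
    """With a variable bitrate you need some kind of index mapping each
    frame's start time to its byte offset, because frame sizes differ."""
    offset = 0
    t = 0.0
    for frame_bytes, frame_duration in frame_index:
        if t + frame_duration > seconds:
            break
        offset += frame_bytes
        t += frame_duration
    return offset

# CBR: jumping to 60 s is one multiplication.
print(cbr_seek_byte(60))                      # 2400000 bytes

# VBR: we have to walk (or pre-build) a table of per-frame sizes.
fake_index = [(417, 0.026), (209, 0.026), (627, 0.026)] * 1000
print(vbr_seek_byte(10, fake_index))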
Secondly, a variable bitrate file could actually be larger than a fixed bitrate file... it kind of depends. It really is astoundingly technical and depends on a lot of different things.
There is a difference between "theoretical" versus how it gets implemented in an actual algorithm. What I have told you is that in general variable bitrate codes are better. That's theoretically, though. Sometimes when algorithm designers design stuff... the end product deviates from "theoretical" to some extent. I have not studied MP3 in sufficient detail (though I'm pretty familiar with how it works) to make broad sweeping statements about it... but I would think that variable bitrate files should be smaller and produce slightly better audio, on average. But it all depends on how the designers of MP3 decided to implement their version of variable bitrate. Theoretically, it's supposed to be superior, "by the book".
i don't understand the previous answer but my understanding is that variable bitrate adjusts the space used based on the complexity of the music. so it'll bump up the encoding for stuff that needs it and take it easy on the memory storage when that will result in just as good sound. hence better quality for less memory used.
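roughly the idea, in toy form (this has nothing to do with the real mp3 psychoacoustic model — the complexity measure and thresholds here are completely made up, it's just to show "busy frames get more bits, simple frames get fewer"):

import numpy as np

def frame_complexity(frame):
    # Crude stand-in for a psychoacoustic analysis: how much the signal
    # changes within the frame.
    return np.std(np.diff(frame))

def choose_bitrate(frame, rates=(96, 128, 160, 192, 256, 320)):
    c = frame_complexity(frame)
    # Map complexity onto the available bitrates (thresholds are made up).
    idx = min(int(c * 20), len(rates) - 1)
    return rates[idx]

sr = 44100
t = np.arange(sr) / sr
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)     # simple sine, easy to encode
busy = np.random.randn(sr) * 0.3              # noisy, "complex" frame

print(choose_bitrate(quiet))   # a low rate is enough
print(choose_bitrate(busy))    # gets bumped up

# A CBR encoder would spend 320 kbps on both frames; the VBR file ends up
# smaller on average while keeping quality where it matters.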
this sounds impossible and highly unlikely to me but so does this computer i'm typing on.
i use aac 192 vbr to rip cds but i'd go with 220 if i could. i started at 320 constant but 192 vbr sounds pretty darn good to me.
mugtastic wrote: i don't understand the previous answer but my understanding is that variable bitrate adjusts the space used based on the complexity of the music.
I haven't slept in like two days... so my previous answer probably wasn't particularly helpful. Now that I reread it, it could have been more to the point.
Mises, I really read your observations with great interest. And so, all very good stuff.
What was not really mentioned here is that the 128, 160, 192, 224, 320, etc., bit rates really determine the actual high frequency response capabilities. For instance, 128 kilobits per second gets you high frequency response to 15kHz. 320 kilobits per second gets you high frequency response to a little better than 20kHz.
So, tell me how much you really think you need beyond 15kHz? Of course, you want the ultimate fidelity but what is practical? 16 bits gets you 96 dB dynamic range. You need 140 dB of dynamic range? Right, of course not. All your stuff is already highly dynamically compressed and you have no dynamic range and so, who needs 24-bit MP3?? Right. After all, this is a lossy compression scheme to begin with. Something meant for easy transport over the Internet.
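(If you want to see where that 96 dB comes from, a quick back-of-envelope check using the usual ~6 dB per bit rule of thumb for linear PCM:)

# Rule of thumb for linear PCM dynamic range: about 6.02 dB per bit
# (add ~1.76 dB if you compare a full-scale sine against the quantization noise).

def dynamic_range_db(bits):
    return 6.02 * bits

print(round(dynamic_range_db(16)))   # ~96 dB
print(round(dynamic_range_db(24)))   # ~144 dB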
One of the reasons why 128 kb/s is popular is because that's about the frequency response of both television and FM radio audio (15kHz). No need to go any further. Besides, most folks over 40 don't hear much beyond 15kHz anyhow.
In a situation such as a live radio broadcast that relies on ISDN telephone lines and a real-time MP3 coder, again, it's not usually run beyond 128 kb/s, constant bit rate, since they need the lowest latency and real-time operation. The variable bit rate is really only preferable when a computer is utilized, since it gives your computer the time to do better math, look back and look forward, and that's where the advantages come from, which really doesn't work in real-time. So since you're not doing a live broadcast, you really can't do badly with 128 kb/s, variable bit rate. And you really should not notice much of a difference between your master and the MP3, unless there are other problems with your mixing or signal chain. It will certainly not have the "air" that your master has and you may hear some artifacting, in the form of "underwater flanging"? It certainly serves a purpose. Depending on the purpose, I have delivered MP3 files from 16-bit, 11kHz mono sampling to 16-bit, 48kHz stereo sampling. Largely depends on whether it's music, spoken word, archival purposes.
Serving up good 128 kb/s MP3's
Ms. Remy Ann David
RemyRAD wrote: Mises, I really read your observations with great interest. And so, all very good stuff.
What was not really mentioned here is that the 128, 160, 192, 224, 320, etc., bit rates really determine the actual high frequency response capabilities. For instance, 128 kilobits per second gets you high frequency response to 15kHz. 320 kilobits per second gets you high frequency response to a little better than 20kHz.
So, tell me how much you really think you need beyond 15kHz?
192k sampling is almost certainly pointless... I'll pretty much come down firmly on that side of the debate. 96k pointless? That's kind of where I draw the line.
I find myself surprised at how widely accepted recording at 44.1kHz is among professionals, given a couple of recent surveys I have seen on online forums.
That kind of shocked me because I had a preconceived notion that most people today were recording at 96, or at least 48kHz (though there's the old debate of whether 48 doesn't end up being worse than 44.1 when you do the sample rate conversion).
It's not so much that we can or should reasonably expect any adult to even be hearing anywhere near that range (i.e. over 15kHz); it's more that the technology is still kind of "primitive" at this point (even though we think it's super advanced).
It's just as true today as it was 10 or 20 years ago that the filters we use in A/D's (to eliminate everything outside the audio passband) are not perfect, and that's why they started creating 48 and then 96 kHz sampling rates to begin with... to relax the filter requirements so that the signal in the passband would be cleaner and wouldn't suffer from artifacts from steep roll-off filters. So, I think 96 kHz sampling is the way to go for the time being, until filter technology advances for another 20 or 30 years.
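To put rough numbers on that (just back-of-envelope, assuming the audio passband tops out around 20 kHz):

# How much room the anti-aliasing filter has between the top of the audio
# band (~20 kHz) and the Nyquist frequency at each sample rate. A wider
# transition band means a gentler, better-behaved filter.

passband_hz = 20_000

for fs in (44_100, 48_000, 96_000):
    nyquist = fs / 2
    transition = nyquist - passband_hz
    print(f"{fs/1000:g} kHz sampling: Nyquist {nyquist/1000:g} kHz, "
          f"transition band {transition/1000:g} kHz")

# 44.1 kHz sampling: ~2 kHz of transition band
# 48 kHz sampling:    4 kHz
# 96 kHz sampling:   28 kHz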
Did I just digress into another topic??... Hmmm .. yes.... I think I did.
sampsoniter wrote: ok, so I've always converted my sh*t to constant bitrate 320kbps 44kHz
There is only a very loose correlation between file size and quality of audio.
Variable bitrate schemes were invented decades ago by source coding theorists (source coding is part of the field of information theory, and if you're really interested in it, you can read some great articles on CiteSeer, which is a repository for published papers).
The benefits of variable bitrate have been widely implemented in formalized standards, as well as in many variable-rate schemes proposed by enthusiastic grad students that were never formally adopted as standards. The methodology is used in many fields relating to video and audio, including telecommunications, though the exact implementation is a little different in each.
As a general rule, variable bitrates almost always lead to a higher fidelity and/or a lower average bitrate in comparison to a fixed rate coder.
Why then are fixed-codelength schemes like PCM so popular to this day, even though PCM was invented about 70 years ago by Reeves and is arguably primitive by today's standards and the level of research in the information theory field? The answer is simple and disappointing: practicality and simplicity, as well as predictable and uniform performance that can be estimated fairly accurately with simple mathematical equations, which has a lot to do with how standards get adopted.
Fixed bitrates using a memoryless encoding scheme (PCM) are an inherently inferior process, as is uniform quantization, which on average produces lower fidelity for a given wordlength.
The best quantization scheme is one in which the source's statistical characteristics are matched to the quantization bins to minimize the rate with respect to a distortion criterion.
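If you want to see that idea in action, here is a minimal sketch of the classic Lloyd-Max style iteration (with a made-up Gaussian source; it has nothing to do with MP3 specifically). The reproduction levels migrate toward where the source actually puts its probability mass instead of being spaced uniformly:

import numpy as np

# Lloyd-Max style quantizer design: alternate between (1) assigning samples
# to the nearest reproduction level and (2) moving each level to the mean of
# the samples assigned to it.

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, 50_000)         # stand-in source: Gaussian samples

levels = np.linspace(-3, 3, 8)                # start from a uniform 3-bit quantizer
for _ in range(50):
    bins = np.argmin(np.abs(source[:, None] - levels[None, :]), axis=1)
    for k in range(len(levels)):
        if np.any(bins == k):
            levels[k] = source[bins == k].mean()

quantized = levels[np.argmin(np.abs(source[:, None] - levels[None, :]), axis=1)]
print(np.round(levels, 2))                    # levels crowd toward zero, where the mass is
print("MSE:", round(np.mean((source - quantized) ** 2), 4))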
Furthermore, an even better quantizer (not practical for real-time conversion, but useful for non-real-time systems or where a small latency is acceptable) is one that exploits the property of memory. In other words, the choice of output codewords depends not just on current input conditions, but also on past conditions, governed by some finite-state type process. The longer the block length, the better (lower distortion while requiring fewer bits), though at the expense of delay. Channel coding uses the same principle, where a longer constraint length in the encoder yields better performance, likewise with complexity and delay as the trade-off for performance.
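As a much simpler concrete illustration of "memory" than the finite-state schemes described above, here is a DPCM-style sketch (toy numbers, not any real codec): each sample is predicted from the previous reconstructed sample, and only the small prediction error is quantized, so a correlated source needs far fewer bits per sample for the same distortion.

import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.05, 10_000))    # toy source with strong sample-to-sample correlation
step = 0.05                                   # quantizer step size (same for both schemes)

# Memoryless: you would have to quantize each sample value directly,
# covering the full range of x.

# With memory: quantize only the difference from the previous reconstruction.
errs = []
prev = 0.0
for sample in x:
    e = sample - prev                         # prediction error
    q = np.round(e / step) * step             # quantize the error
    prev = prev + q                           # reconstruction the decoder can also form
    errs.append(e)

print("value range to code, memoryless :", round(x.max() - x.min(), 2))
print("value range to code, with memory:", round(max(errs) - min(errs), 2))
# The prediction errors span a far smaller range, so for the same step size
# (same distortion) they can be represented with far fewer bits per sample.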