
Are VSTs within a DAW subject to any distortion when their digital audio is rendered?

I know some basics of analog-to-digital conversion, and while still learning about that, I am now wondering whether any distortion can arise in a digital-only realm.

Conceptually, digital audio is composed of 0s and 1s and the order of the two. I don't know exactly how a VST within a DAW expresses those 0s and 1s, or whether they are set and equal each time or are affected in some way. If they are not disturbed, is it fair to say each render of the mix or master will be equal, its 1s and 0s unchanged from render to render? With the audio originating from VSTs, whether from samples or synthesis, or from effects applied to them, is there any potential for digital distortion between the sound's origin and the rendered file?
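(If one wanted to test this, a minimal Python sketch along these lines could compare two bounces byte for byte; the file names here are hypothetical.)

```python
# Hash two rendered files to check whether they are bit-identical.
# The file names are placeholders for two bounces of the same project.
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if file_digest("render_take1.wav") == file_digest("render_take2.wav"):
    print("The two renders are bit-identical.")
else:
    print("The renders differ somewhere, even if they sound the same.")
```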

When a VST is a synthesizer or an effect, does it compute its digital sound as an isolated software algorithm, or is its output anywhere affected by hardware?

In the case of samples, I might think those samples are fixed in their 0-and-1 identity, and rendering will not alter them.

Synthesis and effects, on the other hand, are not yet audio files; inside the DAW they exist in an in-between state as waveform and waveform-alteration algorithms. Once they leave that state, they are essentially samples.

Thus the digital path from VST to render joins the individual sound-wave algorithms and files into a single-file sound state, and in that sense it is a conversion. In fact, it is a conversion of conversions: any waveform algorithms at work are themselves conversions as they run through their parametric algorithms, either to create a sound (synthesis) or to respond to and alter a stimulus of synthesis, samples, or other alterations (effects).
I might think these conversions could be disturbed along their path, but I might think not.

The render will have a sample rate, a sample format (bit depth), and a channel count; in music production it is usually 44.1 kHz, 16-bit, stereo. So when and how are non-sample digital sounds converted to a file in those formats? Is it an isolated software algorithm unaffected by hardware, or is hardware involved?
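(As a rough illustration of that last step, here is a minimal Python/NumPy sketch, not any DAW's actual code, of a synthesized float signal being quantized to 16-bit samples entirely in software:)

```python
# A synthesized (non-sample) sound quantized to 16-bit PCM at 44.1 kHz.
# This is pure software arithmetic; no hardware converter is involved.
import numpy as np

SAMPLE_RATE = 44100  # Hz

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE        # one second of timestamps
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t)    # float "synth" output

# Scale to the int16 range and round: this rounding is the only
# "distortion" the format conversion itself introduces.
pcm16 = np.round(signal * 32767.0).astype(np.int16)

error = signal - pcm16 / 32767.0
print("max quantization error:", np.abs(error).max())  # ~1.5e-5, about -96 dBFS
```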

Comments

audiokid Tue, 12/20/2016 - 23:46

Holy doodle that was quite a question.
I don't know where to start.

Student, post: 445916, member: 50039 wrote: Are VSTs within a DAW subject to any distortion when their digital audio is rendered?

  1. I think it's safe to say that it's all fine and dandy until you start changing sample rates.
  2. Not all DAWs or VSTs get the math right in a bounce, for example 96k/24 > 44.1/16 (see the sketch below).
  3. Some VSTs are badly coded.
    But is that what you asked?

    Read this five times and say it again: http://recording.org/threads/clock-jitter.45012/
    Boswell, MrEase, what say you?
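(For anyone who wants to see the math in point 2, here is a rough Python sketch, assuming SciPy is available, using its polyphase resampler; a DAW's bounce does something equivalent, with quality depending on its filter design.)

```python
# 96 kHz -> 44.1 kHz sample rate conversion with SciPy's polyphase resampler.
# 44100/96000 reduces to 147/320, so we upsample by 147 and downsample by 320.
import numpy as np
from scipy.signal import resample_poly

SRC_RATE, DST_RATE = 96000, 44100

t = np.arange(SRC_RATE) / SRC_RATE
x = np.sin(2 * np.pi * 1000.0 * t)      # 1 kHz test tone, one second at 96 kHz

y = resample_poly(x, up=147, down=320)  # anti-alias filtering + rate change
print(len(x), "->", len(y))             # 96000 -> 44100 samples
```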

Brother Junk Wed, 12/21/2016 - 07:47

Student, post: 445916, member: 50039 wrote: Are VSTs within a DAW subject to any distortion when their digital audio is rendered?

If you are talking about a virtual instrument only (that's what some people mean when they say VST), it depends on the instrument. E.g. Vienna Symphonic Library is amazing, but it's fairly basic: if you want reverb, it has to come from a different plugin (pi); if you want excitation, it's another plugin. If your question is, "Is the virtual instrument file subject to change when being rendered with the pi chain?", the answer would be yes, because it's creating a new file.

But you are using the word "distortion" and I'm not positive what your definition of it is here. What I believe happens is just the rendering of a new file, new code. And because of the pi, there may be small errors in there, but it shouldn't be anything audible or noteworthy. I wonder if I can check that in a couple of files somehow... (to see the coding).
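(One way to check it, sketched in Python with the third-party soundfile library and made-up file names, is a null test: subtract one render from the other and look at what's left.)

```python
# Null test: load two renders and measure the peak of their difference.
# "soundfile" is a third-party library; the file names are hypothetical.
import numpy as np
import soundfile as sf

a, rate_a = sf.read("piano_render_1.wav")
b, rate_b = sf.read("piano_render_2.wav")
assert rate_a == rate_b and a.shape == b.shape

residue = a - b
peak_db = 20 * np.log10(max(np.abs(residue).max(), 1e-12))
print(f"peak difference: {peak_db:.1f} dBFS")  # very low = inaudible errors
```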

But with Kontakt, a lot of the pi's we would use (reverb, chorus, delay, whatever) are inside the virtual instrument. So you can play/record the file just once if you want to: no pi chain, no extra rendering (unless you add one after). So there are fewer opportunities for errors.

I can tell you one thing about this part, and I know it because I had to do it by hand way back when. This is educational... I'm not sure exactly how it works in your (or any) DAW/DA.

Student, post: 445916, member: 50039 wrote: Conceptually, digital audio is composed of 0s and 1s and the order of the two.

Only if it's binary, and at the machine level it is: bits, in your DAW, my DAW, everyone's DAW. But nobody writes it out as 0s and 1s, because that notation is grossly inefficient. What you see instead are compact notations for the same bits, like hexadecimal, which uses 0-9 and A-F so that one character stands for four bits. Think of it sort of like packet sizes: a 16-symbol encoding tells you far more per character than a 2-symbol one. So if you were able to look at your audio digitally, it might read 1F8D3C224A4B90, which you could break down into 0s and 1s, but it would be a long chain.

Binary code means 0 and 1; "digital" does not. It's kind of like saying American currency is the penny. Every denomination can be broken down into pennies; that's the basest form, and everything is built around how many pennies it equals. But we have $1, $5, $10, $20, $50, $100 bills. Those can be interpreted as a number of pennies, but we don't need to be that simple anymore, or grossly inefficient, carrying all those pennies around with us; we use nickels, dimes, quarters, and all the bills. Sort of make sense? I'm explaining this because maybe it will help you figure out what you're trying to figure out. Conceptually, which is what you stated, you are absolutely correct: that's how it started. But I haven't seen it expressed that way in practice, ever actually, and I started in '98.
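(To make the notation point concrete, here is a tiny Python snippet showing that hex like that is just a compact spelling of the same bits:)

```python
# One 16-bit value written three ways: hex is shorthand for binary,
# not a different kind of data.
sample = 0x1F8D                  # hexadecimal notation
print(sample)                    # 8077 in decimal
print(format(sample, "016b"))    # 0001111110001101: the underlying bits
print(hex(sample))               # 0x1f8d: four hex digits = sixteen bits
```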

Student, post: 445916, member: 50039 wrote: I don't know exactly how a VST within a DAW expresses those 0s and 1s

If it's binary, you have two options, 0 or 1.

Student, post: 445916, member: 50039 wrote: or whether they are set and equal each time or are affected in some way.

You will want to check everything I say, because I'm coming at this from a different background than most here, and my knowledge might be misapplied. But it should be equal each time unless it's a floating/variable bit depth. The general consensus here (which I now agree with) is that the less sample rate conversion, the better. That process is what leaves the most room for the coding errors you are talking about.

I don't use Logic a lot, but in Logic, every now and then, I have an issue where files start to sound like they have a tremolo effect on them, almost like a fluttering sound. And I realized that the sessions it happened with were the ones where the sample rate had changed. I didn't change it intentionally: Logic will work without an outboard processor, so it's an easy laptop composition DAW. But when I took a session to the studio, it would go from 24/48 to 24/96, and then I could come home and work some more, but it would be back to 24/48. It was only those sessions that I had a problem with, and to be fair, I didn't know that was the problem until I came here. So I'm fairly certain that audiokid, Boswell etc. have it right: just don't change your sample rate around a lot and you should be fine. I haven't had that problem with Logic since.
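(A concrete example of the kind of rounding error conversions can accumulate: floating-point addition is not even associative, so two mathematically identical processing orders can produce different bits.)

```python
# IEEE-754 doubles: the same sum computed in two orders differs in the
# last bits, which is why repeated float processing is never exactly lossless.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False
print((a + b) + c, a + (b + c))     # 0.6000000000000001 vs 0.6
```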

Student, post: 445916, member: 50039 wrote: When a VST is a synthesizer or an effect, does it compute its digital sound as an isolated software algorithm, or is its output anywhere affected by hardware?

If a hardware chain is used, the hardware will affect it. But I'm not sure I know what you mean.

Student, post: 445916, member: 50039 wrote: In the case of samples, I might think those samples are fixed in their 0-and-1 identity, and rendering will not alter them.

I think maybe I'm not understanding you, so I'll leave it here. By "rendering" I'm assuming you mean: you have a piano track, you take a pi and add reverb, you bus it to an armed track, hit record, and the new piano track plus the reverb plugin is now recorded as a sum. You could then delete the original piano track if you wanted to, which would free the pi and give you more processing power. That's what I've always understood rendering to mean.

But if you did what I just said, you end up with a new file: the sum of the algorithm making the sound and the reverb algorithm. You are essentially resampling it, giving it new information and values. This is where I'm furthest out on the logic branch, but that is what would make sense to me. After rendering, the file simply could not be the same, or it would sound the same, and it doesn't. The rendered file will have both the synthesized piano and the reverb, and a sampling of that combination will be taken and values distributed... I think, lol.
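(For what it's worth, here is a toy Python sketch of that idea, a stand-in synth plus a stand-in reverb summed into a brand-new signal; it is an illustration, not how any particular DAW renders.)

```python
# Toy "render": a synthesis algorithm plus an effect algorithm summed
# into one new array, which a DAW would then write out as a new file.
import numpy as np

RATE = 44100
t = np.arange(RATE) / RATE

dry = np.sin(2 * np.pi * 220.0 * t) * np.exp(-3.0 * t)   # stand-in "piano"

rng = np.random.default_rng(0)                 # stand-in "reverb": a decaying
decay = np.exp(-8.0 * np.arange(4410) / 4410)  # noise burst used as an
ir = rng.standard_normal(4410) * decay         # impulse response
wet = np.convolve(dry, ir)[: len(dry)] * 0.02

render = dry + wet                   # the "sum" described above
print(np.array_equal(render, dry))   # False: the rendered file is new data
```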

Student, post: 445916, member: 50039 wrote: So when and how are non-sample digital sounds converted to a file in those formats?

Not sure I understand. I think the answer to what you are asking is "after you hit record." If it's created a clip, sound, or whatever your DAW registers it as (you see a waveform), it's done. If you can export it (which you usually can after recording), then it has the info you are asking about.
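(To tie this back to your format question, here is a short Python sketch with the standard-library wave module; the exported file's header is where the sample rate, bit depth, and channel count you listed actually live.)

```python
# Write a one-second 16-bit stereo WAV: the header stores exactly the
# three parameters the question lists (rate, sample format, channels).
import wave
import numpy as np

RATE, CHANNELS, SAMPWIDTH = 44100, 2, 2   # 44.1 kHz, stereo, 2 bytes = 16 bit

t = np.arange(RATE) / RATE
mono = np.round(0.5 * np.sin(2 * np.pi * 440.0 * t) * 32767).astype(np.int16)
stereo = np.column_stack([mono, mono])    # duplicate to left/right

with wave.open("export.wav", "wb") as f:
    f.setnchannels(CHANNELS)
    f.setsampwidth(SAMPWIDTH)
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())       # interleaved L/R samples
```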

Most of the above is how I think it would logically work, based on the knowledge I have. I'm not saying all of the above is, for certain, how it works; I'm telling you up front that I'm not positive. But maybe what I know will be enough to fill in what you want to know.

The only thing I'm positive of is the binary/digital distinction I presented: they aren't the same thing. Morse code, for example, is technically ternary rather than binary: three values, a dot, a dash, and a space, and the combinations thereof. But it's not "digital" in the way we are talking about. I had to learn how to convert binary, base-3, base-4, octal, and hex for my job at Verizon, but it was purely academic. I never actually saw anything written in binary, and like I said, that was in '98.

When did I get so old?

pcrecord Sun, 12/25/2016 - 12:34

Student, post: 445916, member: 50039 wrote: Are VSTs within a DAW subject to any distortion when their digital audio is rendered?

Short answer is YES !!
I should add: any time you process an audio file, whatever the process is, the audio will be altered.
Whether it is altered in a seamless fashion or not depends on the code of the software.
Most plugins and software are designed to operate within certain boundaries; some will produce noticeable artifacts when approaching those boundaries, and some only when crossing them.
For example, the majority of plugins are designed to receive around -18 to -10 dB of input signal. It's not that they will fail otherwise, but they will sound better with those input levels.
It is particularly true of emulation plugins: send too hot a signal to a Pultec EQ simulation and you'll get different results...
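(A quick Python illustration of why, using tanh as a stand-in for a modeled circuit: a hotter input does not just get louder, it changes the harmonic content.)

```python
# A nonlinear "emulation" stage (tanh standing in for modeled circuitry):
# the ratio of 3rd harmonic to fundamental grows with input level.
import numpy as np

t = np.arange(44100) / 44100
tone = np.sin(2 * np.pi * 100.0 * t)       # 1-second, 100 Hz test tone

for gain_db in (-18.0, 0.0):
    x = tone * 10 ** (gain_db / 20)        # set the input level
    y = np.tanh(x)                         # level-dependent waveshaping
    spectrum = np.abs(np.fft.rfft(y))      # 1 Hz per FFT bin here
    ratio = spectrum[300] / spectrum[100]  # 3rd harmonic vs fundamental
    print(f"{gain_db:+.0f} dB input -> 3rd harmonic ratio {ratio:.5f}")
```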

My conclusion is yes: the more you manipulate the audio, the more degradation you get.
This is why a better tracking audio path combined with minimalist mix processing is often the best route to success.

Brother Junk Sun, 12/25/2016 - 15:34

pcrecord, post: 446011, member: 46460 wrote: Short answer is YES !!
I should add: any time you process an audio file, whatever the process is, the audio will be altered.
Whether it is altered in a seamless fashion or not depends on the code of the software.

He said it much more concisely than I did. I think the word getting in the way of some clarity here is "distortion" vs. "altered." The file would be altered (it has to be), but distorted? I guess that would depend on your definition of distortion.

I'll try to remember to post at least one of those files tomorrow in this thread so you can see what I consider a "distorted" render.