
I just produced a new version of the decoder. It can do some amazing things, needed because lots of material (mostly from before the 1990s) has been distributed with the DolbyA encoding still intact. That leaves a harsh, compressed-HF, sheen-y, overly crisp kind of sound on the recording. My decoder does a very credible job of a true correction, and is being used professionally as a DolbyA decoder. It was compared with a true DolbyA unit (actually several of them) and another software decoder; this software came out significantly closer to the results of a true DolbyA decode than the other SW.

The program is not a trivial little toy; it does a lot of work both to clean up the audio and to clean up the intermodulation and other evils that occur oh so easily when doing nonlinear processing on audio. I have posted the program da-win-06may2018A.zip, a file containing some hints (DecoderA.pdf), and some EYE-OPENING example recordings. Repo location: https://spaces.high…

The decoder is free, with no timeouts or anything like that. It works only on recent Intel or AMD machines -- the da-win version will work on newer Atoms like the Silvermont (and will work on the newest CPUs, but doesn't take advantage of them); the da-avx version takes better advantage of the more advanced vector operations. 95% of the CPU usage in this program is vector math -- fairly advanced code. It is MUCH more processing than a simple DolbyA emulator because it has to sound GOOD. Simple implementations have so much intermod that people will say things like: there is that harsh computer-processor sound again. This code is less distorting by far than even a real DolbyA.


Comments

John S Dyson Sun, 05/06/2018 - 21:20

John S Dyson, post: 456872, member: 50354 wrote: I just produced a new version of the decoder. It can do some amazing things, needed because lots of material (mostly from before the 1990s) has been distributed with the DolbyA encoding still intact. That leaves a harsh, compressed-HF, sheen-y, overly crisp kind of sound on the recording. My decoder does a very credible job of a true correction, and is being used professionally as a DolbyA decoder. It was compared with a true DolbyA unit (actually several of them) and another software decoder; this software came out significantly closer to the results of a true DolbyA decode than the other SW.

I need to update the comment to explain something about the sound quality. You'll notice that the sound is much more dead -- but the highs are really there -- it's just that I did NOT EQ the material. After decoding, the result is essentially a 2-track master -- and it hasn't been 'brightened up'. If you add a little bit of treble -- say 1-3dB at 6kHz, and maybe a bit more here or there -- the sound will brighten up, BUT WILL BE VERY NATURAL SOUNDING. The compressed HF of a DolbyA encoding is not really pretty. Also, the kind of compression used in DolbyA causes a flattening of the spatial depth. The detector used is NOT RMS, but a kind of peak of averages -- and the linear relationship between the attack/decay and the gain control causes an extreme flattening. If there were a mathematical squaring operation before the attack/decay, it would sound more like RMS -- though even then it would flatten somewhat, because compression tends to do that.
So, to 'unflatten' -- this decoder (which is really a very special purpose expander) can give the depth back.
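The detector distinction above can be sketched in a few lines of Python. This is an illustrative toy only, NOT the decoder's actual detector code; the 2 ms / 60 ms constants are assumed nominal values for demonstration:

```python
import numpy as np

def follower(x, attack_s, decay_s, fs, square_first=False):
    """One-pole attack/decay smoothing of the rectified (or squared) signal.

    With square_first=False this behaves like the 'peak of averages'
    style detector described above; with square_first=True, squaring
    before the attack/decay makes it behave more like an RMS detector.
    Illustrative toy only -- not the decoder's actual detector.
    """
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_dec = np.exp(-1.0 / (decay_s * fs))
    drive = x * x if square_first else np.abs(x)
    out = np.empty_like(drive)
    s = 0.0
    for i, v in enumerate(drive):
        c = a_att if v > s else a_dec   # fast charge up, slow decay down
        s = c * s + (1.0 - c) * v
        out[i] = s
    return np.sqrt(out) if square_first else out

fs = 48000.0
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * 1000 * t)                # 1 kHz test tone
linear = follower(x, 0.002, 0.060, fs)          # peak-of-averages flavor
rmslike = follower(x, 0.002, 0.060, fs, True)   # squared before smoothing
```

The linear (unsquared) detector reacts to the rectified waveform directly, which is part of what drives the flattening effect described above.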

The reason why I mention the depth: if you listen to "MakeItWithYou" and "BabyImAWantYou", you'll notice that the raw (undecoded) version has a lot of compressed treble (luckily fairly clean), but sounds spatially flat. The DAdecode version loses a lot of that brightness (that is where a bit of EQ or tone control might help), but the result has its depth restored. The music sounds much more real after being decoded.

I did not EQ the results, basically leaving the raw sound of the material. If I were going to produce a listening copy, I would add about 1.5dB at 6.5kHz -- and if I added a bit more, either at 6.5kHz or 9kHz, I'd probably take it back down at 12kHz. That will bring the brightness to the level that one might expect during casual listening.

Because the decoder has SO LITTLE distortion, EQ can be applied fairly freely after the decode operation. Earlier versions (even with significant intermod mitigation) still had too much distortion for my taste -- though probably less than a real DolbyA in many regards, even before the very major recent improvements.

Nowadays we expect ZERO distortion -- and that is incredibly difficult to achieve in a DSP dynamics processor -- none of the textbook rules are adequate, and a simple emulation of even an RC attack or decay time constant will give you a lot of hash and distortion products.
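The "hash" from a textbook RC envelope is easy to demonstrate: the envelope ripples at the signal frequency, and applying that rippling gain sample by sample amplitude-modulates the audio, creating sideband products. A sketch (my own toy, not the decoder's method):

```python
import numpy as np

fs = 48000
t = np.arange(fs // 2) / fs
x = np.sin(2 * np.pi * 100 * t)        # low-frequency tone: worst case for ripple

# "Textbook" one-pole RC follower on |x| (2 ms attack, 60 ms decay)
a_att, a_dec = np.exp(-1 / (0.002 * fs)), np.exp(-1 / (0.060 * fs))
env = np.empty_like(x)
s = 1e-9
for i, v in enumerate(np.abs(x)):
    c = a_att if v > s else a_dec
    s = c * s + (1 - c) * v
    env[i] = s

# Apply 2:1 compression with the rippling envelope, sample by sample
y = x / np.sqrt(np.maximum(env, 1e-9))

# The envelope ripple amplitude-modulates the tone, creating distortion
# sidebands away from 100 Hz (at 300 Hz, 500 Hz, ...).
seg = y[fs // 4:]                      # skip the attack transient
spec = np.abs(np.fft.rfft(seg))
f = np.fft.rfftfreq(len(seg), 1 / fs)
tone = spec[np.abs(f - 100).argmin()]
hash_level = spec[f > 150].max()       # strongest distortion product
```

Even this mild case leaves measurable sidebands a percent or so below the tone; mitigating that without wrecking the time constants is the hard part.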

So, enjoy the processor -- I expect it to be used and enjoyed. It is still under active development, and I have a few more ideas for improvement.

John

John S Dyson Tue, 05/08/2018 - 06:14

John S Dyson, post: 456873, member: 50354 wrote:
So, enjoy the processor -- I expect for it to be used and enjoyed. It is still an active development, and I have a few more ideas for improvement.

John

Good and bad news here -- I have just completed an improvement (okay, a bugfix), and initial results show that a critical problem is indeed fixed. There was a problem with the FB (feedback) to FF (feedforward) conversion of the decay time behavior: the calculation was basically using the wrong variable for input. That variable was ALMOST the correct one -- an easy mistake to make. After this fix, low level (usually below -6dB gain) signals will have more correct decay behavior. Before the fix, the decay was just a little too fast, and certain material would manifest some gating or stereo image problems. (Basically, the decoder becomes dynamically inaccurate.)
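For readers wondering what "FB to FF conversion" refers to: a feedback (FB) compressor's detector measures the unit's output, while a feedforward (FF) design measures the input. The two can be made to match statically, but the closed loop changes the effective attack/decay, so the conversion math must be fed exactly the right variable. A toy illustration (my own sketch, not the decoder's structure):

```python
import numpy as np

def one_pole(v, s, a):
    return a * s + (1.0 - a) * v

def feedforward_comp(x, ratio=2.0, a=0.999):
    """Feedforward: the detector smooths the INPUT level; exponent
    1/ratio - 1 gives the static curve out = in ** (1/ratio)."""
    s, out = 1e-6, np.empty_like(x)
    for i in range(len(x)):
        s = one_pole(abs(x[i]), s, a)
        out[i] = x[i] * s ** (1.0 / ratio - 1.0)
    return out

def feedback_comp(x, ratio=2.0, a=0.999):
    """Feedback: the detector smooths the OUTPUT level.  The exponent
    for the SAME static curve is different (1 - ratio), and the closed
    loop changes the effective time constants -- feeding the wrong
    variable into converted time-constant math is exactly the kind of
    near-miss bug described above."""
    s, g, out = 1e-6, 1.0, np.empty_like(x)
    for i in range(len(x)):
        s = one_pole(abs(x[i]) * g, s, a)   # detector sees the output
        g = s ** (1.0 - ratio)
        out[i] = x[i] * g
    return out

x = np.full(48000, 0.5)    # constant level at -6 dBFS
ff = feedforward_comp(x)
fb = feedback_comp(x)
# Both settle onto the same 2:1 static curve (out = 0.5 ** 0.5), but
# their transients toward that value differ markedly.
```

The static endpoints agree; the dynamic paths do not, which is why getting the converted decay variable wrong produces "almost right" behavior that only shows up on certain material.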

The fix seems good -- difficult material with the problematic characteristics (and with other kinds of problems) seems to be okay now. I wouldn't have posted about the fix if it weren't working very well so far. The next step is testing various kinds of normal material.
The ETA is probably early tomorrow morning, US Eastern time.

John

John S Dyson Tue, 05/08/2018 - 18:41

The new decoder version is ready. It has the important decay time correction -- a stupid mistake on my part -- but also a really nice intermodulation distortion mitigation improvement. Suffice it to say, the sound is almost impossibly smooth (but detailed) for a DolbyA compatible decoder. The anti-intermodulation code has been better organized and has two preferred modes:

the default is --ai=med (very safe, but still lets a little intermod through),
my favorite is --ai=high (I am finding myself enjoying the clarity of the music in this mode.)
most of the anti-intermod can be disabled (leaving only some very limited quality enhancements) with --ai=off. To be evil and totally disable the enhancements beyond the sophisticated detector, use --ai=none.

When the anti-intermod is disabled, you might notice more fuzz -- really just fake highs. Sometimes that 'crunch' or 'shhh' sound is beneficial, but I have more often found it to be irritating.

More info is in the file DecoderA.pdf in the repository. THE REPOSITORY DEMOS HAVE NOT BEEN UPDATED, but the program (of course) has.
Program name: da-win-08may2018A.zip

Repository: https://spaces.hightail.com/space/tjUm4ywtDR

Please enjoy!!!
John

John S Dyson Tue, 05/08/2018 - 21:07

John S Dyson, post: 456893, member: 50354 wrote: The new decoder version is ready. It has the important decay time correction -- stupid mistake, but also has a really nice intermodulation distortion mitigation improvement. Suffice it to say, the sound is almost impossibly smooth (but detailed) for a DolbyA compatible decoder. The anti-intermodulation code has been better organized and has two preferred modes:

the default is --ai=med (very safe, but still lets a little intermod through),
my favorite is --ai=high (I am finding myself enjoying the clarity of the music in this mode.)
most of the anti-intermod can be disabled (leaving only some very limited quality enhancements): --ai=off. To be evil and totally disable the enhancements beyond the sophisticated detector use --ai=none.

When the anti-intermod is disabled, you might notice more fuzz that are really fake highs. Sometimes that 'crunch' or 'shhh' sound is beneficial, but I have more often found it to be irritating.

More info is in the file DecodeA.pdf on the repository. THE REPOSITORY DEMOS HAVE NOT BEEN UPDATED, but the program (of course has).
Program name: da-win-08may2018A.zip

Repository: https://spaces.hightail.com/space/tjUm4ywtDR

Please enjoy!!!
John

THE VERSION THAT I JUST POSTED IS BROKEN. Fix coming on the 9th. Swimmer's ear helped me make a bad decision (loss of HF above 12kHz -- sounds like a telephone.)

John S Dyson Wed, 05/09/2018 - 07:49

John S Dyson, post: 456894, member: 50354 wrote: THE VERSION THAT I JUST POSTED IS BROKEN. Fix coming on the 9th. Swimmer's ear helped me make a bad decision (loss of HF above 12kHz -- sounds like a telephone.)

The version in the repository, da-win-09may2018B.zip, has the corrected filter. I had a mixup because of my intermittent hearing, plus an unfortunate interaction between a von Hann window, a bandpass filter, and too few taps, which chopped off too much of a needed frequency range. The problem is that the filter had a fairly low cutoff frequency compared to the sample rate and tap count -- the von Hann window apparently spoils the independence of the high and low edges of the bandpass...
Bottom line -- problem fixed with a bit of a compromise: sound quality corrected, but the anti-aliasing slightly negatively impacted.
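The failure mode described is easy to reproduce with a windowed-sinc design: the von Hann window widens both transition bands, so with too few taps a low cutoff (relative to the sample rate) smears well into the passband. A sketch, using an assumed 3 kHz-20 kHz band purely for illustration (the decoder's actual filter specs aren't given here):

```python
import numpy as np

def bandpass(f_lo, f_hi, fs, taps):
    """Windowed-sinc bandpass: difference of two sinc lowpass kernels
    (cutoffs f_hi and f_lo), shaped by a von Hann window."""
    n = np.arange(taps) - (taps - 1) / 2.0
    def lowpass(fc):
        return (2.0 * fc / fs) * np.sinc(2.0 * fc / fs * n)
    return (lowpass(f_hi) - lowpass(f_lo)) * np.hanning(taps)

def gain_at(h, f, fs, nfft=1 << 16):
    """Magnitude response of FIR kernel h at frequency f."""
    spectrum = np.abs(np.fft.rfft(h, nfft))
    return float(spectrum[int(round(f * nfft / fs))])

fs = 48000.0
short = bandpass(3000.0, 20000.0, fs, 31)    # far too few taps for a 3 kHz edge
long_ = bandpass(3000.0, 20000.0, fs, 511)   # plenty of taps

# With 31 taps, the Hann-widened transitions overlap the passband:
# a 4 kHz tone (well inside the band) sags noticeably, while 2 kHz
# (which should be rejected) leaks through.  With 511 taps, both
# edges behave independently, as intended.
```

The only cures are more taps, a different window, or accepting a compromise at one edge, which matches the "fixed with a bit of a compromise" outcome above.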

All in all, the quality is amazing (very smooth sound.) The best decoder modes are --ai=med, --ai=high and even --ai=max. --ai=max is actually good enough for all but the most critical work, while I would suggest --ai=med for pro applications (a bit more intermod, but fewer frequency response oddities.) Even with the filter error, the sound was very smooth but had a bad HF cutoff. Now that HF problem is corrected.

The repo is the same as before.

Deeply sorry for any wasted time or bandwidth.

John

John S Dyson Wed, 05/09/2018 - 21:32

Got a proven good version now (had another set of ears to verify.) I simplified some things, and even included the fixed version of the broken routines that were causing me trouble!!! This is worth trying -- incredibly crisp (none of that compressor/expander fuzz.) You can get some of the fuzz back with a debugging option, as described in the DecoderA.pdf file. The program resides in the same repo; it is the da-win-09may2018F.zip version (note the 'F'.) Too many versions for one day... Included in the file is the broken routine -- punishing it by giving it away :).

It sounds the way that I planned. It is REALLY worth trying.

REPO: https://spaces.hightail.com/space/tjUm4ywtDR

John

John S Dyson Fri, 05/11/2018 - 07:42

Just wanted to update y'all. I've been getting a good response on this last release of the decoder. Also, I uploaded new versions of the demos based upon the da-win-10may2018B version of the code. Got really glowing reports -- the current work is pretty much stabilized for right now, with only some gilding of the lily going on.

The next step is a code speedup, because it barely runs realtime on 1 core of a 4-core Haswell 4770 processor. I want to get it up to at least 2x realtime on my Haswell, or 1x realtime on a Silvermont/Atom processor. It is especially nice to be able to run realtime so that the threshold is easier to adjust (run music realtime through the decoder, change the threshold, run the music again, change the threshold, etc.)

Almost always, it seems like pre-recorded music sits between -15.25 and -15.90 inclusive, and most of it sits at either end of that range: some at -15.25 and some at -15.75. A locally recorded (by a pro) recording seems to be about -14.50, but that is NOT meant to be direct to consumer. Sometimes (with certain kinds of dynamics) the threshold needs to be set to within 0.25dB, and perhaps as fine as 0.10dB, but most music doesn't test the system that much. Also, some earlier bugs made the threshold very sensitive (having to do with a slightly long release time.) The longer release time reduced the wiggle room, but I have now mitigated the problem. (The longer release time was actually due to an attempt to further filter the gain control signal, but since the latest detector improvements, that extra filtering is no longer needed.)

As it is now, there is no 'superstition' in the code -- it is working so well that unplanned work-arounds are pretty much gone.

Back to the 'threshold' or 'DolbyA tone level': apparently this consistent threshold maps to a common practice where material was encoded to certain standards when being delivered. I'm not really sure about that, but the consistent threshold needs some kind of explanation.

When you run the program on music, there are a couple of indicators that material is NOT DolbyA encoded: a terrible problem with spatial relationships (for example, Left or Right sometimes change volume at the wrong time, or instruments/voices move around too much), and an excessive loss of treble. The loss of treble is sometimes difficult to judge -- some loss of treble is expected when running the decoder, but material that isn't encoded tends to lose WAY TOO MUCH treble. Another indicator is that changing the threshold has no ability to make a significant improvement.

When the threshold is incorrect but still in the right neighborhood, say between -15.00 and -15.75dB, the problem is not usually severe -- just a few odd effects (gating at low levels or surging at high levels.)

ONE MAJOR HINT ON SUCCESSFUL USE -- NEVER ADD/REMOVE GAIN between the recording source file and the decoder, unless you modify the decoder gain to compensate. This is critical, because LEVELS ARE IMPORTANT. A willy-nilly change in gain of 3.0 dB is totally destructive unless you compensate for that change by using the --ingain switch. So, if you need to decrease the incoming gain by 3dB (maybe worried about clipping somewhere), then you need to use the --ingain=3.0 switch, which will compensate for that 3dB loss.
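The bookkeeping here is plain dB arithmetic. A tiny sketch of the compensation described above (the --ingain semantics are taken from the post; the variable names are mine):

```python
# dB-to-linear bookkeeping for the --ingain compensation described above.
def db_to_lin(db):
    return 10.0 ** (db / 20.0)

level = 0.5                              # amplitude the decoder expects to see
attenuated = level * db_to_lin(-3.0)     # someone trimmed 3 dB upstream
restored = attenuated * db_to_lin(3.0)   # what --ingain=3.0 undoes, per the post
```

Because the decoder's gain tracking is keyed to absolute level, any upstream trim must be declared this way or the expansion curve lands on the wrong part of the signal.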

Sincerely,
John

kmetal Fri, 05/11/2018 - 08:50

John, to be honest this all goes well above my technical ability. Like many other things, I continue to follow along, whether it makes sense to me or not. So I may be quiet, but I am lurking in the shadows. Cheers man! The amount of technical knowledge and technique it takes to be in your realm is intriguing.

John S Dyson Fri, 05/11/2018 - 10:47

kmetal, post: 456933, member: 37533 wrote: John, to be honest this all goes well above my technical ability. Like many other things, I continue to follow along, whether it makes sense to me or not. So I may be quiet, but I am lurking in the shadows. Cheers man! The amount of technical knowledge and technique it takes to be in your realm is intriguing.

Hey -- I want to make this useful to almost anyone with a PC and wav files :-). It is my fault that I don't do graphics programming -- I know it isn't difficult, it's just that I am so focused on the things that I do (audio processing, operating systems, I/O drivers, etc.) And I usually work on Unix-like systems (e.g. FreeBSD -- which I partially wrote; Linux -- which an 'enemy' of mine wrote (not really an enemy); and various other things like Solaris, etc.)

Basically, to use the program, you have to unpack it. It comes in a zip file; nowadays you can just click on it in Windows to unpack it.
Next -- the easiest way to use it is to open a command prompt (use the command line) and change to the directory that has the decoder in it.
Grab whatever .wav file you want to check to see if it is DolbyA encoded (usually material from the late 1960s through late 1980s.) Typically it is from a CD or a download that has that ugly 'harsh' digital sound, as they used to call it.

The program will work with regular CD wav files (not ideally, but it will work -- and if decoding is needed, the result will sound better than the original no matter what), so you grab the .wav file for input and issue this kind of command:

da-win --info --thresh=-15.50 <infile.wav >outfile.wav

da-win is the least-common-denominator program; newer machines will run faster with the 'da-avx' version instead.
Notice the less-than and greater-than signs? The filename next to the less-than sign is the input file (the wav file that you start with), and a new wav file will be created with the name after the greater-than sign.
It will take approximately the same amount of time as the music takes to play. With the --info switch that I showed in the example, it will give a running status of the decoding operation (to keep you from thinking that something is wrong.)

That number '-15.50' after the --thresh= switch is a setting that depends on the recording itself. Usually the ideal number is between -15.25 and -15.90, but -15.50 is a good start. With the setting off, it won't sound perfect, but you can increase or decrease it by 0.25 to get close enough.

Good luck if you do try (best wishes anyway :-)), but it is my fault that I just don't do GUI programming, and I don't have time to muddle through it right now while working on the details of these audio projects.

kmetal Fri, 05/11/2018 - 13:33

I checked out the sample of 'bicycle' and immediately noticed a difference in volume as well as tone. This is on an iPhone 6 / Firefox. If louder is better, and frequency response changes with gain changes, wouldn't the decoded example have to preserve the gain staging or level for an accurate listening test?

Is that what you're addressing here:

John S Dyson, post: 456932, member: 50354 wrote: ONE MAJOR HINT ON SUCCESSFUL USE -- NEVER ADD/REMOVE GAIN between the recording source file and the decoder, unless you modify the decoder gain to compensate. This is a critical thing, because LEVELS ARE IMPORTANT. A willy-nilly change in gain of 3.0 dB is totally destructive unless you compensate for that gain change by using the --ingain swtich. So, if you need to decrease the incoming gain by 3dB (maybe worried about clipping somewhere), then you need to use the --ingain=3.0 switch which will compensate for that 3dB loss.

John S Dyson Fri, 05/11/2018 - 14:05

kmetal, post: 456944, member: 37533 wrote: I checked out the sample of ‘bicycle’ and immediately noticed a difference and volume as well as tone. This is on an iPhone 6/ Firefox. If louder is better, and frequency response changes with gain changes, wouldn’t the decoded example have to preserve the gain staging or level for an accurate listening test?

Is that what your addressing here:

The problem with a lot of the music normally distributed is that the DolbyA encoding has been left intact. That means a form of compression that makes the highs unnaturally harsh, flattens the spatial relationships, and just doesn't sound natural like the original recording (when the recording is "natural", as with the Bread songs.) The DolbyA encoding (damage) can only really be detected with headphones or a good speaker system. If listening on a normal computer speaker, the DolbyA encoded version might actually sound better (clearer) because it artificially boosts the highs.

Think of this -- back in the days of tape, they used systems called DolbyB, DolbyC, or even DolbyS. Those systems boosted different parts of the audio spectrum (mostly higher frequencies) and made the sound a little harsh -- similar in some limited ways to DolbyA. People were used to those systems. When the tape played back, the recorder would usually have the Dolby (B, C, S) decoder engaged, and the harshness/damage would go away. The consumer Dolby systems were pretty much automatic to use. Sometimes, when playing a consumer Dolby tape without decoding, people would notice the harsh sound -- and sometimes it might even be beneficial in some instances.

The pro Dolby systems (DolbyA and later DolbySR) were not quite as automatic, especially when exchanging tapes. I suspect that the master tapes were copied to digital media many years ago without decoding. It was typical to leave the DolbyA encoding on whenever exchanging tapes, then do the final decode at the endpoint. I don't think that DolbyA decoding was always part of the normal procedure in producing the digital media sent to consumers. This MIGHT have been the genesis of the idea that early CDs sounded harsh, and people got used to it. However, when played on a good system, the decoded version of the music can knock your socks off -- one person mentioned that they were crying because the music was so pretty after being properly decoded.

I am trying to give the listening community a gift of really, really pretty music -- equivalent to a master tape for many of their recordings. It isn't necessary to use MY DolbyA decoder; I suspect that in the future more decoders will be written (perhaps an alternative design, or by someone who just wants to do it) to give more people the gift of REALLY listening to the music for the first time.

My decoder is free for consumer use, and is the best that I can do to help.

John

kmetal Fri, 05/11/2018 - 15:06

John S Dyson, post: 456945, member: 50354 wrote: Think of this -- back in the days of tape, they used a system called DolbyB or DolbyC or even DolbyS. Those systems boosted different parts of the audio spectrum (mostly higher frequencies) and made the sound a little harsh. People were used to the systems -- similar in some limited ways like DolbyA. When the tape played back, the recorder would usually have the Dolby(A,B,S) decoder engaged, and the harshness/damage would go away. The consumer dolby systems were pretty much automatic to use. Sometimes when playing the consumer Dolby tape without decoding, people would notice the harsh sound -- and sometimes it might be beneficial in some instances.

I experienced the odd effects of dbx noise reduction on a commercial master when I accidentally left the button engaged on my Tascam 424mk3 Portastudio. Is this, in a very general sense, similar to what you're describing, but at the pressing plant/mastering stage, or on the end listener's system? It was the mid '90s before I started buying my own music, for my Magnavox boom box, so I don't know a whole lot about the early era of CDs.

As far as your decoder: if I imported both examples (decoded/raw) into my DAW (Samplitude Pro X3), does it matter if I adjust the channels to a subjectively equal level? I'm just curious. I'm getting the vibe that your decoder is more about enjoying things rather than a purely technical achievement?

I work out of Normandy Sound, which has a super fun LEDE-style control room. It'd be interesting to check out your examples up there. Having opened in '78, I'm guessing the Dolby varieties you're talking about were commonplace for a period of the studio's history.

Would it be fair to compare your decoder with, say, a Fraunhofer/third-party codec, as opposed to using the standard iTunes codec? Are you essentially modifying the decoding process? I guess what I'm trying to ask is: is the problem in the quality of the codec itself? Or was the problem the presence/lack of decoding on the CD or listening system?

Thanks for your patience; it looks like you're involved in some really cool projects.

John S Dyson Fri, 05/11/2018 - 16:02

kmetal, post: 456946, member: 37533 wrote: I experienced the odd effects of dbx noise reduction on a comercial master, when I accidentally left the button engaged, on my Tascam 424mk3 portastudio. Is this in a very general sense, similar to what your describing but at the pressing plant/mastering, or the end listeners system? It was the mid 90’s before I started buying my own music, for my magnavox boom box, so I don’t know a whole lot about the early era of cds.

As far as your decoder, if i imported both examples (decoded/raw) into my Daw, (samplitude pro x3), does it matter if I adjust the channels to a subjectively equal level? I’m just curious. I’m getting the vibe that your decoder is more about enjoying things rather than a purely technical achievement?

I work out of Normandy Sound, which has a super fun LEDE style control room. It’d be interested to check out your examples up there. Having opened in ‘78, I’m guessing the Dolby varieties your talking about were commonplace for a period of the studios history.

Would it be fair to compare your decoder with say a franhaufer/3rd party codec, as opposed to using the standard iTunes codec? Are you essentially modifying the decoding process? I guess what I’m trying to ask is, is the problem in the quality of the codec itself? Or was the problem the presence/lack of presence of decoding on the cd or listening system?

Thanks for your patience, it looks like your involved in some really cool projects.

Yeah -- DBX was much more aggressive than DolbyA -- even though much of the time, for real-world work, DolbyA was better. When you play a DolbyA encoded tape, it is just slightly over-compressed and a little harsh in the HF, but DBX is evilly aggressive. DBX uses LOTS of compression at a moderately high compression ratio over a very wide dB range. DolbyA works over about a 30dB range, but only changes the gain by either 10dB (for the 20-80Hz, 80Hz-3kHz, and 3kHz-20kHz bands) or 15dB (for the 9kHz-20kHz band). Those bands are not actually absolute -- those are the nominal ranges of the filters, and the filters have very wide skirts, so the 80Hz-3kHz band actually affects the signal to above 12kHz!!! DBX can easily swing the gain by 30dB. Also, since DBX is a single band, when you listen undecoded the various components of the sound tend to cause lots of pumping. DBX was never meant to be tolerable when listening undecoded. Most of the earlier Dolby systems were somewhat tolerable -- but certainly not ideal to play back and listen to without decoding.

Even considering that DBX is a more aggressive system (with some significant foibles), it is actually easier to write a software emulator for it. The single-band nature, somewhat simple attack/decay profile, and constant compression/expansion ratio would make a DBX decoder a relatively straightforward project. DolbyA is a much more complex beast -- multiple bands, each filter with different specs, an odd/nonlinear expansion curve, a mix of the two HF channels, and worst of all -- a feedback compressor/expander scheme that is impossible to emulate directly in software. So the implementation of the software DolbyA decoder was very challenging -- there was no template to design against. None of the hardware implementations could be emulated directly (not only for legal reasons, but also for mathematical impossibility.) There were some patents that supposedly describe a DolbyA decoder, but they were both incomplete and would never really produce a pro-quality decoder by today's standards.
I believe that the Sony patent version could be made to work, but I doubt that it would ever perfectly track a real DolbyA unit without some other 'trade secrets.'
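The band layout described above can be captured in a small table. This is my own summary of the nominal figures given in this post (not Dolby documentation), and as noted, the real filters have very wide skirts well beyond these edges:

```python
# Nominal DolbyA band split and maximum gain swing, per the description above.
# (Edges are nominal only -- the real filters have very wide skirts.)
DOLBY_A_BANDS = [
    {"name": "band1", "range_hz": (20, 80),       "max_gain_db": 10.0},
    {"name": "band2", "range_hz": (80, 3000),     "max_gain_db": 10.0},
    {"name": "band3", "range_hz": (3000, 20000),  "max_gain_db": 10.0},
    {"name": "band4", "range_hz": (9000, 20000),  "max_gain_db": 15.0},  # overlaps band3
]

def bands_covering(f_hz):
    """Names of the bands whose NOMINAL range contains f_hz."""
    return [b["name"] for b in DOLBY_A_BANDS
            if b["range_hz"][0] <= f_hz < b["range_hz"][1]]
```

Note that the top two bands overlap above 9kHz, which is the "mix of the two HF channels" complication mentioned above.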

Since DolbyA doesn't really affect the overall level very severely, as long as you compare the material at approximately the same levels (I am speaking of comparing a raw copy with a decoded copy), all is okay -- you CAN do an accurate comparison for your own listening taste and enjoyment. If you are speaking of USING the decoder, it is VERY critical to keep the levels the same. In fact, the RAW copies on my repo are NOT at the correct level: my mp3 converter automatically does a level normalization, but for audible-comparison purposes that is okay. I tried to keep (for comparison reasons) the levels of the RAW and decoded versions approximately the same. IF YOU WOULD LIKE A SAMPLE OF THE RAW/UNDECODED VERSION WITHOUT THE LEVELS BEING MOLESTED, let me know and I'll send you a few samples. Or you can find a few examples on old CDs that might be sitting around :).

The DolbyA emulating decoder is NOT a codec, but rather similar in some ways to an audio AGC compressor (the kind of device that can be used to make commercials loud, or keep music at a constant volume.) The difference is that this expander undoes the effects of a DolbyA unit. A DolbyA unit can be used to encode the music by making parts of it louder in a controlled way. Historically, a device that helps make audio louder by automatically controlling the gain was called a 'compressor' -- that is NOT the same as an MP3 software compressor, which compresses the space/time needed to transfer the music to another place. The kind of compressor/expander associated with this project is about SIGNAL LEVEL compression and SIGNAL LEVEL expansion.

So, there are at least TWO kinds of audio 'compressors' -- one is a LEVEL compressor (or expander), and that is what this project is partially about.
The other kind of compression is about saving SPACE (disk space or network bandwidth -- things like that.)
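The two meanings can be shown side by side. The static gain curve below is a generic textbook compressor law for illustration only (not DolbyA's actual curve), and zlib stands in for the "space" kind:

```python
import zlib

# 1) LEVEL compression (dynamics): shrink the dB range of a signal.
def level_compress_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Above the threshold, output rises 1 dB per `ratio` dB of input."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# 2) DATA compression (space): shrink the bytes needed to store audio.
payload = b"audio " * 1000
packed = zlib.compress(payload)
```

A level compressor changes what the audio sounds like; a data compressor (lossless, at least) changes only how many bytes it occupies.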

Nice that we have such a confusing overlap in usage, isn't it? :).

John

kmetal Fri, 05/11/2018 - 17:08

John S Dyson, post: 456947, member: 50354 wrote: DBX decoder a relatively straightforward project. DolbyA is a much more complex beast -- multiple bands, each filter has different specs, odd/nonlinear expansion curve, mix of the two HF channels, and the worst -- an impossible-to-software-emulate feedback compressor/expander scheme. So, the implementation of the software DolbyA decoder was very challenging -- there was no template to design against. None of the hardware implementations could be software emulated (not only for legal reasons, but also for mathematical impossibility.) There were some patents that supposedly would do a DolbyA decoder, but there were both incomplete and would never really produce a pro-quality decoder by todays standards. I believe that the Sony patent version could be made to work, but I doubt that they would ever make it perfectly track a real DolbyA unit without some other 'trade secrets.'

As an audio engineer, the only multiband compression plugin I've used that doesn't have flagrant phase and depth anomalies is the FabFilter offering. In the heyday of terrestrial radio, there was also each station's various broadcast compression and limiting. I can imagine a lot of mixers cringing at the presentation of their work in that light.

I could see a general use for the multiband expansion section of your emulator: it could be useful on tracks that suffer from over-compression, or in a mastering scenario to bring transients back.

John S Dyson, post: 456947, member: 50354 wrote: IF YOU WOULD LIKE A SAMPLE OF THE RAW/UNDECODED VERSION WITHOUT THE LEVELS BEING MOLESTED, let me know, and I'll send you a few samples. Or, you find a few examples from old CDs that might be sitting around :)

There's no doubt I've got a few hundred CDs in a box, waiting for the Plextor / dBpoweramp monster.

How can I tell which CDs need the DolbyA decoder emulation? My collection spans most genres and eras, from Charlie Parker to Cannibal Corpse.

I also find it generally fascinating that the tools of mathematics are remarkably far-reaching, but still have a hard time with certain physical properties. Having built several studios over the years, the relationships between numbers, theory, and physical reality interest me, in all the various aspects of acoustics and electronics.

John S Dyson Fri, 05/11/2018 - 20:03

kmetal, post: 456948, member: 37533 wrote: As an audio engineer, the only multi-band compression plugin I’ve used that doesn’t have flagrant phase and depth anomalies is the fabfilter offering. In the heyday of terrestrial radio, there was also the stations' various broadcast compression and limiting on top. I can imagine a lot of mixers cringing at the presentation of their work in that light.

I could see a general use for the multi-band expansion section of your emulator being useful on tracks that suffer from over-compression, or in a mastering scenario to bring transients back.

There’s no doubt I’ve got a few hundred CDs in a box, waiting for the Plextor / dBpoweramp monster.

How can I tell which CDs need the DolbyA decoder emulation? My collection spans most genres and eras, from Charlie Parker to Cannibal Corpse.

I also find it generally fascinating that the tools of mathematics are remarkably far reaching, but still have a hard time with certain physical properties. Having built several studios over the years, the relationships between numbers, theory, and physical reality interest me, in all the various aspects of the acoustics and electronics.

First -- the math -- yep, some of the math is not obvious, but it is nice to have programs (all self-written) that do most of the hard work :-).

WRT detecting DolbyA on disk or download:

Without a practiced ear, it is kind of tricky to figure out which CDs might be encoded. Perhaps the first hint is that the recording sounds like it has an unnaturally hyped high end. I have a Brasil'66 disk (Sergio Mendes) that doesn't sound too bad, but when decoded, it opens up into beauty. We are all (including me) so very used to the overhyped sound of DolbyA encoding that it is hard to tell until one gets experienced.

I can describe the effect of DolbyA compression -- it is a compressor that is faster than one would want to use for any sane musical purpose -- practically the fastest compressor that can be used on music without the music itself intermodulating with the gain. One might think that a super fast attack/decay could be used at high frequencies, but that is not true. DolbyA hits the practical limit of 2msec attack/60msec decay at low frequencies and 1msec attack/30msec decay at high frequencies. The only reasons faster attack/decay isn't used are that the circuitry needed to support a faster attack is more intricate than the benefit would justify, and that the attack time is already about as fast as the ears really need. So, any overshoot resulting from the compression is simply clipped. Had Dolby used a faster attack, he would have needed a dynamic attack/decay scheme in which the decay accelerates along with the faster attack, so that fast transients wouldn't produce weird ducking effects. That is one reason why the design clips instead of using a dynamically fast decay. Theoretically, the dynamic approach is slightly higher quality than clipping, but them's the breaks.
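The attack/decay figures above can be sketched as a simple one-pole envelope follower -- a minimal Python illustration of the kind of sidechain timing being described (this is my own toy code, not the decoder's actual implementation; the time constants are the high-band figures quoted above):

```python
import math

def envelope_follower(samples, sr, attack_ms, release_ms):
    """One-pole peak envelope follower with separate attack and
    release time constants, as used in compressor sidechains."""
    a_att = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        x = abs(x)
        # rise quickly when the signal exceeds the envelope,
        # fall slowly when it drops below
        coeff = a_att if x > env else a_rel
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A 10 ms full-scale burst followed by silence: the envelope rises
# with the 1 ms attack, then falls with the 30 ms release.
sr = 48000
sig = [1.0] * 480 + [0.0] * 4800
env = envelope_follower(sig, sr, attack_ms=1.0, release_ms=30.0)
```

With a 1 msec attack, the envelope is essentially pinned to the burst within a couple of milliseconds -- which is exactly why any residual overshoot is small enough to just clip.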

So, with such a fast attack/decay, the music becomes noticeably denser, but the spatial relationships (the perceived depth) are also compressed into near flatness, and the sense of L and R becomes squeezed or moves around because of the independent compression on each side.

When the super fast attack/decay isn't hidden from perception, there is always the possibility of the obvious compression sound at high frequencies.

There is one more effect -- it doesn't happen very often, except on certain natural sound recordings -- the bass is distorted. It is kind of an ugly effect, and hard to describe. If you listen to the Scarborough Fair example (raw version), you might be able to perceive it -- the distortion is there.

So -- 4 hints: compressed spatial relations, a compression sound mostly at high frequencies, an unnatural hyping of the high frequencies (sometimes a hard edge on voices), and distorted bass.

These don't sound like major issues, but when added all together there CAN be a major improvement from doing a decode -- although when listening on normal computer speakers, the non-decoded material just might sound better!!!

John

John S Dyson Sat, 05/12/2018 - 04:27

I have gotten questions about the DolbyA decoder -- why didn't I do a DolbyA encoder also? Well -- there are at least two reasons. The first and most important is that most of the problem is copying music from old archives so that it can be used (processed/mixed/finalized/produced) with current technology. The second reason is that DolbyA is so much weaker NR than more recent technologies that I don't suggest using it for encoding. Perhaps I could do an encoder that produces much less distortion than anything else (just as the decoder has less distortion than anything else), but why encode into DolbyA? If/when I do a DolbySR decoder, then an encoder with that technique might be more useful. The big problem with SR is that it is incredibly more complex than DolbyA.

Please note that this DolbyA compatible decoder truly sounds similar to a real DolbyA (sans fuzz, distortion, lack of clarity) with a very similar frequency response balance. (Even if someone uses the cat22/360/361 as a design basis in digital form, the result will not likely sound similar, because the filters don't emulate well. I found that my decoder would only sound similar (but cleaner) to another known DolbyA decoder if I used the DolbyA design as a reference.) I rejected those filters -- I was worried that I couldn't come up with something compatible that sounded more accurate, but I was lucky to find a better solution.

Doing a REALLY compatible/similar sounding DolbyA encoder would be a project similar in scope to the decoder project, but with even fewer users and less usefulness. Frankly, even if I had a reel-to-reel deck, I would NOT bother encoding anything into DolbyA form. DolbyA MIGHT be more useful than SR for very long term archival purposes (because of the simpler DolbyA decoder design), but still -- I'd try to find something BETTER QUALITY than DolbyA. Knowing what I know now, I am somewhat suspicious of the quality of any dynamic gain scheme that cannot mathematically be reversed, and DolbyA (while close to being reversible) is not reversible enough.

My criteria for a long-term analog-compatible NR (and possibly dynamic range extension) system would be a constant compression ratio, multi-band system with mathematically designed, analog & digital compatible characteristics for the filters. At the least, if properly executed and run on a deck with a fairly flat response, it would be totally reversible (with the distortion products cancelling even better.)

So, if someone wants a nearly analog-compatible NR system, I would suggest something closer to the HI-COM (AFAIR -- not sure) type of design. The multi-band approach is good, but the DolbyA filters are kind of finicky, and I'd rather see the system designed from a specification rather than from a HW design. If there is an interest, and it would really be used if it works -- I could do a rough specification and an implementation of both a HW and SW compatible design that has the best features of DolbyA and DBX. One good thing about a compatible SW design is that it can be prototyped using software modules that act very similarly to real hardware. For example, I'd base the design on dB-linear technology (like the THATCORP stuff), and use standard filter design techniques that can be implemented in both HW and SW, e.g. well-constrained IIR filters that emulate well in HW. FIR filters can be more ideal, but are also not easy to emulate in HW. After doing a rough design and a SW implementation (probably 2X easier than my DolbyA effort), a real HW design could be started. Before that, I'd do as much of a SPICE simulation as possible.
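The "constant compression ratio" idea can be illustrated with a tiny dB-domain sketch of why such a system is exactly reversible (my own illustration, not any real system's math; the ratio and reference level here are hypothetical):

```python
RATIO = 2.0      # hypothetical constant compression ratio (dB in : dB out)
REF_DB = -10.0   # hypothetical reference level, dB

def encode_gain_db(level_db):
    """Gain the encoder applies: the distance from the reference
    level is halved on tape (2:1 compression in dB)."""
    return (level_db - REF_DB) * (1.0 / RATIO - 1.0)

def decode_gain_db(level_db_on_tape):
    """Exact inverse: expand by the reciprocal ratio."""
    return (level_db_on_tape - REF_DB) * (RATIO - 1.0)

# Round trip: a -40 dB signal rides at -25 dB on tape (15 dB of
# noise margin), and decoding restores exactly -40 dB.
level = -40.0
on_tape = level + encode_gain_db(level)
restored = on_tape + decode_gain_db(on_tape)
```

Because the gain law depends only on level and a fixed ratio (no thresholds or knees), the decode is an exact algebraic inverse of the encode -- which is the "totally reversible" property being argued for above.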

The end result of such an effort would be at least 25dB of NR, almost no level sensitivity, almost no modulation-type noise, very good transient response, and much lower distortion than almost any other system. Also, encoding/decoding could be done on a computer or in hardware, and the encoded result could be designed to be listenable. So, it would have all of the advantages of DBX, DolbyA and DolbySR, and almost none of the disadvantages of any.

But new DolbyA encoding operations are only useful in museums where there are demos of ancient technologies :-).

John

cyrano Mon, 05/14/2018 - 02:43

Thanks for this thread, folks. And thanks for your efforts in programming this tool, John!

It solved another mystery for me. I suffer from hyperacusis (and tinnitus) and am very sensitive to the 6-9 kHz range. It makes listening to some CDs impossible. I always wondered why one CD was fine while another one wasn't. And I think this is part of the explanation, even if I haven't a clue how many CDs are affected.

Any chance of a binary for BSD, Linux or OSX?

John S Dyson Mon, 05/14/2018 - 03:48

cyrano, post: 456985, member: 51139 wrote: Thanks for this thread, folks. And thanks for your efforts in programming this tool, John!

It solved another mystery for me. I suffer from hyperacusis (and tinnitus) and am very sensitive to the 6-9 kHz range. It makes listening to some CDs impossible. I always wondered why one CD was fine while another one wasn't. And I think this is part of the explanation, even if I haven't a clue how many CDs are affected.

Any chance of a binary for BSD, Linux or OSX?

Heh -- don't know if you know this, but I wrote a big part of the original FreeBSD kernel (look at the copyrights in the /sys/vm directory)... :). The AT&T/Berkeley agreement took away about 1/3 of the BSD kernel from us, but two of us (myself and David Greenman) rewrote the missing pieces in two weeks!!! That was back in the day when I had lots of energy, and I did the work while working full time at AT&T/Bell Labs... Yeah -- I had special permission to do the FreeBSD work while working at Bell Labs. After AT&T tried to scuttle the project, AT&T/Bell Labs then offered me a research grant to do my FreeBSD work!!! But then the Bay Area tech boom beckoned, and it was a complicated odyssey for me after that -- long story. I spent many hours making the VM system on FreeBSD incredibly efficient -- I could make X Windows work in 4MB (yes, MB, not GB!!!), but of course it was slow. X Windows could run efficiently in 8MB, but now everyone has 4GB -- 500X more memory!!!

Anyway -- I am mostly using Linux just as a platform nowadays -- the testosterone thing on FreeBSD was overshadowing doing good work, so I started working on other things.

Linus was always a jerk, TRYING to misinterpret what I'd write, but I am using his baby right now -- Linux.

All of my new development is on Linux, so I am able to add the unexciter to the distribution also. Since you mentioned the 6-9kHz range: the DolbyA compatible decoder does some special things to fix the problems with DolbyA encoders mashing up the 9.5kHz range (approx ±500Hz), and the unexciter helps to remove the messed-up frequency response from the Aphex Exciter.

The best way to use the unexciter is typically to simply pipe to/from the program, with a command like this: "unex-avx --dr=0.156". It will help to make the sound more natural, but unfortunately it loses between 6 and 9dB of signal, so you'll have to make that up.

The decoder is used exactly the same way as the Windows version, so those docs apply equally. The program name is normally 'da-avx', but if you want to use the non-shared-lib version, then da-avx-nosharedlib. The program filename is NOT magic, so you can rename it if you wish.

You can use the decoder on any recent CPU (i3-i9 only, no Atoms). I am not currently building the Atom version on Linux -- I need to fix some Makefiles to do that. So, it needs a CPU like the 4770 (Haswell) or better -- any CPU with the AVX instruction set. It might work on the 3000 series like the 3770, but I am not sure.

Wanted to make this available as soon as possible, so I posted it on the distribution repository. The programs can run anywhere in your PATH, and also I included both shared lib and non-shared lib versions, so that if your shared libs aren't compatible, but your kernel is, then you can just use the big binary with everything in it. The program was built with clang/llvm, but doesn't need any of that on your system (clang is producing significantly better code for my vector stuff than GCC.) If you use the shared lib version, it doesn't require any very special shared libs like sox or anything like that, so it should just hopefully work for you without needing the nosharedlib version.

To unpack, simply do the command: "tar xovf dadistLinux.txz", and it will produce an underlying directory: dadistLinux, with the 4 binaries in it. Pick the two binaries that work for you -- one each of the da-avx/da-avx-nosharedlib and unex-avx/unex-avx-nosharedlib. ONE MORE THING -- always run the DolbyA compatible decoder on the file directly through sox, where you might want to do a 'gain -3' on the sox input. Then, on the decoder, use the '--ingain=3' to compensate. Also, if you use the unexciter (unex-avx), use it AFTER doing the DolbyA decode.
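Putting the steps above together, a hypothetical end-to-end invocation might look like the following (the --ingain=3 and --dr=0.156 flags are from the descriptions above, but the exact sox piping syntax is my guess at one workable form -- adjust to your setup):

```shell
# Drop 3 dB going in via sox, have the decoder restore it with
# --ingain=3, and run the unexciter only AFTER the DolbyA decode.
# The unexciter loses 6-9 dB, which has to be made up afterwards.
sox input.wav -t wav - gain -3 | \
  da-avx --ingain=3 | \
  unex-avx --dr=0.156 > output.wav
```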

The dadistLinux.txz file resides on the distribution repository: https://spaces.hightail.com/space/tjUm4ywtDR

It is NOT release controlled at all -- so I won't know what is in there a week from now, but at least this should be a version that works VERY well!!!

John

John S Dyson Tue, 05/15/2018 - 15:26

There is a new version (again) on the repository -- it is about 20% faster on AVX capable machines. I haven't uploaded a Linux version yet (just Windows), will do Linux soon, but the Linux version always seems to run better than the Windows version anyway. The up-to-date file has 15may in the filename. I should upload the Linux version in a few hours (probably should have done the Linux version first -- Linux is the environment where I do development -- it is faster/more fluid.)
Other than speed, the biggest technical difference is that the --ai=max mode is a bit more aggressive yet at removing the intermod. If you wanna hear how bad the intermod sounds -- try the switches --ai=none --raw=8192. They disable all of the special intermod handling. You'll notice that the sound is more 'crisp'. It has that 'digital processing' sound, and it is mostly due to two things: direct mixing of the gain control signal with the audio (the gain control signal has a wide spectrum, so it splats sharpness all over), and the sidebands mixing with the sample rate -- causing even more aliasing. That kind of sound is even worse because of the wrap-around to lower frequencies. My decoder removes by far most of the mathematically unnecessary parts of the intermod (esp. the --ai=high and --ai=max modes.) The default --ai=med mode is less aggressive -- doing fewer additional special operations -- so a few more intermod products are left in. Eventually --ai=high will likely be the default mode. --ai=max is mostly meant for very hard sounding music that has lots of dynamics and high frequencies, where the minor disadvantages of --ai=max are less bad than the intermod itself.

John

John S Dyson Wed, 05/16/2018 - 12:39

Good news -- sort of -- I have an improved version for you again. Someone on another forum turned me on to Howard Jones (entertaining music), and he was asking me if it was DolbyA encoded. I replied yes -- but internally I thought that there was a minor problem somewhere. I eventually told him that I thought there was a 99% chance it was DolbyA, but still something bothered me. I did some more listening, and figured out the problem. There was an attack time problem caused by overly aggressive anti-alias code (in all modes.) So, I reverted the code to be less aggressive -- and probably technically more correct. The problem was that at low levels, in the 80Hz-3kHz band, the attack was too sluggish. This corrects that problem entirely.

The filename is da-win-16may2018A.zip.

The code resides again on the following repository: https://spaces.hightail.com/space/tjUm4ywtDR

kmetal Mon, 05/21/2018 - 10:00

Got me some new copies of GnR, Creedence, and Justin Timberlake CDs at the bargain store. I thought Creedence and GnR might have used Dolby, besides being great albums. I need some reference commercial masters for some things I’m working on; I’d be curious how the decoder affects the sound.

John S Dyson Mon, 05/21/2018 - 11:32

kmetal, post: 457067, member: 37533 wrote: Got me some new copies of GnR, Creedence, and Justin Timberlake CDs at the bargain store. I thought Creedence and GnR might have used Dolby, besides being great albums. I need some reference commercial masters for some things I’m working on; I’d be curious how the decoder affects the sound.

I cannot give you anything with tones (someone else loaned them to me under NDA), but a fairly typical example resides on the distribution repository -- cdemo.mp3 is the processed version. corig.mp3 is the original -- DolbyA encoded version. Let me describe:

The original has lots of excess HF -- hard 'edges', and has a kind of ugly background effect. The harmonica is incredibly harsh. (This example is more extreme than some, because its average level is fairly low -- if the material is compressed more, then the DolbyA compression becomes less obvious.)

The processed version probably leaves you a bit 'wanting' until you hear the singing voice. Note that the voice is much more natural, and the instrument is actually cleaner.

The processed version needs a bit more EQ -- but even then it sounds very natural. 3-4.5dB of EQ is much more natural than a bunch of gain that is flopping around 10-15dB within a few hundred msec and changing as fast as 1msec. So, one has the choice between all kinds of gain changes with ugly distortion products, or a fixed frequency response modification of several dB :-).

Also, the processed version can be made to be a bit more 'hard edged' if you use --ai=med or --ai=off command line switches, which reverts it back closer to the expected distortion from a real DolbyA.

John

John S Dyson Tue, 05/22/2018 - 06:20

kmetal, post: 456948, member: 37533 wrote:
I could see a general use for the multi-band expansion section of your emulator being useful on tracks that suffer from over-compression, or in a mastering scenario to bring transients back.

I just re-read your message -- but in a different mindset. When reading previously, I was totally focused on the DolbyA compatible decoder. However, my mindset is different right now as I have been resurrecting my ABBA collection -- doing a re-DolbyA decode and cleaning up the recordings with various tricks (including expansion.)

You mentioned 'expansion', and I might have an interesting software toy to talk about in that vein. Way before I started on the DolbyA compatible decoder, I had been (and still am) developing a really good expander. This expander is almost totally phase linear -- there is just a bunch of multi-band gain changing happening (the processing is WAY more complicated than that -- changing gain has complex consequences.) Whenever there has been a choice between using up CPU time and quality, I have chosen quality in this design. It has just been updated (slightly) based upon some of the DolbyA compatible decoder work, but the changes are not complete yet. This new kind of expander is very different from normal designs (I mean, REALLY different.) The expander has 8 bands and all kinds of modes, but the default modes are pretty good most of the time.

I have semi-promised a friend that I would release the expander soon (purely experimental, needing more work/updating), but it is still almost amazing. Detecting NORMAL levels of expansion (when needed) is probably impossible for anyone (really). It auto-adapts over a fairly wide range, so it is fairly easy to use (modulo the very difficult cases that need lots of expansion.) The expander (early version) won't be ready until late this coming weekend, but I can give you an idea: it really only needs two switches -- the expansion ratio and the threshold. It doesn't even need the max/min gain control, although it does have switches to support difficult cases. The expander has been in existence for over a year; I have called it a 'restoration processor' (which I found to be a poor choice) and an 'uncompressor', which I rather like. It is NOT a normal expander.
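The uncompressor's internals obviously aren't public, but the "ratio and threshold" description maps onto the textbook downward-expansion gain law; a minimal sketch of just that law (my own illustration -- the parameter values are hypothetical, and the real program clearly does far more):

```python
def expander_gain_db(level_db, threshold_db=-45.0, ratio=1.5):
    """Downward expansion below the threshold: every dB the input
    falls under the threshold adds (ratio - 1) dB of further cut.
    Above the threshold the gain is unity (0 dB)."""
    if level_db >= threshold_db:
        return 0.0
    return (level_db - threshold_db) * (ratio - 1.0)

# -60 dB input, -45 dB threshold, 1.5:1 ratio:
# 15 dB under threshold -> pushed a further 7.5 dB down.
g = expander_gain_db(-60.0)
```

With only two parameters like this, the curve is fully determined -- which fits the claim that no max/min gain switches are needed for the normal cases.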

I do not feel good about opening up my private demo site, but I'll do so for a few days at most. I have some copies of ABBA recordings that needed help, and some 'cleaned up' versions. Note that 'expanding' wasn't the only step; it was a combination of DolbyA decode, unexciter, 'uncompressor', and an IMPORTANT special script that I haven't named yet. You'll notice that the music 'opens up' and sounds very natural for ABBA. You can clearly hear the results of both the DolbyA decode and the separate expansion (with experience.)

The original versions have -orig in the filename. The processed/expanded/etc versions don't have -orig in the filename. I don't feel comfortable keeping these open longer than a short time, so give them a listen. If you run into this posting after the repository disappears (probably Thursday/Friday), I'll produce some other demos on request.

PS: I have a much better ABBA DolbyA decoding coming forward in a few hours -- it takes a long time to run the entire corpus. However the demos on the site are indicative enough.

REPOSITORY (temporary):
https://spaces.hightail.com/space/wU6nwJD4bW

John S Dyson Tue, 05/22/2018 - 14:01

kmetal, post: 457079, member: 37533 wrote: That expansion processing sounds really interesting. I tried the link but it didn’t play on my phone this time. If you upload the samples here, it generally works well. That way there are no dead links down the road, and you don’t have to have your private site opened up.

I didn't want to upload large pieces of commercial material. I don't suggest downloading them rather than just listening, but you might try that also. However, if you wait about 8hrs*, I am doing another run (with the decoder running at --ai=max instead of --ai=vhigh), which allows me to open up the high end more with even less distortion (the best of both worlds.) Compressors/expanders produce a lot more distortion than people sometimes think (I mean FAST ones -- not the nice 10msec attack time things; the fast 1msec-and-faster ones produce splats all over the place.) Then, on top of that, the splats alias -- there is no 20k limit for intermod. So that --ai=max really helps in difficult cases where music has lots of HF energy. The biggest disadvantage is a slightly dynamic high end (not quite correct), but it sounds a lot better than the splats (the 'splats' are not all that strong, but just enough to give an edge or softness or fuzz to the sound.)
* I have scripts that automatically run the entire available corpus of ABBA (the best copies only), and that is over 100 songs -- it takes about 1.5hrs of DA decoding, 3-4hrs of unexciting/expansion, and about 30min-1hr of proper ambience restoration. So, figure about 8hrs for the demos to be updated. I did a quick check -- and they will be even more beautiful (almost 100% sans the old ABBA crunch.)

I'll still have to take them down in a day or so, but it is worth demoing -- maybe I can help motivate people to buy some ABBA CDs to decode/cleanup for personal use. I am going to try to publish the scripts that I used to clean up the material, and I am also offering the various pieces of software.

John

kmetal Tue, 05/22/2018 - 15:24

John S Dyson, post: 457083, member: 50354 wrote: I didn't want to upload large pieces of commercial material.

Right on. I wonder if it’s possible via YouTube, people post full albums all the time. I’m honestly just starting to need to learn about broadcasting and publishing and licensing in much more depth for upcoming projects.

John S Dyson, post: 457083, member: 50354 wrote: Compressors/expanders produce a lot more distortion than people sometimes think

There’s harmonic distortion, phase distortion, waveform modulation, gain staging nonlinearities, never mind if it was a hardware compressor. My personal interest has been more with limiters lately, but I spent years working on my compression technique; it’s probably the most complex tool we use in pro audio.

John S Dyson, post: 457083, member: 50354 wrote: maybe I can help motivate people to buy some ABBA CDs

I’ll keep my eyes open for some; I don’t have any in my collection. This project is interesting to me particularly because I’m archiving all my personal and commercial works, records, tapes, drives, etc. My next project is to make it all accessible from anywhere via my NAS/Plex, which should be painless. After that, I want my website to be able to host virtual recording sessions. I want to minimize the gap between inspiration and fruition. My years at the commercial studios revealed several technological shortcomings in a typical workflow. The goal is remote recording/mixing/writing in as close to real time as possible. I’ve got about 70% of the first basic pro-type computers and drives, about 90% of the software as far as media production goes, and possibly even have streaming covered via OBS. I’m starting with basic hardware, will assess the performance, and scale up to whatever I need and can attain. I lack networking and code knowledge, and have no programming ability, so it’s a process to find out even what I should be investigating. Any input from you on that front is welcome. Nonetheless, trans/de/en-coding seems like a necessary evil for the time being, and it’s clear that data compression can have an audible effect. I think post-mastering, or broadcast prep, is going to sort of be the new era of mastering.

Sorry to veer off. Is it possible for you to upload .wav files instead of mp3s?

John S Dyson Tue, 05/22/2018 - 15:39

Regarding an interest in compressors -- let me know -- I have been thinking about continuing a conversation on the DSP related forum about compressors. They can be very interesting, and there is a LOT to doing a fast attack compressor while keeping it from splatting cr*p all over the place. For a fun exercise, try this: send a big pulsing burst into a compressor at frequency X and listen to the output of the compressor. You'll pretty much hear what you expect. Then, filter out the range of frequencies being pulsed -- and to be fair, make sure that you also filter out the sidebands -- so with a 200Hz pulsing tone, filter out everything below 300-400Hz on the output. You might be surprised at how much nonsense you hear on the output of the compressor. A nice, easy, slow compressor will usually not create very much trouble, and a low compression ratio won't cause much trouble either. But if you have a fast attack and a fast-to-medium release (say in the range of 1msec attack/250msec release), very often a computer based compressor is going to produce some trouble. There are ALL sorts of combinations to try -- doing a clean, fast compressor is kind of tricky. I suspect that compressor experts will do it correctly, but the things that need to be done to implement a GOOD compressor/limiter/expander in software are not intuitively obvious. Slow attack compressors are usually VERY EASY, and every SW example that I have seen is fairly slow. I am talking about the kind of compressor that has a fast attack -- that is really, really tricky. I can probably tell you 99% of the tricks to make any compressor/limiter/expander NOT have the glitches and significant intermod (some intermod is technically necessary, but most compressors put out excess intermod.) SW compressors/limiters/expanders are treacherous.

WRT wav files -- way, way toooo big. I have very limited space on the repo provider... I could possibly fit 1 or 2 flacs on one repo... Flacs are sometimes smaller. Just thinking -- I think 1 wav might be possible -- don't know which one is the most impactful. Maybe one of the famous songs. I would keep that repo private between me and you (like sending it to you individually.)

I also bypassed my build process and put up the latest version of SOS on the repository (give it a try). It is called sossml.mp3, and is a smaller mp3 format at 44100. The EQ is off a little -- it is very preliminary (too much treble -- but it is only about 1.5dB off.) SEE IF THAT WORKS.

John

kmetal Tue, 05/22/2018 - 17:06

I don’t seem to see a play button, and although I can tap the time marker around, no audio amigo.

In addition to fairly modest settings on a compressor, I find that two or more compressors in series and/or parallel leave a lot fewer artifacts. And at what stages you sum the sources makes an immense difference. Around 2014 the new gen conversion was being heard, DAWs went 64-bit, and engineers learned to sum and bus both ITB and hybrid. That was a big turning point in audio, and I believe a noticeable improvement top to bottom as far as recordings go. Digital audio is starting to get the hang of the complexities of the midrange frequencies. It’s not painful anymore, and it's slowly becoming more and more defined. I think m/s processing has also aided the improvement.

1ms attack times leave very little of the signal unprocessed; it really does become the sound of the compressor at that point.

Here’s what it looks like on my phone when I click the link.

Attached files

John S Dyson Tue, 05/22/2018 - 19:15

kmetal, post: 457087, member: 37533 wrote: I don’t seem to see a play button, and although I can tap the time marker around, no audio amigo.

In addition to fairly modest settings on a compressor, I find that two or more compressors in series and/or parallel leave a lot fewer artifacts. And at what stages you sum the sources makes an immense difference. Around 2014 the new gen conversion was being heard, DAWs went 64-bit, and engineers learned to sum and bus both ITB and hybrid. That was a big turning point in audio, and I believe a noticeable improvement top to bottom as far as recordings go. Digital audio is starting to get the hang of the complexities of the midrange frequencies. It’s not painful anymore, and it's slowly becoming more and more defined. I think m/s processing has also aided the improvement.

1ms attack times leave very little of the signal unprocessed; it really does become the sound of the compressor at that point.

Here’s what it looks like on my phone when I click the link.


You might have to download with the down arrow on the top right hand side. The 'play' arrow is below the bottom bar on my computer; it looks like it would be off screen on your phone. The hightail site isn't mine, so you might have to do a download to try it (then delete.) Also, try moving your 'cursor' (finger) around on the screen and see if a prompt of some kind appears. The prompt might be something with the typical 'play' arrow.

Regarding compression -- yes, sidechains can help to bring up low levels while avoiding squishing the higher levels.

Also, the real problem with compression isn't 64-bit precision; it is the filtering of the intermodulation products (or avoiding them as much as possible.) Intermodulation was a problem even way back when with older gain control technologies (I mean the 1930s.)
I suspect that a lot of people will design a compressor, find this nebulous distortion that they can't fix, and do some kind of workaround. IT IS NOT EASY STUFF. A person nowadays (on the computer side) must at least be a DSP expert to make anything work really well. It is best to be both a full electrical engineer AND a DSP person, because the math is not as simple as one might guess. The block diagram and a simple FET compressor CAN work, but will be substandard by today's quality standards.

When I was a kid (back in the late 1960s and early 1970s), I kept getting this splat in the sound -- especially when I made a nice, fast compressor. I found a simple trick, but never really understood the splat until I really studied what was going on. The designer doesn't get a freebie just because they are working with audio -- the fact is the FET compressor acts VERY similarly in a theoretical sense to an RF mixer (that is, something meant to change frequencies -- not linearly mix signals) -- and when you change the gain, you are actually mixing the gain control (in the RF mixer sense) with the audio signal. Imagine the nasty, fast attack signal 'mixing' with the audio -- the answer is: SPLAT!!! Ugly stuff. All simple FET mixers will do that, but there are workarounds in the designs. (Basically, in a FET mixer -- the FET will partially act like the nonlinear mixing element in a superhet receiver.)

Now comes the really nice opto gain control compressor -- hmmmm.... Something happens that helps to minimize the 'SPLAT' naturally in the design. The answer is that the opto itself is relatively slow in changing gain -- that helps to keep the power and the spectrum of the splat minimized. That is probably one of the big quality advantages of an opto compressor. They have other troubles though - like FET compressors, their components aren't really very consistent from part to part -- but I really don't know how bad optos are -- I know that FETS are all over the place. That is where THATCORP comes in with their really stable gain control devices (and others have that character), but they are fast also -- it is important to manage the shape of the gain control waveform in ALL CASES (whoops -- that is one of the tricks.)
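The contrast described above can be sketched numerically (a toy model; the block size, tone, gain step, and smoothing coefficient are all made-up illustrations): an abrupt gain step scatters sideband energy far from the tone, while the same gain trajectory slewed through a one-pole lowpass, opto-style, keeps the splat small and close in.

```python
import cmath
import math

N = 512
TONE = 32  # the test tone sits exactly on DFT bin 32

def splat_fraction(signal):
    """Fraction of spectral energy more than 8 bins away from the tone."""
    spec = [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]
    energy = [abs(s) ** 2 for s in spec]
    near = sum(e for k, e in enumerate(energy)
               if abs(k - TONE) <= 8 or abs(k - (N - TONE)) <= 8)
    return (sum(energy) - near) / sum(energy)

tone = [math.sin(2 * math.pi * TONE * n / N) for n in range(N)]

# An abrupt 12 dB gain dip in the middle of the block -- a 'fast attack':
step_gain = [0.25 if N // 4 <= n < 3 * N // 4 else 1.0 for n in range(N)]

# The same gain target slewed by a one-pole lowpass -- roughly how an
# opto element changes gain instead of jumping:
smooth_gain, state = [], 1.0
for target in step_gain:
    state += 0.02 * (target - state)
    smooth_gain.append(state)

splat_abrupt = splat_fraction([g * x for g, x in zip(step_gain, tone)])
splat_smooth = splat_fraction([g * x for g, x in zip(smooth_gain, tone)])
# The abrupt gain change throws far more energy into distant sidebands
# ('SPLAT') than the slewed gain does.
```

The same idea is behind shaping the gain-control waveform in any technology: the splat power tracks the bandwidth of the control signal.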

These workarounds are what morph a design idea into a professional design.

kmetal Tue, 05/22/2018 - 20:14

John S Dyson, post: 457088, member: 50354 wrote: Also, the real problem with compression isn't 64bit precision, but it is the filtering of the intermodulation products (or avoiding them as much as possible.) Intermodulation was even a problem way back when with older gain control technologies (I mean the 1930's.)

Would you care to elaborate on this more? When I think of modulation I think of a waveform's shape being changed. I'd like to try and understand better what you're saying here. (I'll gladly take links to vids or papers to save typing)

John S Dyson, post: 457088, member: 50354 wrote: That is where THATCORP comes in with their really stable gain control devices (and others have that character), but they are fast also

Dbx comes to mind when you mention THATCORP chips, I think I recall reading something related to it sometime.

John S Dyson, post: 457088, member: 50354 wrote: it is important to manage the shape of the gain control waveform in ALL CASES (

It seems to me proper transient response is one of the huge hurdles in such a fast-attack compression circuit or algorithm. To try and maintain integrity of the sound during such a rapidly changing, dynamic instance seems really daunting. The same concept fascinates me when considering power amp/electrical headroom, or audio power relationships to room acoustics. It seems to me that, in a simplistic way, it's trying to reproduce a ratio of sorts. I spend a lot of time thinking about things I don't understand.

John S Dyson Tue, 05/22/2018 - 20:54

kmetal, post: 457094, member: 37533 wrote: Would you care to elaborate on this more? When I think of modulation I think of a waveform's shape being changed. I'd like to try and understand better what you're saying here. (I'll gladly take links to vids or papers to save typing)

Dbx comes to mind when you mention THATCORP chips, I think I recall reading something related to it sometime.

It seems to me proper transient response is one of the huge hurdles in such a fast-attack compression circuit or algorithm. To try and maintain integrity of the sound during such a rapidly changing, dynamic instance seems really daunting. The same concept fascinates me when considering power amp/electrical headroom, or audio power relationships to room acoustics. It seems to me that, in a simplistic way, it's trying to reproduce a ratio of sorts. I spend a lot of time thinking about things I don't understand.

THATCORP has a DBX history -- it is good stuff. DBX -- not so much in important ways.

Okay, think about this: you have an audio signal (a simple one) sin(xt) (t is time). Now, assume you want to control a signal -- call it sig = sin(xt); you have a gain control signal (frequency not important), but for some stupid reason we want the gain control to be a sine: ctrl = sin(yt). (All signals are sums of sines and cosines -- we are just choosing a degenerate signal.) SO -- let's do gain control... Gain control is multiplying -- like if you want to halve the signal you multiply the signal by 0.5 -- got it?

Now -- assume we have the signal above, 'sig', and we multiply it by the gain control signal 'ctrl'. You'd think that the result would just be a bigger or smaller 'sig', right? In a way it is, but actually you are distorting the shape of 'sig' by multiplying it. You'd expect the output frequency to still be based on just 'x' (like sin(xt) is)... Buuuttt noooo!!! The real output is like this: 0.5 * (cos(xt - yt) - cos(xt + yt)). So, your output frequency isn't just 'x' based (like sin(xt)), but also has the frequency 'y' in it. Those are called 'sidebands' in casual parlance. That is -- the frequencies REALLY DO NOT STAY THE SAME. With too much of that frequency mixing, the ears start perceiving it as distortion. The really bad thing is that not only does the signal mix with the control signal, but the signal can mix with itself -- and that is commonly called intermodulation. In fact, the mix of the gain control and the signal is called 'modulation', but "I" (me) usually call it all 'intermodulation'. Modulation sounds to me like something that one wants; 'intermodulation' is evil :).

So -- even though I gave an imprecise description of what is going on -- you get the idea. It is the same whether you use a sine wave for your control signal or a big mix of sines and cosines (that is, a pulse or fast-attack control signal): all of the frequencies mix together, producing SPLAT!!!
:).
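The product-to-sum identity behind those sidebands can be checked numerically; a minimal sketch (the 440 and 5 rad/s frequencies are arbitrary choices):

```python
import math

def gain_modulate(t, x, y):
    """Multiply a signal sin(x*t) by a 'gain control' signal sin(y*t)."""
    return math.sin(x * t) * math.sin(y * t)

def sideband_form(t, x, y):
    """Equivalent sum of two tones at the difference and sum frequencies."""
    return 0.5 * (math.cos((x - y) * t) - math.cos((x + y) * t))

# The two forms agree at every instant: multiplying by a gain signal
# literally creates the new frequencies (x - y) and (x + y) -- sidebands.
for n in range(1000):
    t = n * 0.001
    assert math.isclose(gain_modulate(t, 440.0, 5.0),
                        sideband_form(t, 440.0, 5.0), abs_tol=1e-9)
```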
John

cyrano Wed, 05/23/2018 - 09:33

This is amazing, John.

I just found the time to listen to the 3 "SOS" versions. The ml version sounds a lot better than both the original and the previous decode.

BTW, "asgoodasnew" seems to have some level difference between original and decoded version.

Need to do further listening, but it seems this could at least solve the problem for my hyperacusis, which led me to put more than half of my CD collection in storage. Maybe I'll re-rip them and be able to listen to them again.

And when I think how a major problem like this remained undiagnosed for this long, I wonder how bad the mp3 format really is. It looks like Fraunhofer did a good job after all, but the record companies just didn't care when they released CD's. Maybe nobody listened? Or wasn't anybody able to discriminate?

John S Dyson Wed, 05/23/2018 - 11:20

cyrano, post: 457099, member: 51139 wrote: This is amazing, John.

I just found the time to listen to the 3 "SOS" versions. The ml version sounds a lot better than both the original and the previous decode.

BTW, "asgoodasnew" seems to have some level difference between original and decoded version.

Need to do further listening, but it seems this could at least solve the problem for my hyperacusis, which led me to put more than half of my CD collection in storage. Maybe I'll re-rip them and be able to listen to them again.

And when I think how a major problem like this remained undiagnosed for this long, I wonder how bad the mp3 format really is. It looks like Fraunhofer did a good job after all, but the record companies just didn't care when they released CD's. Maybe nobody listened? Or wasn't anybody able to discriminate?

short comments first:
First -- thanks for the input. I read/listen to what EVERYONE says and try to understand how to do better -- in all ways about life and my work/hobby. I appreciate and ENJOY the feedback more than you might even imagine :).

I think that the undecoded DolbyA leakage wasn't detected as such because of the relative lack of experience with DolbyA (sure, there were experts out there too, but DolbyA has a different character when processing compressed material, and also a different character with differing HF content.) It is difficult for me to distinguish DolbyA vs. emphasis in some cases -- a lot of times I have to try the decoder to find out, and even then the decoder isn't totally unforgiving. There is still folklore that DolbyA might sound as bad as DBX (or at least that seems like the sentiment to me), but how bad undecoded DolbyA sounds is VERY VERY dependent on the material and the signal level. I have seldom (really, never) heard a DolbyA encoded signal that sounds anywhere nearly as bad as DBX encoding, and if I don't have a DolbyA decoder handy, I will use a 'tone control' of some kind and make myself happy with it (begrudgingly.) If anyone tells me that they used a tone control to make undecoded DBX tolerable -- well, that is a very special person :).

longer comment:
WRT mp3 -- I think that it is amazing, especially since it has lasted so long and really does work reasonably well. I can probably reproduce some examples again where, on the margins AT HIGH BITRATES, opus does slightly better than mp3, but there are probably some cases in the reverse. I use a compressed (in the sense of data storage size) audio format for almost everything, though I try to use a lossless format when I have enough space and bandwidth, and my favorite lossy format is opus when I have the freedom. But I want to be compatible with the 'hightail' service so that people can play the demos without an explicit download step, and they apparently don't support directly playing opus files. So, I normally use the 'standard' preset on lame to produce my examples, and if I want to provide a slightly messed-up version for some reason, then I use the 'medium' preset or custom-brew a worse one yet. I can immediately tell the difference between the two (standard/medium) presets, but even the lesser one is still good. MP3 just cannot keep up with opus at the very high end and the very low end (IMO), where music can be listenably compressed by opus at 64k or less -- and still sound reasonably okay. I am amazed (even though I mathematically understand it) by the compression technology (incl. lossless.)

Back to mp3 -- even though I might seem like I 'love' opus and diss mp3, I actually have a bit more respect for the original mp3 developers, because they did a wonderfully good job on the first super-successful publicly available technology of its kind. I remember when I was working at Bell Labs and was amazed at the demo tapes that we got for the audio compression (like mp3, but more primitive.) It was wonderful to think that a reasonable FM-quality audio stream could be sent down 2B (128k total) channels on an ISDN line -- that seemed impossible at the time!!! It took real vision and real inventiveness to do mp3 as well as the developers & designers did.

The only really difficult-to-notice problem that I have heard with mp3 (all other problems tend to be very, very obvious to me) is that with nearly time-coincident material that almost matches itself (like a delay-and-add shorter than about 10 msec), MP3 will tend to 'disappear' part of one of the mutually similar copies... Opus tends to 'disappear' them less, while -- of course -- flac is perfect.

Is MP3 good enough for my 'scientific' purposes? Depends... If it is something that might be encoded/decoded multiple times in sequence, then MP3 is definitely NOT good enough even at 320 kbps, but neither is opus, even though it will probably do better.
Is MP3 good enough for some of the 'endpoints' of my testing? Well, it is certainly best to start with a true, non-compressed copy -- but most processing is tolerant of MP3/Opus in most ways. I ALWAYS use flac (24bits) when I can -- and when space allows. The 'minor' problem with flac for 'scientific' purposes is that it is not floating point.

John

John S Dyson Wed, 05/23/2018 - 16:42

Slightly off topic (and on topic, both.) I have very temporarily put up a copy of a few files from the Hollywood Records Queen release that is DolbyA encoded. I have a pre-decoded file: Bicycle-orig.mp3, the decoded version WITHOUT ANY SWEETENING: Bicycle.mp3, and a compressed/limited version with my very intermod-minimizing semi-stealth compressor: Bicycle-complim.mp3. Also, there are some ABBA examples, like a finalized (cleaned up and DolbyA decoded) SOS and Waterloo. The ABBA versions are highly processed (in a nice, clean, positive way), and VERY pretty for ABBA. (No crunch in the sound -- Aphex Exciter cr*p removed, DA decoded, the grinding sound removed -- which I just figured out what it is.) Nice and pure ABBA (as pure as it can get.) I might put another ABBA track up also in a few minutes -- hard to choose.

These are going to disappear post-haste -- I shouldn't do this, but for a few hours (I'll probably delete them late tomorrow), it might be a subject of conversation.

Repo: https://spaces.hightail.com/space/ko2yTjF5YY

John

John S Dyson Wed, 05/23/2018 - 17:31

kmetal, post: 457103, member: 37533 wrote: I found the play button, by scrolling down. Who would've thought... On my phone/Bluetooth speakers I would say the Bicycle orig and the decoded sound fairly close, with the decoded version exhibiting less garble. The reverb/delay trail on the vocals also seemed a little clearer.

The difference is admittedly not amazingly extreme, but you should expect less high end, and when the levels drop a little you should notice less level compression. In the case of some recordings, listening very carefully, there should be less hiss. I don't know if Bicycle has hiss per se, however.

Perhaps the most important thing is that the HF/LF balance is changed from dynamic to fixed, so a simple tone control can let you CORRECTLY tune the HF/LF balance and leave it there. Also, the 'density' of the music should decrease a little (that is the compression.) However, much of the Queen stuff is fairly compressed already, so the DolbyA compression isn't quite as obvious...
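Since the decoded HF/LF balance is static, a fixed shelf really is enough to tune to taste; a minimal one-pole high-shelf sketch (the crossover coefficient and HF trim below are arbitrary illustrations, not a recommendation):

```python
def high_shelf(samples, cutoff_coeff=0.05, hf_gain=0.5):
    """One-pole high shelf: split into low/high bands, rescale the highs.

    cutoff_coeff sets the crossover (larger = higher crossover);
    hf_gain is the fixed HF trim, e.g. 0.5 to pull the top down ~6 dB.
    """
    out, lp = [], 0.0
    for x in samples:
        lp += cutoff_coeff * (x - lp)   # one-pole lowpass state
        hp = x - lp                     # complementary highpass
        out.append(lp + hf_gain * hp)   # recombine with trimmed highs
    return out

# Low frequencies pass at unity; content near Nyquist comes out ~hf_gain:
dc  = high_shelf([1.0] * 400)
nyq = high_shelf([1.0 if n % 2 == 0 else -1.0 for n in range(400)])
```

With an undecoded (dynamic) balance, no such fixed setting exists -- which is the point being made.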

I am starting to ramble, but if there is any other kind of music (perhaps more mellow) you'd like to see demoed, the DolbyA difference should be a little more noticeable. I am willing to search through my archives for another example if I know the kind of genre that you are interested in.

If DolbyA were the total devil (like DBX sounds), then it wouldn't have leaked out. But -- it is almost listenable.

John

John S Dyson Wed, 05/23/2018 - 19:33

New version of the decoder again. It works better -- slower, though. Smoother sound, especially at --ai=vhigh and --ai=max. --ai=max works so well I am thinking about wiring it in as the default!!!

filename: da-win-23may2018B.zip
location: https://spaces.hightail.com/space/tjUm4ywtDR

The above location tends to be where I deposit the latest version. I'll use that until I can pull together the resources to get a more permanent repository for the program releases.
Running the program is fairly simple, but it is quite slow (on the order of realtime on my computer.) Earlier versions were VERY fast, but didn't sound as good.

Tell me if you have any troubles. Give me ideas for improvement. I don't know much about GUI programming anymore, so we are stuck with the command line until I can figure out a plugin structure or a good GUI setup that can be used.

John

John S Dyson Fri, 05/25/2018 - 20:24

I have a pointer to a 'religious' experience for ABBA lovers. I am running my repository of ABBA again, and had a Eureka moment with how to process the music.

THIS DEMO WILL BE A RELIGIOUS EXPERIENCE FOR ABBA LOVERS -- and this is from a street copy -- I am amazed at the transformation myself. ZERO compressed sound -- very pretty. Unfortunately it would be illegal to share the entire thing -- I am keeping the demo up for only 36 hours and then must pull it down.

I will document the decoding procedure -- but the main step is the very low distortion DolbyA decoder that I have been talking about. Once you have the formula, you can process your own copy!!! I really hesitate to demo with such complete songs, but THIS NEEDS to be communicated!!!

REALLY -- this is worth listening to: https://spaces.hightail.com/space/UntM4LCdcm

John S Dyson Fri, 05/25/2018 - 20:41

John S Dyson, post: 457010, member: 50354 wrote: There is a new version (again) on the repository -- it is about 20% faster on AVX capable machines. I haven't uploaded a Linux version yet (just Windows), will do Linux soon, but the Linux version always seems to run better than the Windows version anyway. The up-to-date file has 15may in the filename. I should upload the Linux version in a few hours (probably should have done the Linux version first -- Linux is the environment where I do development -- it is faster/more fluid.)
Other than speed, the biggest technical difference is that the --ai=max mode is a bit more aggressive yet at removing the intermod. If you wanna hear how bad the intermod sounds -- try --ai=none --raw=8192 for the switches. It disables all of the special intermod handling. You'll notice that the sound is more 'crisp'. It has that 'digital processing' sound, and it is mostly due to two things: direct mixing of the gain control signal with the audio (the gain control signal has a wide spectrum, so it splats sharpness all over), and also the sidebands mixing with the sample rate -- causing even more aliasing. That kind of sound is even worse because of the wraparound to lower frequencies. My decoder removes by far most of the mathematically unnecessary parts of the intermod (esp. the --ai=high and --ai=max modes.) The default --ai=med mode is less aggressive -- doing fewer additional special operations, so there are a few more intermod products left in. Eventually --ai=high will likely be the default mode. --ai=max is mostly meant for very hard-sounding music that has lots of dynamics and high frequencies, where the minor disadvantages of --ai=max are less bad than the intermod itself.

John

I have uploaded a demo of the output of the decoder (the decoder was the first phase of the processing.) For fans of a 'certain' group -- it will be a religious experience. People who know the group but who are not fans might just become fans. The sound is something that hasn't existed outside of the studio -- I applied some other uncommon techniques to recover the original sound quality. I was myself amazed. It is worth 10 minutes of your time. There are two songs -- I might upload one more, but it must be taken down in a short while -- I intend to document the process that I used so that everybody can benefit.

THIS REALLY IS WORTH A 10MINUTE LISTEN.

https://spaces.hightail.com/space/UntM4LCdcm

