
Greetings -

I truly don't understand cores vs speed (and, for that matter, the influence that cache can/does have).

Let's take two processors, a 65w X2 3.0GHz Regor 250 and a 45w X4 2.3GHz Propus 605e. And let's ignore the $100 price difference.

Is one likely to be better than the other for recording/processing digital audio?

Or is it entirely (or almost entirely) program dependent?

Insight most welcome.

Thanks.

- Richard

Comments

jammster Thu, 12/31/2009 - 10:05

expatCanuck wrote: Is one likely to be better than the other for recording/processing digital audio?

Or is it entirely (or almost entirely) program dependent?

I use a MacBook, 2GHz Core 2 Duo (667MHz RAM), running Logic Studio 7/8/9.

My observation has been that the processor speed mostly governs the throughput, that is, total track count and realtime processing. Keep in mind, a good DAW will let you "save" a track to disk, thereby freeing the processor to do other tasks.

I would guess that processor speed and RAM speed govern the throughput of your recording software, but this depends on how well the software makes use of the CPU.

The multiple cores will really help with all of the realtime processing, especially if you use lots of virtual instruments and/or plugins.

hueseph Thu, 12/31/2009 - 10:40

Those are both AMD processors. The processor speed is per core. That is, the X2 in this example has two cores at 3.0GHz each and the X4 has four cores at 2.3GHz each. So it's roughly like having a 6.0GHz processor for the X2 and a 9.2GHz processor for the X4. That isn't exactly true, though, because performance doesn't scale linearly, so you can't simply add up the core speeds.
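
To put rough numbers on why the speeds don't just add up, one common rule of thumb (Amdahl's law) says the gain from extra cores is capped by whatever part of the work can't run in parallel. A quick sketch in Python, assuming (purely for illustration) that 70% of the workload parallelises:

    def speedup(parallel_fraction, cores):
        # Amdahl's law: the serial part stays serial no matter how many cores you add
        return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

    for cores in (1, 2, 4):
        print(cores, "cores ->", round(speedup(0.7, cores), 2), "x")
    # prints roughly: 1 cores -> 1.0 x, 2 cores -> 1.54 x, 4 cores -> 2.11 x

So on that assumed workload, four cores buys you about 2x, not 4x, which is why adding up core speeds overstates things.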

The thing you gain with more cores is better performance when multitasking. You can also assign affinity to cores; in other words, you can set Word to work with core #1, Excel to work with core #2, and so on.
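
On Windows you'd normally set affinity from Task Manager, but just to show the idea in code, here's a minimal sketch of a process pinning itself to one core. It assumes the third-party psutil package (pip install psutil), and psutil only offers cpu_affinity() on Windows and Linux, not on a Mac:

    import psutil  # third-party: pip install psutil

    p = psutil.Process()                       # the current process
    print("allowed cores:", p.cpu_affinity())  # e.g. [0, 1, 2, 3]
    p.cpu_affinity([0])                        # pin this process to core #0 only
    print("now pinned to:", p.cpu_affinity())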

The cores will work together providing that the program supports it.

Cache is a means of preventing bottlenecking. It's a temporary space where the processor holds data while it is processing, so the more cache, the smoother the throughput. A faster processor affects this as well, since the faster a processor can process information, the faster the throughput.
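
If you want to see roughly what losing the benefit of cache costs, here's a crude illustration using Python and NumPy: the same two million values are summed twice, once from contiguous memory and once spread far enough apart that nearly every access has to fetch a fresh cache line from RAM. Exact timings vary by machine; only the gap between them matters.

    import numpy as np
    import time

    n = 2_000_000
    contig = np.ones(n)                  # 2 million doubles packed together in memory
    strided = np.ones(n * 16)[::16]      # the same 2 million doubles, but 128 bytes apart (~256MB backing array)

    t0 = time.perf_counter()
    contig.sum()
    t1 = time.perf_counter()
    strided.sum()
    t2 = time.perf_counter()
    print(f"contiguous sum: {t1 - t0:.4f} s")
    print(f"strided sum:    {t2 - t1:.4f} s")

The strided version drags in whole cache lines just to use one value from each, so the processor spends most of its time waiting on RAM, exactly the bottlenecking described above.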

Then factor in how the processor actually processes the information and there's a whole new can of worms. That's where Intel versus AMD comes into play.

TheJackAttack Thu, 12/31/2009 - 10:57

http://www.cpubenchmark.net/index.php

Processor speed ratings are very misleading; real performance depends not only on the number of cores but on the architecture as well. There are a couple of Core 2 Duos, for instance, that outperform most quad-core processors. Then you throw in the motherboard and hope it was well designed. At this date (Dec 2009) Intel is king, though it was only a short five or so years ago that AMD had those bragging rights. The future, of course, remains to be seen.

djmukilteo Thu, 12/31/2009 - 11:41

expatCanuck wrote: Let's take two processors, a 65w X2 3.0GHz Regor 250 and a 45w X4 2.3GHz Propus 605e. [...] Is one likely to be better than the other for recording/processing digital audio? Or is it entirely (or almost entirely) program dependent?

All computer CPUs do basically the same thing when they are operating.
They move data (16, 32 or 64 bits at a time) back and forth between memory (RAM) and various input/output devices like the hard drive, printer, video, mouse and keyboard.

Everything a CPU does has to be moved into and out of RAM. The CPU "cache" is like a small, very fast RAM inside the CPU: it temporarily holds data from RAM until the CPU is ready to move it into its internal registers and execute the instructions contained in that data.

The CPU runs at a fixed clock speed (e.g. 2.3GHz), and every instruction it executes takes a certain number of clock cycles to complete; some instructions need 4 cycles, some need many, many more. Everything in a CPU takes time, and managing that time is its primary function. It may only be nanoseconds, but it's time nonetheless, and the CPU is very good at managing it.
The faster the clock speed, the faster those instructions are executed and completed (time).
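
Some back-of-the-envelope arithmetic to make those nanoseconds concrete (the 100ns figure for a trip out to RAM is just an assumed ballpark, not a spec for any particular machine):

    clock_hz = 2.3e9                # a 2.3GHz core
    cycle = 1 / clock_hz            # length of one clock tick, in seconds
    print(f"one cycle:             {cycle * 1e9:.2f} ns")                 # ~0.43 ns
    print(f"a 4-cycle instruction: {4 * cycle * 1e9:.2f} ns")             # ~1.74 ns
    print(f"a 100 ns wait on RAM:  {100e-9 / cycle:.0f} wasted cycles")   # ~230 cycles

Which is the whole reason the cache exists: a couple of hundred wasted cycles every time you go out to RAM adds up very quickly.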

All the input/output components attached to RAM sit on a big bus and are told when they can read or write data onto the bus and into RAM, and when they have to wait (time). The CPU then decides when, and from which device, it takes data from RAM, caches it and begins executing the code (time). The hard drive has a certain speed (time), RAM has a certain speed (more time), so you have a cache to temporarily hold data coming from or going to these devices (managing time).
So what I've just described is a single-core CPU.

Now we move to dual-core or multi-core processors. Each core can do what was described above more or less independently of the others; the cores are interconnected internally over very short paths at very high speeds, which lets time be managed even faster and (sometimes) much more efficiently. With a multi-core CPU you now have separate cores to handle different areas of RAM, different devices, different processes and different applications (multi-time).
Four cores start to become very fast for processing complex applications and running your hardware. Lots of fast RAM helps keep up with those fast cores; a motherboard data bus that is nice and wide, 64 or 128 bits (lanes like a freeway), and fast, say 2GHz, speeds things up as well; and larger CPU caches help each core get its data in and out of RAM quickly and smoothly with a minimum of delay (time, time, time = throughput).
Hope that helps

Codemonkey Sat, 01/02/2010 - 15:17

Prepare for a long post. This is from the point of view of a programmer and won't directly answer your question, but it should clear up things a little (with luck):

DEFINITIONS
Software does Tasks. Each Task needs to run on a CPU Core. If the software splits up the Tasks into separate "Threads" then you can run one Thread per Core. If the software doesn't split up the Tasks into Threads, you get no benefit from multiple Cores**

**there's a small benefit because Windows can do its own Tasks separately from the audio software.

For audio, multi-Threading the Tasks is doable without making the program unstable. Therefore multiple Cores can help get a performance gain out of doing the many Tasks. However it depends on the software.

KEY POINTS
The faster the Core is, the faster the Task gets done.
The more Cores there are, the more Threads can be in progress at once.
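
To make those two points concrete, here's a minimal sketch of the "one worker per Core" idea in Python. The names (render_track, the fake per-track work) are invented stand-ins, not taken from any real DAW:

    import os
    from concurrent.futures import ProcessPoolExecutor

    def render_track(track_id):
        # stand-in for the real per-track work (effects and so on)
        return track_id, sum(i * i for i in range(200_000))

    if __name__ == "__main__":
        tracks = list(range(16))              # pretend the song has 16 tracks
        workers = os.cpu_count() or 1         # one worker per Core
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(render_track, tracks))
        print(f"rendered {len(results)} tracks using {workers} cores")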

PROBLEMS
Tasks, after being completed, must be synchronised. Imagine three people doing separate bits of work; the synchronising is when they stop and talk to each other about the work done that day (i.e. meetings).

So, the Tasks have Overhead. The Overhead can be complex depending on the Task and number of Threads.

Since it's not economically feasible to build and run a single 8GHz Core which would do all your Tasks quickly, it makes more sense to have 4 x 2GHz Cores (4 is overkill for small audio projects; 2 is generally seen as sufficient, but I don't build audio PCs). So now you can do up to 4 Tasks simultaneously. The Overhead is what stops a 2GHz Dual Core being the same as a 4GHz Single Core: while Threads are synchronising they aren't doing Tasks (which admittedly does free up the Cores for other Threads, such as Windows' own Tasks).

THREADS vs CORES
Only one Thread can be running on one Core at a time. Therefore running more Threads than you have Cores can introduce too much Overhead (slowdowns caused by the talking time, i.e. meetings), and you gain almost nothing because the extra Threads have to wait for an earlier Thread to finish.
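
You can check that claim with a crude experiment: hand a pool of worker processes some purely CPU-bound jobs, once with as many workers as Cores and once with four times as many. On most machines the second run is no faster, and often a touch slower (exact numbers depend entirely on your machine):

    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy(_):
        # purely CPU-bound stand-in for a Task
        return sum(i * i for i in range(500_000))

    def run_with(workers, jobs=32):
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(busy, range(jobs)))
        return time.perf_counter() - start

    if __name__ == "__main__":
        cores = os.cpu_count() or 1
        print(f"{cores} workers:  {run_with(cores):.2f} s")
        print(f"{cores * 4} workers: {run_with(cores * 4):.2f} s")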

Some systems, I'm sure, will let another Thread run its Tasks if the first Thread is waiting on a hard disk or something else that takes a long time. I'm not 100% sure how often this happens.

CONCLUSION
Multi-Threads => several Tasks at once, with more overhead.
Single-Threads => one Task at once, with less overhead.

EXTRA INFO
A single Task is basically: (1) read the audio data from the appropriate part of the File for the small timeslice we're going to process, (2) pass it through each Effect on the Track.
Once you've got all the tracks processed for that small timeslice, you can merge them. The Task so far can be done in multi-Threads. But now you need to synchronise (Overhead) and mix the tracks down to the master, and send that to the output device.

Depending on the number of Effects and Cores, and the way the Software is designed to work, there could be any number of Threads. Personally I'd have one Thread per Core, each Thread processes the Tasks for a share of all the Tracks in the song, then mix them.
Simpler software designs might have one Thread do every Task one after the other, which is the old design when Pentium 4s were all the rage.
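
Purely to illustrate that per-timeslice flow (this is a toy sketch, not how any particular DAW does it; the block size and the dummy gain Effect are invented, and Python's GIL means the threads here only show the structure, not a real speed gain):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    BLOCK = 512                                    # samples per timeslice (assumed)

    def process_track(block, effects):
        out = block
        for fx in effects:                         # pass the block through each Effect
            out = fx(out)
        return out

    def render_timeslice(track_blocks, track_effects, pool):
        # one Task per Track; iterating the results is the synchronisation point
        processed = pool.map(process_track, track_blocks, track_effects)
        return sum(processed)                      # waits for every Track's block, then mixes to the master

    if __name__ == "__main__":
        tracks = [np.random.randn(BLOCK) * 0.1 for _ in range(8)]   # 8 fake Tracks
        effects = [[lambda x: x * 0.5] for _ in range(8)]           # one dummy gain Effect each
        with ThreadPoolExecutor() as pool:
            master = render_timeslice(tracks, effects, pool)
        print("mixed block of", master.shape[0], "samples")

Consuming the map() results (inside the sum) is the synchronisation/Overhead step, and the sum itself is the mix-down to the master.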

Unfortunately I know nothing about what software uses what designs, or I could tell you that if you use software X then get CPU Y, or software Z then you are better to get CPU Q.

Whew... hello? Hello? OI! WAKE UP! And remind me never to become a lecturer.

anonymous Tue, 01/05/2010 - 13:19

Codemonkey wrote: Prepare for a long post. This is from the point of view of a programmer and won't directly answer your question, but it should clear up things a little (with luck): [...]

TMI!!!

anonymous Tue, 01/05/2010 - 13:25

expatCanuck wrote: Let's take two processors, a 65w X2 3.0GHz Regor 250 and a 45w X4 2.3GHz Propus 605e. [...] Is one likely to be better than the other for recording/processing digital audio? Or is it entirely (or almost entirely) program dependent?

I don't even know what those CPUs are...

A quad core with a lower CPU frequency (2.3GHz) will usually smoke a dual core with a higher frequency (3.0GHz).

Obviously, if you are using primitive software that can't see multiple cores, the dual core would win.

Most DAW programs out there (if not all) support multi-core processors, in which case raw CPU frequency loses out to the quad core, much like a single core can't compete with a dual core CPU.

It would help if we knew what you are looking to buy based on these figures....

This is just personal experience/preference talking, but:
I wouldn't touch AMD.

Intel i7 is the way to go.