I've kind of been wondering: with processors running 2, 4, even 8 threads, which processor is the most powerful on a per-thread basis? As in, if you limited all the processors to only use one thread at a time, which one would come out on top?
Aren't they using the same cores for each processor line?
Anyways, do they still have the same cache and clock speed?
I guess a better way to put it: let's say you have a single, very CPU-intensive program that will only ever use one thread; it has no multithreading whatsoever. Ignoring the need to run background processes and an OS, which chip would run that program the best? (I'm trying to come up with a scenario where you would have one thread at 100% and all others at 0%.)
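For concreteness, the kind of workload being described is something like this toy single-threaded loop (purely illustrative, not any real benchmark):

```python
import math

def busy_work(n):
    # Purely CPU-bound: no I/O, no threads, just one core
    # grinding through arithmetic in a tight loop.
    total = 0.0
    for i in range(1, n + 1):
        total += math.sqrt(i)
    return total

# While this runs, one thread sits at 100% and every other
# core/thread on the chip idles at 0%.
print(busy_work(1_000_000))
```

Extra cores buy you nothing here; only per-core speed (clocks, IPC, cache) moves the needle.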
For a single thread/single core, probably the i7 Extreme, overclocked. Maybe the highest-end Xeon.
So is cost the only reason it isn't suggested here very often? I would assume that if each core could do more, faster, it would be better for gaming, since most games still only use 1 or 2 threads.
Gaming is another beast entirely.
Cost-to-performance, and how small the gap is between it and the lower-end processors in gaming, is why we don't recommend it.
For cost-to-performance, the i5 and i3 are still king. They give similar performance to the i7 Extreme for hundreds of dollars less. A very mild overclock (~4.0 GHz) on a 2500K is on par with the i7 Extreme at stock.
Since most builds are for gaming, you don't need an i7 Extreme; gaming is NOT CPU-intensive in the least.
It would NOT be better for gaming, because games use 1-2 cores/threads at most, RARELY 4. Because of this, you look at per-core performance, in which case the i7 Extreme is nearly identical to the i3 for gaming; the only differences are cache and clock speed, which have a negligible impact on gaming performance.
The only reason to get an i7 Extreme, or an i7 in general, is if you do video rendering/3D rendering for a job, though the Xeons would be better for those cases. If you're number crunching or compiling code, go FX, no question.
Really, the i7s serve no purpose right now. There is no reason to get one at all. The i3 and i5 are better value for gaming at the same performance, the Xeons are better for 3D/video rendering at the same performance and are usually cheaper, and the FX chips are not only FAR cheaper but absolutely blow all the i*-series processors out of the water for number crunching, calculations, and compiling code.
It's not about cost; it's about a lot of factors, and the i7 comes out at the bottom of the list on all of them.
Except that just because a CPU is good at number crunching does not mean it's good at running games.
That's the flaw of benchmarks like SuperPi or y-cruncher. The best way to compare CPUs is overall, and against what the person is actually going to use them for.
I mean, here is a benchmark that just tests number-crunching ability.
OK, the 1100T is the best CPU, end of story.
It is just as flawed to look at a handful of games.
See, once again the 1100T reigns supreme over the i5-2500K.
Obviously the 1100T is not the best, and just looking at single-thread performance is a flawed idea too.
Well, the 2500 is definitely an upgrade for anyone with an 1100T.
Wait, guess it's not.
You just can't base performance on one metric, the same way you don't buy a house just because it has a lot of bathrooms.
Also, for people who don't remember: during the Pentium 4 era, AMD was slapping Intel around in power consumption and performance, with better technology overall. However, the P4 turned things around if you used SSE2 instructions, and a lot of benchmarks were optimized to use them; most software, however, was not. The P4 was technically faster at that, but in the real world software is not perfect.
This is really confusing me now. I thought that, when you got down to it, processors did nothing but crunch numbers at the very basic level. Why does it matter whether those numbers give you the next digit of pi, or your speed in the next frame of a game?
Because when you are doing a lot of small, repetitive calculations, the whole working set can sit in the L1 cache, so it doesn't stress the memory subsystem or the branch predictor, and the micro-op cache means the decoders don't get stressed as much either. This might not be exact (I'm not a chip engineer, and I'm going from pure memory, so look it up if you care), but that's the gist of it. You also have software that uses different instruction-set extensions to talk to the CPU: SSE, AVX, XOP, MMX.
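You can see a shadow of the cache effect even from a high-level language. This sketch sums the same list twice, once walking it in order and once in a shuffled order; the work and the answer are identical, but the shuffled walk usually takes longer because it misses in cache far more (the gap is much starker in C, since the interpreter's overhead muffles it in Python):

```python
import random
import time

N = 2_000_000
data = list(range(N))

seq_order = list(range(N))   # cache-friendly: front to back
rand_order = seq_order[:]
random.shuffle(rand_order)   # same indices, cache-hostile order

def total(order):
    s = 0
    for i in order:
        s += data[i]
    return s

t0 = time.perf_counter()
a = total(seq_order)
t1 = time.perf_counter()
b = total(rand_order)
t2 = time.perf_counter()

# Same result either way; only the memory-access pattern changed.
print(f"sequential: {t1 - t0:.3f}s  shuffled: {t2 - t1:.3f}s")
```

Exact timings depend entirely on the machine, so treat the numbers as illustrative only.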
Why is an intro to programming class trying to explain to you how hardware works?
Most programmers don't know anything about hardware.
Most hardware gurus don't know anything about programming.
Honestly, I have no real clue. I think he was trying to show us why programs should be efficient, or something; he never really gave a reason why. I was mostly taking the class as another elective.
Apparently someone took the time to make an animation about the instructor. And yes, some of the stuff we were taught was just flat out wrong. (and he really did hate questions)
Currently, I really think the i7 EE is a super-niche product, for consumers at least. I mean, even for video editing the 3770K might prove almost as fast thanks to Quick Sync...
Quick Sync has some image-quality issues. For, say, YouTube videos, who cares, but for professionals it matters.
http://pcpartpicker.com/user/SteevyT/saved/21PI
i3, if you count it. Or the Pentium G860.
4 cores?
i7, if you count it. Or the i5-3570K.
8 cores?
Not sure.
Now limit each processor to running only one thread: which one of them would be the most powerful?
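You can actually impose that limit in software. On Linux, a process can be pinned to a single core with `os.sched_setaffinity`, so the scheduler can never run it on more than one hardware thread at a time; a minimal sketch:

```python
import os

# Linux-only: restrict this process to CPU 0. After this call the
# OS will only ever schedule our threads on that one core.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})          # pid 0 = this process
    print(os.sched_getaffinity(0))        # the allowed-CPU set
else:
    print("sched_setaffinity not available on this platform")
```

With every chip pinned like this, the comparison reduces to pure per-core performance.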
Game? Math? Number crunching? Rendering?
There isn't just one kind of "CPU intensive".
Sorry, I assumed that everything the CPU did was based around math. How about calculating digits of pi? Seems like a nice simple starting point.
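As a toy illustration of "calculating pi" as a single-threaded workload (this is not how SuperPi or any serious pi program works; they use far faster algorithms), here's a Leibniz-series approximation:

```python
def leibniz_pi(terms):
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    # Converges painfully slowly, but it's dead simple and serial.
    s = 0.0
    sign = 1.0
    for k in range(terms):
        s += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * s

print(leibniz_pi(1_000_000))  # roughly 3.14159...
```

A loop like this stresses almost nothing but the core's arithmetic units, which is exactly why it's a poor stand-in for how a game loads a CPU.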
Because that isn't how a CPU works. Not everything is number crunching.
And now I'm seeing why that intro to programming class was such a joke among the actual CS majors....
Edit: Am I thinking of an ALU?