I was wondering if an i9-9900K or an i9-9980XE gets you a higher framerate in the game, their price difference aside.

It would be combined with an RTX 2080 Ti and 128 GB of RAM.
It would completely depend on what you plan on doing settings-wise with the i9-9980XE. Out of the box it would most likely be beaten by the i9-9900K, because Java Minecraft runs on a single core and generally speaking benefits more from higher clock speeds; Minecraft would not benefit from the additional cores of the i9-9980XE. You could in theory tweak the settings of the i9-9980XE to change that, though.

http://www.cpu-world.com/Compare/199/Intel_Core_i9_Extreme_Edition_i9-9980XE_vs_Intel_Core_i9_i9-9900K.html#:~:text=The%20Intel%20i9%2D9900K%20is,Intel%20Core%20i9%2D9980XE%20microprocessor.
This is far more than enough processor, graphics card, and RAM for vanilla Minecraft... with high-end shaders, large render distances, and HD resource packs. You could probably throw in a bunch of not-well-written datapacks as well.

You would also be able to easily play all the largest Forge modpacks with only moderate stuttering "out of the box" (with some tinkering I suppose you could get better results), while on the Fabric side there'd be essentially no loading time at all (seriously, it probably loads even faster than TMC's mod, and he's already fixed 90% of the things wrong with vanilla) even if you added every single Fabric mod ever put out (minus those that are incompatible, of course).
Not since 1.3.1, and especially since 1.8, even when only referring to the client in multiplayer (singleplayer additionally runs an integrated server on a separate thread, plus a file I/O thread and several other minor threads). And of course there is the JVM itself, including garbage collection, the main performance bottleneck for Java applications; unless you are using REALLY outdated JVM arguments or an ancient Java version, a parallel/concurrent GC has been the default for a long time (you can check what your own JVM uses with the snippet after the quote below):
Rendering & performance
Each dimension (Overworld, Nether, End) runs on a separate thread.
This makes the performance in one dimension independent of the performance in all others. Chunk rendering and chunk rebuilds are now multi-threaded to speed them up.
Mob pathfinding is now multi-threaded, to alleviate previous slow-downs associated with it.
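(For what it's worth, you don't have to take the GC part on faith: the standard java.lang.management API will tell you which collectors your JVM is actually running. A tiny standalone snippet, nothing Minecraft-specific; on Java 8 the default Parallel GC shows up as "PS Scavenge"/"PS MarkSweep", and on Java 9+ the concurrent G1 shows up as "G1 Young Generation"/"G1 Old Generation":)

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcCheck {
        public static void main(String[] args) {
            // Lists every collector the running JVM uses, plus how much
            // time it has spent collecting so far.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + " -> " + gc.getCollectionCount()
                        + " collections, " + gc.getCollectionTime() + " ms total");
            }
        }
    }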
Likewise, the creator of Optifine says you need at least a quad-core CPU to get decent performance on 1.8+, and that even older versions run best on a quad-core CPU (my old computer had a dual-core CPU, and Optifine's multi-core rendering setting caused lag spikes similar to vanilla 1.8+; unfortunately, Optifine removed this setting in 1.8, so it always uses the multithreaded option - then again, the old default would probably give even worse performance now due to how much more expensive chunk updates have become, what with the complexity of block states and BlockPos and all that):
The multithreaded chunk loading is crude and it will need a lot of optimizations in order to behave properly. Currently it works best with multi-core systems, quad-core is optimal, dual-core suffers a bit and single-core CPUs are practically doomed with vanilla. Lag spikes are present with all types of CPU.
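(To make "multithreaded chunk loading" concrete, here is a minimal sketch of the pattern being described: rebuilds run on a worker pool, and the render thread only drains finished meshes once per frame. Every type here - the int chunk ids, the byte[] "mesh" - is a made-up stand-in, not Optifine's or Mojang's actual code:)

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ChunkMesher {
        // Hypothetical stand-in: a finished mesh is just a chunk id plus vertex bytes.
        record Mesh(int chunkId, byte[] vertexData) {}

        // Leave one core free for the render/game thread.
        private final ExecutorService workers = Executors.newFixedThreadPool(
                Math.max(1, Runtime.getRuntime().availableProcessors() - 1));
        private final ConcurrentLinkedQueue<Mesh> finished = new ConcurrentLinkedQueue<>();

        // Called whenever a block change dirties a chunk: the rebuild runs off-thread.
        void submitRebuild(int chunkId) {
            workers.submit(() -> {
                byte[] data = new byte[16 * 16 * 16]; // placeholder for real geometry building
                finished.add(new Mesh(chunkId, data));
            });
        }

        // Called once per frame on the render thread: only uploads completed meshes,
        // so a slow rebuild never stalls the frame by itself.
        void drainToGpu() {
            Mesh m;
            while ((m = finished.poll()) != null) {
                System.out.println("uploading mesh for chunk " + m.chunkId()); // stand-in for glBufferData
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ChunkMesher mesher = new ChunkMesher();
            for (int i = 0; i < 8; i++) mesher.submitRebuild(i);
            Thread.sleep(100); // toy synchronization: let the workers finish
            mesher.drainToGpu();
            mesher.workers.shutdown();
        }
    }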
That said, as mentioned above, vanilla's multithreaded implementation can't be that good, since chunk updates still cause lag spikes, which shouldn't happen if they were truly independent of the main render thread (which itself should only be telling the GPU what to render; it also possibly only multithreads chunk updates, as opposed to entity rendering, which is expensive since every entity's entire model must be re-rendered every frame):

MC-123584 Updating blocks creates lag spikes proportional to geometry in chunk section
This is not a new issue, as suggested by the affected versions, but rendering has become more expensive due to internal changes; for example, even my highly optimized rendering code (single-threaded) is highly bottlenecked by OpenGL itself, where it takes about 25 milliseconds to render a chunk section of leaves on Fancy, which is by itself a lag spike down to 40 FPS, from 1000+ at 8 chunks and 500+ at 16 chunks (even a worst-case situation like a Mega Forest still gets around 375 at max settings). Only about 10% of that time is in Java code; the other 90% is in calls to OpenGL to compile the data, and this is probably where 1.8+ multithreads things, but the Java code can still take too long, causing lag spikes:
I noticed in 1.12.x that getBlockState (in World, Chunk, and ChunkCache) accounted for a substantial amount of CPU overhead. I developed a block state cache (a write-through, direct-mapped cache using a specially tuned hash to map from coordinates to cache entries), which made a HUGE difference. That plus a BlockPos neighbor cache literally doubled Minecraft performance for the test cases we tried. The laggiest one was TT's jungle tree farm. In vanilla 1.12.2, it starts out at about 18 TPS, although after some JIT-ing, it rises to about 35 TPS (based on the reciprocal of the time spent not sleeping between ticks). These caches increase it to about 70 TPS.

In 1.13, I knew that getBlockState was going to get more expensive, at least because of the extra liquid layer, but I had my concerns about block numbers being fully abstracted (the flattening and all that). This was already a performance problem for 1.12.x. It's going to be really serious in 1.13.
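(For anyone curious what that looks like, here's a toy version of the idea in the quote - a direct-mapped, write-through cache keyed on packed coordinates. The packing and hash below are my own illustrative choices, not the actual mod's code, and the HashMap stands in for real chunk storage:)

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    public class BlockStateCache {
        private static final int SIZE = 1 << 12;         // 4096 entries, power of two
        private final long[] keys = new long[SIZE];
        private final int[] states = new int[SIZE];
        private final Map<Long, Integer> world = new HashMap<>(); // stand-in for chunk storage

        { Arrays.fill(keys, Long.MIN_VALUE); }           // sentinel for "empty slot"

        // Pack x/z into 26 bits each and y into 12, similar in spirit to vanilla's BlockPos.asLong.
        private static long pack(int x, int y, int z) {
            return ((long) x & 0x3FFFFFF) << 38 | ((long) z & 0x3FFFFFF) << 12 | (y & 0xFFF);
        }

        // A cheap multiplicative hash picks the one slot a key can live in (direct-mapped).
        private static int slot(long key) {
            return (int) ((key * 0x9E3779B97F4A7C15L) >>> 52) & (SIZE - 1);
        }

        public int getBlockState(int x, int y, int z) {
            long key = pack(x, y, z);
            int i = slot(key);
            if (keys[i] == key) return states[i];        // hit: no world lookup at all
            int state = world.getOrDefault(key, 0);      // miss: fall back to storage
            keys[i] = key;                               // then fill the slot
            states[i] = state;
            return state;
        }

        public void setBlockState(int x, int y, int z, int state) {
            long key = pack(x, y, z);
            world.put(key, state);                       // write-through: storage is updated first
            int i = slot(key);
            keys[i] = key;                               // keep the cache coherent
            states[i] = state;
        }

        public static void main(String[] args) {
            BlockStateCache c = new BlockStateCache();
            c.setBlockState(10, 64, -7, 42);
            System.out.println(c.getBlockState(10, 64, -7)); // 42, served from the cache
        }
    }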
Less-than-quad-core CPUs still work fine; my Core 2 Duo E8600 and Core i3 4010U (okay, it has Hyper-Threading for four threads, but it's also a paltry 1.7 GHz low-wattage dual-core laptop CPU) play even modern versions relatively fine. They won't meet the demands I have when playing on my desktop with very far render distances, obviously (though the GPU also comes into play there), but they work for more "normal" expected cases. The game has indeed made great strides in taking advantage of extra cores for certain things (other dimensions, chunk loading, etc., I guess?), but the main gameplay loop is still ultimately going to be determined more by raw performance per core.

That said, I had an overclocked Core i5 2500K. It was aged but still had good IPC. I recently upgraded to a Ryzen 7 3700X (which I am currently not overclocking). While it's faster per core, it's not worlds faster in IPC than the older CPU, since Intel was just so good there back then (and IPC has climbed only slowly since Sandy Bridge). Yet the improvement in chunk loading speed and in frame rate maximum/smoothness at higher render distances (over 32... when that works, so only older versions) surprised and impressed me. If modern versions actually rendered beyond 32 with consistency, I think 48 chunks would be realistically playable (at least, it was in 1.10.2 when I tested, where the older CPU started giving up more and more past around 36 or 38, to maybe 42 at best; some rough numbers on that below).

That said, any modern Ryzen (especially Zen 2) or recent Intel (Skylake onward) quad core (ideally at good clock speeds and with SMT) will be absolutely fine for this game even if you're pushing it. If you're not, you can get by on far less.
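For a rough sense of why those high render distances bite: the client keeps roughly a (2r + 1) by (2r + 1) square of chunk columns loaded around you at render distance r (the exact area varies a bit by version). So 32 chunks is about 65 x 65 = 4,225 columns, while 48 is about 97 x 97 = 9,409 - more than double the chunks to load, mesh, and render for a 50% increase in distance. That quadratic growth is why the last few notches of the slider cost so much.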
Multicore processors are still better for Minecraft; don't forget there are other applications running in the background on your PC, like processes for your operating system and other software you have installed. And lag spikes are always a bad thing; you should try to maintain 60+ FPS at all times, imo.

But dual-core Intel CPUs with Hyper-Threading should still work fine, such as the one Princess_Garnet mentioned. AMD Ryzen is overkill for this type of game, and there are plenty of older-generation processors that'll run this game without issue, provided you don't go too crazy with the render distance. With mods and/or render distances over 8-12 chunks it gets more complicated, though, and the hardware demands increase if you enable either of those or run a multiplayer server. As a general rule, you should meet or exceed the recommended system requirements for the best results.
The OP was asking specifically about the two processors listed. Minecraft is not going to use the i9-9980XE's 18 cores/36 threads, or the i9-9900K's 8 cores/16 threads for that matter, so the higher frequency of the i9-9900K is most likely going to win out.
In which case you're right: since the game isn't going to use that many cores, the one with the higher frequency within the same product line is going to win out. Even with the background processes factored in, the i9-9900K's 8 cores and 16 threads will be plenty.

https://ark.intel.com/content/www/us/en/ark/products/186605/intel-core-i9-9900k-processor-16m-cache-up-to-5-00-ghz.html
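As rough back-of-the-envelope math (both chips use Skylake-derived cores, so per-clock performance should be close): the i9-9900K turbos to 5.0 GHz, while the i9-9980XE tops out at roughly 4.4-4.5 GHz. For a game bound by one thread you'd therefore expect something like 5.0 / 4.5 ≈ 1.11 - the 9900K coming out around 10% ahead, at a fraction of the price.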
That's what most prior replies are supporting: that excessive cores are going to do little, and in the case of Minecraft that threshold is very low, as the game seems largely tied up by one thread/main gameplay loop or whatever you want to call it. For example, and this is far from all-encompassing, here's CPU utilization for me at this given moment (this is 1.14 with OptiFine at a render distance of 32 in the overworld, shortly after loading the world and playing for a bit).

But wait a minute, doesn't that lend credence to needing four (or more) cores (or at least threads)? No, take a closer look. While that LOOKS like four heavily utilized threads, each spike on one of the four apparently loaded cores comes only when there's a dip on the other three; what is happening here is one thread being shuffled around, but it's still one thread (it should also be noted, for this and other reasons, that Windows Task Manager is good for getting a general idea but is far from a precise measuring tool for this; see the snippet after this post for a more direct measurement). This CPU has SMT too, so every other graph isn't a separate physical core but is to be taken with the graph before it; even if all four of those heavier-looking graphs were fully loaded constantly, that'd still potentially be "only" two threads as opposed to, and certainly fewer than, four anyway.

Again, this one graph isn't an all-encompassing claim that Minecraft can never enjoy the resources of additional cores and threads; certain aspects might briefly enjoy the extra resources (LAN play with extra players/chunks involved? Having the other dimensions loaded? Chunk loading?), but my experience through the years is that it is largely ONE thread that ultimately limits and impacts Minecraft performance the most, much as the example above shows. So, yes, a faster architecture paired with a higher clock speed will win out over extra cores/threads in the case of Minecraft (something like a Core i5 10600K would probably be the "cheapest best" thing for Minecraft right now?), and since even the budget offerings of current CPUs (not counting APUs or laptop chips) are now mostly quad cores with SMT, it's largely something you shouldn't even factor in for this game, as you'll probably always have more than enough (and if not, you're likely dealing with an older and/or lower-clocked dual core or something, in which case the lack of cores won't even be the most pressing limitation).
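(If anyone wants a more precise look than Task Manager, the JVM can report per-thread CPU time through the standard ThreadMXBean API. The toy snippet below measures the JVM it runs in - to inspect Minecraft itself you'd use a profiler like VisualVM, which gathers the same data over JMX - and a single hot thread stands out clearly no matter how the OS shuffles it across cores:)

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadCpuTop {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean tm = ManagementFactory.getThreadMXBean();
            if (!tm.isThreadCpuTimeSupported()) return;
            // Sample every thread's accumulated CPU time twice, one second apart.
            long[] ids = tm.getAllThreadIds();
            long[] before = new long[ids.length];
            for (int i = 0; i < ids.length; i++) before[i] = tm.getThreadCpuTime(ids[i]);
            Thread.sleep(1000);
            for (int i = 0; i < ids.length; i++) {
                long after = tm.getThreadCpuTime(ids[i]);
                ThreadInfo info = tm.getThreadInfo(ids[i]);
                if (info == null || after < 0 || before[i] < 0) continue; // thread gone/unsupported
                double pct = (after - before[i]) / 1e9 * 100.0; // ns of CPU per 1 s of wall clock
                if (pct > 1) System.out.printf("%s: %.0f%% of one core%n", info.getThreadName(), pct);
            }
        }
    }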
The extra money that would have been spent on a CPU with extra cores is better invested in a dedicated GPU of some sort; even a budget one that is not too old should be capable of running Minecraft. A GTX 1050 or GTX 1660 will work fine. Next to worry about is the solid state drive, and the RAM to go along with it. As you figured out already, your Ryzen 7 3700X is massively overkill, but you probably use your computer for more demanding things. The AMD Ryzen 3 3200G, with its 4 cores, would be a great low-priced alternative to the i9-9900K for Minecraft if building from scratch.

https://www.amazon.co.uk/AMD-Ryzen-Processor-X570-Plus-Motherboard/dp/B07X12HMY3/ref=pd_rhf_gw_p_img_1?_encoding=UTF8&refRID=GKT0CWWEH30K5RA0QER6&th=1
Are those Chinese Xeon server processors any good?