I always get mad when people say they have these micro lag spikes and then other people reply with stuff like "install OptiFine", "reinstall Java", "allocate more RAM", blah blah blah. I even saw someone on the Minecraft subreddit get downvoted for saying that the issue is garbage collection, so HE GOT DOWNVOTED FOR TELLING THE TRUTH. The point is, even if you have good PC specs, there is something about the way Minecraft is coded that gives you a micro lag spike whenever you generate new terrain. I have an AMD Ryzen 7 3700X and an RTX 2070, with 4 GB of RAM allocated to Minecraft AND OptiFine installed, and whenever I set my render distance higher than 16 (which I should be able to do with no issues considering my specs), these micro-stutters become more noticeable. The thing is, vanilla Minecraft doesn't even show your real-time FPS, only your average FPS, which is pretty useless for spotting these lag spikes. If you install OptiFine, however, it shows you the average AND minimum FPS, and that's where you can start to see what's going on. Basically, if you stand still in a world, you're fine. But as soon as you start to load more chunks, you'll see the minimum FPS drop from whatever it was (in my case well over 200 FPS at 20+ chunks) to around 20-30 FPS for a split second, and that's where that mini lag spike comes from. I tried every JVM argument you could try and this still happens. Obviously this doesn't make the game unplayable, but it is quite annoying that (short of overclocking, maybe) no matter how good your specs are, you will still experience some degree of stuttering. Go ahead and try it for yourself: install OptiFine, set your render distance to 20, and start flying around in creative (you'll get the clearest results if you have good specs). Heck, you don't even need to be flying; this stuttering occurs even while you're sprinting.
One way to reduce this issue, which I noticed other YouTubers use and which is pretty smart, is to host a localhost server and join it. This way you split the load across more CPU cores (since, as we all know, Minecraft unfortunately only uses 1 CPU core most of the time). I get very playable FPS (200+) and significantly fewer stutters this way, even with a high render distance.
One more thing: yes, I am 100% sure that it's the garbage collection causing these stutters. I opened the lagometer in OptiFine and the mini lag spikes coincide with the orange spikes in the graph that represent garbage collection.
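For reference, that GC correlation can also be checked without OptiFine's lagometer: the JVM itself reports every collection and how long it paused through the standard GarbageCollectorMXBean notification API (or simply via the -verbose:gc / -Xlog:gc flags). Below is only a minimal standalone sketch - the class name and the println are mine, and hooking it into Minecraft would require a mod or a Java agent - so you can line the printed pause times up with the stutters:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

public class GcPauseLogger
{
    public static void install()
    {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
        {
            // On HotSpot the GC beans also implement NotificationEmitter
            NotificationEmitter emitter = (NotificationEmitter) gc;

            emitter.addNotificationListener((Notification n, Object handback) ->
            {
                if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION.equals(n.getType()))
                {
                    GarbageCollectionNotificationInfo info =
                        GarbageCollectionNotificationInfo.from((CompositeData) n.getUserData());

                    // Duration is the pause in milliseconds; compare these with when the stutters happen
                    System.out.println(info.getGcName() + " (" + info.getGcAction() + "): "
                        + info.getGcInfo().getDuration() + " ms");
                }
            }, null, null);
        }
    }
}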
Minecraft unfortunately only uses 1 CPU core most of the time
Not true - ever since 1.3.1 the game has used a separate thread for the internal server in singleplayer, enabling it to run on two cores, and many more threads were added in later versions: 1.8 multithreaded rendering, 1.13 multithreaded world generation, 1.14 multithreaded lighting. Single-core CPUs are now practically unusable (and mind that even before 1.3.1 the garbage collector could run on its own thread if a concurrent GC was used, which has been the default for a long time). My own experience is that 1.3.1-1.6.4 performs better than 1.2.5 (I ran 1.2.5 once on my old computer, with a dual-core CPU, and it had significantly more stuttering when generating terrain).
Not that this helps much since the game allocates crazy amounts of garbage, more than the JVM was designed to handle, due to changes made in 1.8 and later versions, in addition to excessively complex spaghetti code:
Minecraft 1.8 has so many performance problems that I just don't know where to start with.
Maybe the biggest and the ugliest problem is the memory allocation. Currently the game allocates (and throws away immediately) 50 MB/sec when standing still and up to 200 MB/sec when moving. That is just crazy.
The old Notch code was straightforward and relatively easy to follow. The new rendering system is an over-engineered monster full of factories, builders, bakeries, baked items, managers, dispatchers, states, enums and layers.
Object allocation is rampant, small objects are allocated like there is no tomorrow. No wonder that the garbage collector has to work so hard.
The multithreaded chunk loading is crude and it will need a lot of optimizations in order to behave properly. Currently it works best with multi-core systems, quad-core is optimal, dual-core suffers a bit and single-core CPUs are practically doomed with vanilla. Lag spikes are present with all types of CPU.
(For comparison, I can run modded 1.6.4 with only 256 MB allocated - yes, 256 MB, though I normally allocate 512 - and it only allocates perhaps 10 MB/sec when flying around in Creative and 1-2 MB/sec when standing still.)
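Those allocation rates are not hard to measure for anyone who wants to compare versions themselves: HotSpot exposes a per-thread allocation counter through com.sun.management.ThreadMXBean. The following is only a rough standalone sketch (the class name and the sleep standing in for "a second of gameplay" are mine), and it measures a single thread rather than the whole JVM, so it understates what the garbage collector actually sees:

import java.lang.management.ManagementFactory;

public class AllocationRateProbe
{
    public static void main(String[] args) throws InterruptedException
    {
        // HotSpot-specific extension of the standard ThreadMXBean with allocation counters
        com.sun.management.ThreadMXBean threads =
            (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();

        long id = Thread.currentThread().getId();
        long before = threads.getThreadAllocatedBytes(id);
        long start = System.nanoTime();

        Thread.sleep(1000); // replace with the work to measure (e.g. building chunk meshes)

        long allocated = threads.getThreadAllocatedBytes(id) - before;
        double seconds = (System.nanoTime() - start) / 1.0e9;
        System.out.printf("%.1f MB/sec allocated on this thread%n", allocated / 1048576.0 / seconds);
    }
}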
Rendering is also extremely expensive due to many other changes (e.g. it is much more expensive to simply read a block position in newer versions); despite the claimed multithreading (or because it is (still!) so bad), it still causes lag spikes when rendering chunks, especially complex chunks:
MC-164123 Poor FPS performance with new rendering engine
MC-123584 Updating blocks creates lag spikes proportional to geometry in chunk section
I noticed in 1.12.x that getBlockState (in World, Chunk, and ChunkCache) accounted for a substantial amount of CPU overhead. I developed a block state cache (a write-through direct-mapped cache using a specially tuned hash to map from coordinates to cache entries), which made a HUGE difference. That plus a BlockPos neighbor cache literally doubled Minecraft performance for the test cases we tried. The laggiest one was TT's jungle tree farm. In vanilla 1.12.2, it starts out at about 18 TPS, although after some JIT-ing it rises to about 35 TPS (based on the reciprocal of the time spent not sleeping between ticks). These caches increase it to about 70 TPS. In 1.13, I knew that getBlockState was going to get more expensive, at least because of the extra liquid layer, but I had my concerns about block numbers being fully abstracted (the flattening and all that). This was a performance problem for 1.12.x. It's going to be really serious in 1.13.
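The quote above doesn't include the cache itself, but the idea it describes is simple enough to sketch: a fixed-size, direct-mapped table indexed by a hash of the packed block coordinates, written through on every update so it can never go stale. Everything below (class name, table size, packing and hash constants) is my own illustration, not the actual mod's code:

import java.util.Arrays;

public final class CoordCache<V>
{
    public interface Loader<V>
    {
        V load(int x, int y, int z);
    }

    private static final int BITS = 12;            // 4096 slots; tune for hit rate vs. memory
    private static final int SIZE = 1 << BITS;
    private final long[] keys = new long[SIZE];
    private final Object[] values = new Object[SIZE];
    private final Loader<V> loader;

    public CoordCache(Loader<V> loader)
    {
        this.loader = loader;
        Arrays.fill(keys, Long.MIN_VALUE);          // sentinel so slot 0 doesn't falsely match (0,0,0)
    }

    private static long pack(int x, int y, int z)
    {
        // 26 bits for x, 26 for z, 12 for y - enough for vanilla world coordinates
        return ((long) x & 0x3FFFFFFL) | (((long) z & 0x3FFFFFFL) << 26) | (((long) y & 0xFFFL) << 52);
    }

    private static int slot(long key)
    {
        long h = key * 0x9E3779B97F4A7C15L;         // cheap multiplicative mix so nearby coordinates spread out
        return (int) (h >>> (64 - BITS));
    }

    @SuppressWarnings("unchecked")
    public V get(int x, int y, int z)
    {
        long key = pack(x, y, z);
        int i = slot(key);

        if (keys[i] == key) return (V) values[i];   // hit: one array compare, no allocation

        V v = loader.load(x, y, z);                 // miss: fall back to the real (expensive) lookup
        keys[i] = key;
        values[i] = v;
        return v;
    }

    // Write-through: callers update the real world data first, then refresh the cache entry
    public void put(int x, int y, int z, V v)
    {
        long key = pack(x, y, z);
        int i = slot(key);
        keys[i] = key;
        values[i] = v;
    }
}

Usage would be wrapping the hot getBlockState path, e.g. constructing it with whatever the uncached lookup happens to be (a hypothetical world::getBlockStateRaw) and routing reads through get(); the BlockPos neighbor cache mentioned in the quote would be a separate structure.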
This is worsened by the fact that Mojang apparently decided that 30 FPS was a good minimum framerate and that the game can perform as many chunk updates as possible as long as it doesn't drop below this; it should only update as many chunks as the idle time between frames allows and treat the FPS limit as the minimum allowed framerate (as long as the game can reach it while updating a minimum of 1 chunk per frame):
https://www.reddit.com/user/sliced_lime/comments/e00ohm/a_word_or_two_about_performance_in_minecraft/
Also, OpenGL itself is part of the problem as it has issues with multithreading:
It's already mostly parallel. The only part that isn't is the part that directly talks to opengl and that's also the part that is no longer artificially capped on this pre-release. And it can't be reliably made parallel because opengl doesn't officially support multithreading. While it works on some GPU drivers, it breaks on others (like 1.7.10 optifine multi core chunk loading, it actually breaks on some hardware configurations)
In particular, my own experience is that rendering is mostly bottlenecked by OpenGL itself, at least with my own rendering code based on 1.6.4's - rendering a chunk section full of leaves on Fancy takes about 25 milliseconds, enough to drop FPS to 40 by itself - and over 90% of that time is spent in a call to OpenGL, as opposed to the Java-based rendering code, so multithreading the Java code, as 1.8+ presumably does, would barely make an improvement (e.g. 2 threads would reduce the time by 0.7 ms, an overall improvement of only 2.8%):
This is worsened by the fact that Mojang apparently decided that 30 FPS was a good minimum framerate and that the game can perform as many chunk updates as possible as long as it doesn't drop below this
It would be nice if 30 FPS was actually the minimum, but unfortunately it often stutters down even lower, to around 20 FPS sometimes. I had a friend test this (just load up a world and fly around in creative with a render distance of 20) and he got the exact same type of stuttering (maybe he had 5-10 more FPS than me during the stutters), and he has an RTX 2080 Super. It is just mind-boggling that you can actually experience lag in a block game with an RTX 2080 Super. I really hope Mojang solves this issue. Have a good day and thanks for the detailed response!
EDIT: he also has an i9-9900K CPU.
There's really a lot to address in your post. I'm not very well versed in the details of how Java or the Minecraft code works, so I'll refrain from saying too much, but from observation over the years I do have a basic understanding of how Minecraft performs overall, paired with the same on the hardware side of things.
For vanilla Minecraft, having an excessively high-end GPU doesn't get rid of performance peculiarities that don't come from a lack of GPU power to begin with. Minecraft isn't like most games, where the GPU matters most; it's CPU-heavy (you can get away with much less GPU than you may think, at least for the most part - the spikes come from a lack of CPU while loading chunks, while the GPU otherwise holds a constant 60 FPS with v-sync/triple buffering/some visual add-ons). And while the game is multi-threaded in some regards, and has been making strides to become more so over the years, one area where CPUs have slowed in progress over roughly the last decade is IPC/per-core performance (AMD has made huge strides here recently, but they've been catching up from behind, so I'm referring to the growth of the fastest parts available at any given time).
The baseline CPUs offered today (Core i3s/Ryzen 3s) are mostly quad cores with SMT (4 cores and 8 threads) at a minimum, a few exceptions aside, and the mainstream and higher-end parts offer vastly more, but this is (as far as I can tell) more than Minecraft can use to boost performance for the most part. It seems to suffer on dual cores with no SMT (that is why the image above has those spikes worse during chunk loading, with both cores at 100%), but if it has 4 threads (either 2 cores with SMT or 4 cores without), it seems happy enough. Throwing vastly more cores at it doesn't do much in my observation.
Meanwhile, the "normal" render distance has gone from 8 (with a supposed cap of 16, but being bugged to 10 between 1.3 and 1.6.4) to defaulting to 12, and allowing it to be set up to 32 (or even 64 with the popular OptiFine), and this is a large exponential demand incurred on the CPU (every doubling of render distance is a quadrupling of loaded chunks, so the demands aren't linear but are closer to exponential). Once you start talking about ~24+ chunks (which is about what I personally consider the start of "very high" render distances), you need a good/highly clocked CPU for best results, and stutter, which happens regardless, will simply be increased. If you go back to some older versions, which, in many ways, DID perform better, you'll see they also perform worse in others (chunk loading speed comes to mind).
In other words, CPU growth slowed in many ways but Minecraft norms and expectations grew. Between that, some things DID get worse but some seemingly (my observation) got better.
On top of that, getting back to the GPU side, being a "block game" does nothing to reduce the performance demands of the game, and many PC enthusiasts/gamers seem to fall into the trap of equating simple visuals with "it shouldn't need much to run, and a high-end video card should mean zero performance issues". Most other games are paper-thin geometry with assets populated in; Minecraft is a world of blocks you can interact with that all get loaded and updated in real time (occlusion culling is used, obviously, but still), and it effectively runs in a virtual machine, which, as I said, is more on the CPU than the GPU anyway.
All that being said, from what I'm seeing in some links (mostly from what TheMasterCaver links to), they've made some changes to allow chunk loading to be sped up at the cost of frame rate or something? I can't speak much to that (I thought OptiFine allows something similar, and on my new CPU I actually set this to 5 and still don't notice the stutters at all, even though they show up in the frame time graph), but it seems they've allowed the game to use enough CPU time that it drops frame rates as low as ~30 to speed up chunk loading (so I'm not sure if the OptiFine setting is still effective, if they work in tandem, or if they do separate things entirely). This certainly might be adding to the stutters, but I do know from years of playing that Minecraft has always dropped frame rates with either higher render distances or chunk loading, let alone both. It WOULD be nice if the frame rate slider served as the threshold for this, though.
So is there no CPU that can completely remove those stutters, or at least keep them from going below 60 FPS, without overclocking? I know some popular YouTubers use the i9-9900K with like 32 render distance and they don't get a lot of stutters, if any, but I assume that's because they overclocked it, since the i9 has great overclocking potential (up to 5 GHz).
Actually, it's more about how well your system can manage RAM - high RAM speed coupled with a CPU that has enough cores to dedicate one to just memory management can make a world of difference. It won't increase overall FPS by too much, but it will help eliminate the stuttering. The problem is most people don't even take RAM speed into consideration, and when the game is allocating something crazy like 128 MB+ per second (that's 1 GB every 8 seconds!), fast RAM is a must.
So is there no CPU that can completely remove those stutters, or at least keep them from going below 60 FPS, without overclocking? I know some popular YouTubers use the i9-9900K with like 32 render distance and they don't get a lot of stutters, if any, but I assume that's because they overclocked it, since the i9 has great overclocking potential (up to 5 GHz).
I have no idea. My last CPU was a Core i5 2500K (4 GHz). It could absolutely run a render distance of 32 and a bit beyond - and it was from 2011 and not even overclocked that far - but it wasn't staying above 60 FPS the entire time doing it. My new CPU does up to a render distance of 48 rather admirably (60 FPS once all is settled, though it can't achieve that at 64), but again, it won't be there all the time; chunk loading incurs drops. Rather than repeat what I said elsewhere, I'll link it here.
https://www.minecraftforum.net/forums/support/java-edition-support/3039340-low-fps-on-decent-specs#c13
Are you sure these YouTube videos are constantly running above 60 FPS? Unless you see the frame time graph, you can't be sure. It can appear smooth but still have the drops. My own game is like that; I normally run at a render distance of 16 (shaders are why) and it's entirely smooth - smoother than I've seen Minecraft run since 1.7 released. The stutters show in the graph, but I never, ever feel them. They definitely wouldn't show in a video. Minecraft just incurs that penalty when loading chunks (versus not loading them) and you can't outright remove it; all you can do is lower it below the threshold where it stops being an issue. A better fix, if what TheMasterCaver says is correct, would be for Mojang to change the supposed recent changes regarding trading off frame time for chunk loading, but that still wouldn't remove the fact that it simply takes CPU cycles to do it and would incur a penalty; again, it'd just lower the impact.
You also need to factor in differences in Minecraft version, whether OptiFine is present, its version, the settings used, etc.
As for fast RAM, I've never had very fast RAM. My last RAM was DDR3 1,600 MHz with like... 9 timings I think (?) and my current stuff is DDR4 3,600 MHz with 16-19-19 timings; not very great as far as I know but not awfully slow either.
It is possible to eliminate the impact of rendering chunks if it can be ensured that they will never take more time to render than the idle time between frames; my own system takes the frame time, subtracts the client tick time and the time taken to render the current frame from it, then allows at least one chunk update, and more if time allows, with the remaining time decreased by the time needed to render each chunk. It is still possible for a chunk to take so long that it extends the overall frame time, resulting in a frame rate drop, but given that I get around 1000 FPS at an 8 chunk render distance and cap it to 75 FPS, this leaves a huge margin (1000 FPS is a frame time of 1 ms; 75 FPS is 13.33 ms, leaving 12.3 ms to update chunks). I also have a slider that sets how much of the idle time between frames and client ticks chunk updates can take (the latter can cause stuttering or frame rate variance at 20 FPS); I set it to 25% by default, meaning that chunk updates can take no more than about 3 ms, which gives 3-4 chunk updates per frame under normal conditions. For an average loaded height of 6 sections there are 1734 chunk sections to render; at 3 chunk updates per frame at 75 FPS it will take 7.7 seconds to render everything. In actuality, chunk rendering appears much faster since near chunks and chunks within the view frustum are prioritized, and simple sections, like ones with solid ground, are much faster to render.
// Used to control rate of chunk updates; uses actual FPS when vsync is enabled to accommodate higher framerates
public int getChunkUpdateLimit(int fps)
{
    return (this.enableVsync ? Math.max(this.vSyncLimit, fps) : this.limitFramerate);
}

int limit = this.mc.gameSettings.getChunkUpdateLimit(this.mc.debugFPS);

if (limit == 0)
{
    this.renderWorld(par1, 0L);
}
else
{
    // Controls time allotted to chunk updates, from 0-100% of the frame time set by FPSLimit.
    // Also subtracts time taken by Minecraft.runTick() to help smooth out variations.
    this.renderWorld(par1, (long)(1000000000.0D / (double)var20 * (double)this.mc.gameSettings.chunkUpdateTime) - currentTickTime);
}

// Subtracts time taken to render current frame from time allotted to chunk updates
profiler.endStartSection("chunkUpdates");
long end = System.nanoTime();
chunkUpdateTime -= (end - start);

// Note that at least one chunk update may be performed regardless of time available
while (!this.updateRenderers(x, y, z) && chunkUpdateTime > 0L)
{
    long time = (System.nanoTime() - end);

    if (time < 0L || time > chunkUpdateTime) break;
}
Of course, chunk updates can still significantly reduce the frame rate when it is unlimited (even 1 ms will reduce 1000 FPS to 500 FPS; at 16 chunks I get around 400 FPS, which would drop to 285); the only way around this would be to perform partial updates (which is in fact what OptiFine's "smooth" chunk loading option did back before 1.8, and it eliminated stuttering due to chunk loading better than even its multi-core option; unfortunately, the chunk loading options were removed in 1.8). In theory, VBOs, which are used exclusively in the latest versions (I believe the game now relies entirely on "modern" OpenGL), also allow partial modifications (i.e. single block updates) to the vertex data on the GPU without having to re-render the entire chunk - most beneficial for things like redstone, mining, and building, as opposed to loading new chunks - which is their biggest advantage over the (deprecated) display lists used by 1.6.4. Setting the chunk update time to 100% will also cause drops, since if there is any time remaining, even 1 nanosecond, it will perform another update, which will exceed the maximum time allowed (likewise, setting it to 0 will not disable chunk updates, only reduce them to 1 per frame).
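For what a partial VBO update looks like at the OpenGL level, here is only a sketch of the mechanism using LWJGL, not anything Minecraft or OptiFine actually ships: the method name, the vboId/byteOffset parameters, and the assumption that you already know where the affected block's vertices sit in the buffer are all mine (building and maintaining that index is the hard part):

import java.nio.FloatBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL15;

public class PartialVboUpdate
{
    // Overwrites one block's vertex range inside an existing chunk-section VBO
    // instead of rebuilding and re-uploading the entire section.
    public static void updateBlockVertices(int vboId, long byteOffset, float[] newVertexData)
    {
        FloatBuffer buf = BufferUtils.createFloatBuffer(newVertexData.length);
        buf.put(newVertexData).flip();

        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
        // glBufferSubData replaces a sub-range of the buffer's data store without reallocating it
        GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, byteOffset, buf);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    }
}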
Also, if it is known that the OpenGL implementation properly handles multithreading (this can be detected via GPU driver/vendor version, or given as a user-changeable setting, sort of like how Optifine's multithreaded rendering prior to 1.8 would cause visual artifacts on some systems but it was optional) then they can multithread the upload of data to the GPU (which as mentioned before can be very expensive, at least in the case of display lists), as this is currently the biggest bottleneck, no matter how many threads might be used to generate the vertex data in Java code.
In addition, it is important to cull as many hidden faces as possible; for example, vanilla 1.6.4 (I don't know about newer versions) doesn't cull the faces of blocks below blocks like snow layers and carpet, or the sides adjacent to other blocks of the same type (by default only opaque cube blocks will cull adjacent faces), resulting in an average of 5 hidden faces being rendered per block - by culling all of these faces I reduced the number of rendered faces per block from 6 to 1. The slight increase in time spent in Java code (which is still faster than vanilla due to other optimizations) from having to check neighboring blocks is far offset by the reduction in time spent in OpenGL (using the examples given before, compared to typical terrain a chunk section of leaves on Fancy took about 2.5 times longer in Java code but 100 times longer in OpenGL, mostly because there was far more vertex data; 4096 blocks rendering all faces contain 98304 vertices). Biome blend should also be cached; I render chunk sections in columns so it only needs to be computed once per column (256 vs 4096 times for a full section), with 15x15 blend roughly doubling the time in Java code (which isn't much compared to the OpenGL time, which is independent of biome blend and even smooth lighting, since the game still sends lighting data; these settings only change how much time the Java-side code takes, and likewise they have no effect on the steady-state frame rate).
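The culling rule itself is just a neighbor check per face; roughly the following, with toy types and names rather than Minecraft's actual classes - vanilla's rule is only the opaque-cube test, and the same-type test is the extra one being described above:

public class FaceCulling
{
    // Stand-in for whatever block metadata the renderer has access to
    public static class BlockInfo
    {
        final String type;
        final boolean opaqueFullCube;

        BlockInfo(String type, boolean opaqueFullCube)
        {
            this.type = type;
            this.opaqueFullCube = opaqueFullCube;
        }
    }

    // Returns true if the face between 'self' and the block touching that face should be rendered.
    // 'neighbor' is null when the face is exposed to air.
    public static boolean shouldRenderFace(BlockInfo self, BlockInfo neighbor)
    {
        if (neighbor == null) return true;                 // exposed face, always drawn
        if (neighbor.opaqueFullCube) return false;         // vanilla rule: hidden behind an opaque cube
        if (neighbor.type.equals(self.type)) return false; // extra rule: interior face between identical blocks
        return true;
    }

    public static void main(String[] args)
    {
        BlockInfo leaves = new BlockInfo("leaves", false);
        BlockInfo stone = new BlockInfo("stone", true);

        System.out.println(shouldRenderFace(leaves, leaves)); // false - interior leaf face culled
        System.out.println(shouldRenderFace(leaves, stone));  // false - hidden behind opaque stone
        System.out.println(shouldRenderFace(leaves, null));   // true  - visible face
    }
}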
In particular, the latest versions are not culling hidden faces from leaves on Fast, completely negating the benefit of Fast leaves; a similar issue occurred in 1.8 and likely explains the awful performance I saw in that version on my old computer, especially in jungles (the impact isn't as extreme as mentioned above since there are far fewer leaves per chunk section, even with TMCW's "mega trees", but it is still significant):
MC-179383 Leaves not culled with graphics set to fast
This is also why I added a "fast" Fancy leaves setting which hides interior faces more than 1 block deep; it can significantly improve performance in "mega" forest biomes while not significantly affecting visual quality. Adding giant trees is quite taxing on the game and requires many optimizations, hence why vanilla doesn't have any really large trees (they even temporarily disabled big oak trees in 1.7); however, I didn't have any issues with adding forests of much larger trees, even bigger than the ones added by this mod (see the "technical notes"), despite only having a 2.2 GHz dual-core Athlon 64 and a GeForce 7600GS, dating back to 2005-6, as long as I didn't use Fancy leaves (a lack of VRAM was a bigger issue than chunk update lag; I'd drop from 80-120 FPS to single digits, same for anything higher than 8 chunks for more than a short time).
Actually, it's more about how well your system can manage RAM - high RAM speed coupled with a CPU that has enough cores to dedicate one to just memory management can make a world of difference. It won't increase overall FPS by too much, but it will help eliminate the stuttering. The problem is most people don't even take RAM speed into consideration, and when the game is allocating something crazy like 128 MB+ per second (that's 1 GB every 8 seconds!), fast RAM is a must.
So basically, anyone on a system with DDR2 memory or older is going to be out of luck.
https://www.transcend-info.com/Support/FAQ-292
I don't think anyone uses memory that old though. There are still some people using DDR3 but DDR2 is extremely old and basically extinct.
Anyone who does is likely strictly using it for retro PC gaming, but even then, what is the point? The vast majority of old PC games I know of work fine on Windows 10. GOG has practically built their business model on ensuring maximum compatibility for old games on the latest PCs. Newer PCs can do it more efficiently and with less electrical waste as heat. In fact, backwards compatibility is better on PCs than it is on consoles.