This happens on all minecraft versions. The GPU usage just isn't maxed out, like my FPS is locked or something. Using shaders increases the usage by a bit, but still doesn't max out GPU. For mods, I only have Optifine.
If you need my system specs, just ask. Thanks in advance for replying.
A PC is a combination of many parts. Not all of these parts will be used to 100% all of the time, because sometimes some other part will be the slowest link in the chain at that moment. A PC will use what it needs. Trying to throw more utilization at one part when it is instead another part that is the limiting factor isn't going to get you more performance.
Minecraft is typically CPU limited. As in "it's always the CPU" with this game. It's not a GPU heavy game at all. Even something like a GT 1030 could run this game at 4K with high frame rates no problem. The GPU is typically only a big factor at very high render distances and/or with shaders.
Your screenshot only shows so much information, but based on what is shown (and what I know about Minecraft and hardware), you're most likely CPU limited there. Yes, your CPU utilization may only be 19% but modern CPUs are multi-core so overall utilization is pretty irrelevant. You can be CPU limited as soon as one core is fully used.
Edit: From what I recall, fabulous graphics allows for translucency sorting and fixes some other glass/shading issues, but it comes at a pretty big performance cost. I'd recommend considering fancy if you want more performance too. While it won't matter when you're CPU limited, with how high your GPU utilization is there in just vanilla, there may be moments where you're GPU limited as a result of using that setting.
I'd also recommend lowering the simulation distance (between 8 and 12). That setting can really hurt the performance and add to the CPU demand. Your render distance is 16, so it's only effective up to there anyway, but still. If you're that CPU limited, it might help some.
I just read that minecraft (especially Java edition) doesn't really make use of multiple cores, and that's why the overall CPU usage is very low but GPU usage doesn't get maxed out, because of a single-core bottleneck. Is there any way to allocate more cores? I will try with a lower simulation distance and see if it improves. I also have 8 GB of RAM allocated to the game, could that be the issue too? (I also read about something called "garbage collection" which causes stutters if you allocate too much RAM)
I just read that minecraft (especially Java edition) doesn't really make use of multiple cores, and that's why the overall CPU usage is very low but GPU usage doesn't get maxed out, because of a single-core bottleneck.
This is correct(ish), but it's not exclusively a Minecraft thing. Most "real time" applications like games are ultimately going to be limited by one main game thread, even if they split off a dozen other threads. All of those spun-off threads need to be synced, and if the main gameplay loop is limited, then so is the game. Games can spin off things like sound, AI, and other stuff, but there comes a point where you can't do that without greatly increasing complexity, or sometimes you may even incur a performance loss (such as from the aforementioned syncing). In other words, more cores aren't a linear, magic increase in performance for all types of software. Many modern games do tend to use more cores/threads, but they typically have heavier engines that are doing more (namely, running more AI, running physics, compiling shaders, etc.). In my observation, Minecraft doesn't have a whole lot going on where it uses many cores/threads (relative to modern CPU core/thread counts) to a high extent, except maybe when it's burst generating/loading a lot of terrain, but it does technically multi-thread to a smaller degree. Ultimately though, yes, it's very much typically limited by a single core first and foremost.
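To illustrate the idea (this is just a toy Java sketch, not Minecraft's actual code or anything from this thread): even when some work is offloaded to worker threads, the frame can't finish until the serial main-thread work is done and the offloaded work has been synced back, so the main thread sets the floor on frame time no matter how many idle cores you have.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MainThreadBound {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int frame = 0; frame < 5; frame++) {
            long start = System.nanoTime();
            // Offload some work (say, chunk meshing) to another core...
            Future<?> offloaded = workers.submit(() -> busyWork(2));
            // ...but the game tick itself still runs serially on the main thread...
            busyWork(8);
            // ...and the frame can't be presented until the offloaded work is synced back.
            offloaded.get();
            System.out.println("frame took ~" + (System.nanoTime() - start) / 1_000_000
                    + " ms (the 8 ms of main-thread work is the floor)");
        }
        workers.shutdown();
    }

    // Burn roughly the given number of milliseconds of CPU time.
    static void busyWork(long millis) {
        long end = System.nanoTime() + millis * 1_000_000;
        while (System.nanoTime() < end) { /* spin */ }
    }
}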
No. Software has to be coded to be multi-threaded. It can't be forced.
It's like if you're playing a game that is limited by CPU or GPU processing and you try and add more RAM to the PC. It won't help. You can't force utilization in areas where you have spare resources to overcome a limitation in another.
As a rule with PCs, "the system will use it if it needs it". So if it's not using it, but performance is lower than desired, then it's low for some other reason, and trying to force more use of what you have spare of won't do anything.
I will try with a lower simulation distance and see if it improves. I also have 8 GB of RAM allocated to the game, could that be the issue too? (I also read about something called "garbage collection" which causes stutters if you allocate too much RAM)
The RAM allocation is unlikely to be the issue, and modern Java versions/garbage collectors are a lot better about the "allocating more RAM just spreads the garbage collections out but makes them more severe when they do occur" thing than they used to be. Even back in the days of Java 8, I never observed this to always be the case. The garbage collector would very regularly kick in before the RAM got near full, so in my experience, this was always overstated. At the same time, it could happen, and allocating more than you need is not beneficial, so... there's no reason to do it.
What causes the stutter is that the game has to stop while the garbage collector is active, and this will take longer if it has more RAM to clear (it also takes longer the slower the CPU is).
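If you want to see whether those pauses actually line up with your stutters, the JVM can log its collections. These are standard JVM flags rather than anything Minecraft-specific (which one works depends on the Java version the launcher is using, and the notes in parentheses aren't part of the flag); add one of them to the JVM arguments:

-verbose:gc (works on any Java version)
-Xlog:gc (Java 9 and newer)
-XX:+PrintGCDetails (Java 8 and older)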
My personal rule of thumb is that "vanilla-ish" Minecraft (no content mods, and a typical render distance of 16 give or take) shouldn't need over 3 GB to 4 GB allocated. 6 GB to 8 GB is more for situations where you're playing at high render distances (think 32), using Distant Horizons, using a lot of content mods, and/or "I have enough system RAM so the extra few GB being in use as insurance isn't hurting". Allocating more RAM to the JVM will also increase overall memory use, and the game as a whole will use more than what you allocate to the JVM as heap space, so be sure to monitor that and make sure it's not pushing you close to your memory limits (don't allocate 8 GB to the JVM if you only have 8 GB of RAM, basically).
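For reference, that allocation is the -Xmx value in the launcher's JVM arguments; a hedged example for the "vanilla-ish" case above (the number is just the rule of thumb, not a universal value) would be something like:

-Xmx4G

The default launcher profile usually already has an -Xmx entry (typically -Xmx2G) in its JVM arguments, so it's normally just a matter of editing that number rather than adding a new flag.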
Thanks. But I still don't understand something: if there is already a CPU bottleneck, why does using shaders (or going to graphically intensive areas like jungles) still increase the GPU load? Shouldn't it reduce the GPU load because the GPU will demand more data from the CPU and increase the bottleneck? For your information, I play at 1366x768 resolution, and here are my specs:
CPU: Intel core i5-11600KF (Not overclocked)
GPU: GT 730 4GB (Fermi version) (Overclocked to 851 MHz)
RAM: 2x8GB DDR4-3200 (From an unknown brand)
So as you can see here, surely the GT 730 won't be bottlenecked by the i5, right? I have seen the same CPU maxing out a GTX 1650 at 720p with 32 chunks render distance!
Thanks. But I still don't understand something: if there is already a CPU bottleneck, why does using shaders (or going to graphically intensive areas like jungles) still increase the GPU load? Shouldn't it reduce the GPU load because the GPU will demand more data from the CPU and increase the bottleneck?
Using shaders increases the GPU load because they are still more demanding on the GPU than vanilla Minecraft is. Whether you are already limited by the CPU or not is irrelevant in that case. You are, after all, still asking the GPU to do more by having it render shaders, and you might actually drop performance further if the GPU becomes the limiting factor.
For your information, I play at 1366x768 resolution, and here are my specs:
CPU: Intel core i5-11600KF (Not overclocked)
GPU: GT 730 4GB (Fermi version) (Overclocked to 851 MHz)
RAM: 2x8GB DDR4-3200 (From an unknown brand)
So as you can see here, surely the GT 730 won't be bottlenecked by the i5, right? I have seen the same CPU maxing out a GTX 1650 at 720p with 32 chunks render distance!
Okay, I'm a bit surprised. The frame rate shown in the first picture gave me the impression you were probably dealing with something much worse. That CPU should have absolutely no problem handling Minecraft better than your first picture suggests.
Show a full screenshot with the F3 menu showing, ideally with the frame time graph (F3+3). Try and take it while stationary and after chunks/terrain has loaded to rule out taking a snapshot at a particularly bad outlier moment.
That being said, 32 chunks is pretty heavy. Most CPUs will exhibit some stutter when generating/loading chunks, especially at high render distance, but your CPU should be good for render distances reasonably higher than 16, never mind only getting 37 FPS at 16.
You have an "F" CPU which means no IGP so it can't be running on any integrated graphics, so we can skip that being a possible cause.
Your GPU is definitely the part that is out of place, but for Minecraft that shouldn't matter since it specifically wants a fast CPU and doesn't care as much about the GPU (outside shaders, which I honestly wouldn't even attempt on that). That being said, vanilla can be pretty bad at rendering specifically, so I'd recommend Fabric and Sodium and see if that helps.
Using the "fabulous" preset probably isn't helping. I'd echo my recommendation to switch to fancy.
The only other possible guess I'd have is VRAM being exhausted (that absolutely will drain performance, and vanilla absolutely will use a lot of it as the render distance increases). In your first screenshot at least it doesn't appear to be quite there yet, but if that's a 1 GB GPU, it's close enough that it might already be swapping. If you see shared VRAM in Task Manager going meaningfully above nothing, and if the "copy" GPU engine (right click in Task Manager and choose multiple engines for the GPU) is showing a lot of spiking activity when this happens, then this may be what's going on.
My personal rule of thumb is that "vanilla-ish" Minecraft (no content mods, and a typical render distance of 16 give or take) shouldn't need over 3 GB to 4 GB allocated.
I'd still consider this to be a major issue considering older versions only need a tenth as much; of course, I'm using my own extremely large mod as an example (just one mod, but the number of mods is meaningless; what matters is how much content they add, and 500 blocks/items, 100 biomes and all their features, etc. is nothing to scoff at). At the same time, many additions don't need that much in resources / code / memory. A new stair/slab based on an existing block? No new textures (which are each only 1 KB of raw data anyway) and no new code, just a new instance of a generic "stair/slab" block whose fields set its base block, name, ID, etc., then a recipe, all amounting to a few hundred bytes of overhead per additional block (of course, 1.8's custom block models will add more overhead, but it still shouldn't be that much as it is just a list of vertex positions and texture coordinates, though I've seen claims to the contrary):
An analysis of the resource usage of a couple blocks; on the left is my custom flower pot block, which holds references to about 23 KB of data due to the custom renderer which renders them as a tile entity; on the right is a more typical block, using only 217 bytes (not including 1 KB for a texture and the code contained in "BlockMetadataOre", which is shared across all ore blocks, as is the code in the "Block" base class):
A couple biomes, which use 1-2 KB each (and less now than when I took this, since I changed the entries in the mob spawn lists to share the same objects across every list, so only unique mob spawns add more overhead):
Similar to blocks, items vary a lot in size depending on their complexity and uniqueness, with most being closer to the lesser examples shown:
Texture memory shows up under the "TextureManager" class, and the number of "TextureAtlasSprite" instances indicates how many block/item textures there are (which is a lot less than it could be due to the way I reuse many textures, e.g. I recolor a grayscale bed overlay texture so beds only have a total of 10 textures instead of 96, and even more textures are saved by flipping them for stalagmites/stalactites). The memory used by chunk render data doesn't show up within the JVM at all; it is all held in native memory / VRAM:
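As a rough illustration of that recoloring idea (this is just a generic sketch using standard Java imaging classes, not TMCW's actual code): a single grayscale overlay can be tinted at load time into as many colored variants as needed, instead of shipping one texture per color.

import java.awt.image.BufferedImage;

public class TintedOverlay {
    // Multiply each grayscale pixel by an RGB color, keeping the alpha channel intact.
    static BufferedImage tint(BufferedImage gray, int r, int g, int b) {
        BufferedImage out = new BufferedImage(gray.getWidth(), gray.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < gray.getHeight(); y++) {
            for (int x = 0; x < gray.getWidth(); x++) {
                int argb = gray.getRGB(x, y);
                int a = (argb >>> 24) & 0xFF;
                int v = argb & 0xFF; // grayscale, so R == G == B
                out.setRGB(x, y, (a << 24) | (v * r / 255 << 16) | (v * g / 255 << 8) | (v * b / 255));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage gray = new BufferedImage(16, 16, BufferedImage.TYPE_INT_ARGB);
        for (int i = 0; i < 16; i++) gray.setRGB(i, i, 0xFF808080); // a diagonal of mid-gray pixels
        BufferedImage red = tint(gray, 255, 64, 64);
        System.out.println("tinted pixel: " + Integer.toHexString(red.getRGB(3, 3)));
    }
}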
1/10 as much memory means the garbage collector only needs to do 1/10 as much work; it only drops by 100 MB per collection (this is more meaningful than the overall amount, and is also mainly due to using -Xmn128M; removing it reduces this, and overall usage, and I'm not even sure if it is beneficial, it's just what the launcher used to have), and even then it takes over half a minute to reach the next collection at 2-3 MB/s (I've seen screenshots of modern versions approaching 1 GB/s of allocations, which would be nearly 10 collections per second; even the more typical 50-100 MB/s when standing still or not moving quickly is still more than an order of magnitude higher than the example shown).
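The arithmetic behind those figures is simply the amount freed per collection divided by the allocation rate; a quick sketch using the numbers from the post (real collectors are more complicated than this, so treat it as a ballpark):

public class GcIntervalEstimate {
    public static void main(String[] args) {
        double freedPerCollectionMB = 100;          // roughly what drops per collection above
        double[] allocRatesMBps = {2.5, 75, 1000};  // example above, typical modern idle, worst case seen
        for (double rate : allocRatesMBps) {
            System.out.printf("at %.1f MB/s: a collection roughly every %.1f seconds%n",
                    rate, freedPerCollectionMB / rate);
        }
    }
}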
Whether or not modern systems / Java garbage collectors can handle this better, it is an indisputable fact that modern versions need more resources and don't run as well as they could due to how they have been coded (all the optimization mods like Sodium are proof of this); even the developer of Optifine claimed that 90% of memory usage was completely unnecessary and due to "industry standard coding practices" (how does this even help them? Even the vanilla 1.8 jar is larger than TMCW despite adding a lot less new content relative to 1.6.4, with many, many more classes, so the code is clearly much more complex overall; I've looked through it and find it much harder to follow, and people had already described the game's code as "spaghetti code" before 1.8).
Using shaders increases the GPU load because they are still more demanding on the GPU than vanilla Minecraft is. Whether you are already limited by the CPU or not is irrelevant in that case. You are, after all, still asking the GPU to do more by having it render shaders, and you might actually drop performance further if the GPU becomes the limiting factor.
Okay, I'm a bit surprised. The frame rate shown in the first picture gave me the impression you were probably dealing with something much worse. That CPU should have absolutely no problem handling Minecraft better than your first picture suggests.
Show a full screenshot with the F3 menu showing, ideally with the frame time graph (F3+3). Try and take it while stationary and after chunks/terrain has loaded to rule out taking a snapshot at a particularly bad outlier moment.
That being said, 32 chunks is pretty heavy. Most CPUs will exhibit some stutter when generating/loading chunks, especially at high render distance, but your CPU should be good for render distances reasonably higher than 16, never mind only getting 37 FPS at 16.
You have an "F" CPU which means no IGP so it can't be running on any integrated graphics, so we can skip that being a possible cause.
Your GPU is definitely the part that is out of place, but for Minecraft that shouldn't matter since it specifically wants a fast CPU and doesn't care as much about the GPU (outside shaders, which I honestly wouldn't even attempt on that). That being said, vanilla can be pretty bad at rendering specifically, so I'd recommend Fabric and Sodium and see if that helps.
Using the "fabulous" preset probably isn't helping. I'd echo my recommendation to switch to fancy.
The only other possible guess I'd have is VRAM being exhausted (that absolutely will drain performance, and vanilla absolutely will use a lot of it as the render distance increases). In your first screenshot at least it doesn't appear to be quite there yet, but if that's a 1 GB GPU, it's close enough that it might already be swapping. If you see shared VRAM in Task Manager going meaningfully above nothing, and if the "copy" GPU engine (right click in Task Manager and choose multiple engines for the GPU) is showing a lot of spiking activity when this happens, then this may be what's going on.
You might have overlooked it, but in the specs I mentioned the GPU has 4GB VRAM, so VRAM isn't an issue. Here is the screenshot. Weird how the F3 menu shows 100% GPU usage but it's actually using 72%. The frame time graph also isn't showing up properly.
I'd still consider this to be a major issue considering older versions only need a tenth as much
I don't think it's as much of an issue as you're making it out to be.
You're saying "older versions" needed less, but then you immediately move towards using your mod for examples? So you're not talking about old versions then.
I recommended 3 GB to 4 GB (and sometimes you may need more!), but there's also plenty of people who get by on the default of 2 GB. Some people run into the limit because 2 GB can be cutting it close, and this is why I recommended raising it.
It's an apples to oranges comparison to say older versions needed a tenth of what modern versions do based on comparing "can use as little as" and "should almost always be enough for". I'm recommending 3 GB or 4 GB as a case of the latter, and I know for a fact that older versions can sometimes use more than you probably think. The last time I tried 1.6 with OptiFine and attempted a render distance of 24, with 2 GB allocated, the game seemed like it almost crashed on world creation due to lack of RAM (it froze for ~3 seconds and RAM use was right at 2 GB when the world did load), and it was hovering in the 1.2 GB to 2 GB range the entire time after. The 1.6 foundation is not as light as you've convinced yourself simply because you see lower amounts at lower render distances and with your even more optimized mod (or because you've seen a few particular screenshot examples where it's sub-100 MB at that moment).
I'm not going to call Minecraft optimized (I did the exact opposite in this thread), but I wish you'd stop using incomparable extremes to make this point. Your mod is not older versions. A recommended "should always be enough" value is not to be compared to what an older version gets away with in reduced demand (low render distance) or cherry picked moments. I've pushed high render distances in versions like 1.6, 1.7, and modern versions, so I have an idea of what the various versions approximately gravitate towards using, and while RAM use has definitely gone up since a version like 1.6 (a dozen years ago, by the way), it's not by as much as you state if you compare like for like. It's gone up, but so has how much RAM people have. There's little point in insisting the game should still be using double digit MB of RAM when PCs with sub-8 GB are in trouble regardless, simply because Windows and all other modern software push PCs towards higher capacities anyway.
You might have overlooked it, but in the specs I mentioned the GPU has 4GB VRAM, so VRAM isn't an issue. Here is the screenshot. Weird how the F3 menu shows 100% GPU usage but it's actually using 72%. The frame time graph also isn't showing up properly.
Oh, yeah, I did miss that. I presumed it was possibly 1 GB or maybe 2 GB if it was a fancier variant. I didn't know those were ever paired with 4 GB since it seems like such a waste for it. In that case, you're probably not exhausting VRAM unless you push the render distance very high, but do keep a watch on it.
The in-game GPU utilization and what other monitoring methods show may not always match up due to how they poll/measure things.
It's odd that the frame time graph doesn't show up and that the internal server graph does show up but is empty (even though the F3 menu shows it's ~9 milliseconds at that point). I've never seen that. But it's fine; I mostly wanted to see the frame time graph to ensure the captured moment wasn't happening during some outlier spike, but I trust you that this is all it's giving for "baseline" performance.
That screenshot does indeed suggest the GPU is the limiting factor, which makes sense because that CPU is capable of a lot more performance than that as a baseline (in fact, even a Core 2 Duo is, if that puts it into perspective, despite being "unplayable" because its non-baseline performance when loading chunks is too erratic). So yeah, you probably actually do have something going on with the GPU limiting it, at least in modern versions. It seems comparable in performance to a GT 430, probably because the Fermi variant is the same thing rebadged.
Go into the nVidia drivers and set "power management mode" to "prefer maximum performance" and see if that does anything. Probably not, but it's worth trying here. What that does is set the GPU to run at boost clocks as its minimum, and traditionally, Minecraft has had some issues with this. If it doesn't work, then set it back to default, as you don't want it running full speed all the time for no reason.
If it is just modern versions being heavier, you can probably work around this by exchanging OptiFine for Fabric/Sodium, because more performant rendering is specifically what they do, so you might have a solution with that. I expect your best chance of improving this to be with this method. If not, Fermi at that level of performance might just be too far gone at this point and you might need a faster GPU. It's really, really underpowered compared to the rest of that system (it's not much better than integrated graphics from that era, and modern integrated graphics are faster), and it's not supported under current drivers anymore.
Edit: My mistake, I gave you the wrong key combination. You want F3+2 for the frame time graph and internal server graph. F3+3 is apparently ping, and it would be empty because you're on a single player world.
You're also using antialiasing, which will increase the demand on the GPU.
1. I already set power management to "Prefer maximum performance" in NVIDIA control panel. I prefer keeping everything running at MAXIMUM performance (or even more than maximum performance, namely by overclocking) all the time.
2. Using Fabric/Sodium isn't really a viable option for me, because I am a long time forge user. I also need the "Dynamic lighting" and zoom feature of Optifine.
3. But I don't have antialiasing enabled? It's turned off (it doesn't allow me to turn it on with fabulous graphics). I turned on "FXAA 4X" under "shaders" but shaders too aren't enabled (because of fabulous graphics).
4. OK, I will post another screenshot with F3+2 menu.
It may or may not be a desired solution, but at least try it with Sodium and see what the results are. I also use OptiFine myself for two particular reasons, despite Sodium having some clear benefits, so I get wanting to stick with it for a particular reason. But trying Sodium just for troubleshooting purposes will help us see if the hardware performs as it should on a more optimized but still pretty "vanilla-ish" configuration.
FXAA is antialiasing, which is why I mentioned that. It's typically a pretty light form of antialiasing, but it will increase the GPU demand on what seems to already be a source of performance limits, so it's worth trying to disable.
And while FXAA usually is light on performance demand, OptiFine's way of implementing its normal antialiasing has such a severe performance impact that I consider it unusable. And oddly, it seems CPU limited and not GPU limited. I think the FXAA option is a vanilla shader, so whether it's using fancy lights or not, it may be using "shaders". I think OptiFine changed some of its antialiasing setting locations (and maybe implementation methods?) a bit since I last played, so maybe some of this is a bit out of date, but the point is, try with all forms of antialiasing disabled.
Same with fabulous graphics; this is a well-known cause of performance regression. Sodium doesn't even offer it, because there's simply not a way to implement what fabulous graphics does without severe performance implications.
Whether you test with OptiFine or Sodium (I'd suggest trying both), you need to troubleshoot by removing these "extras" like antialiasing and fabulous graphics from the equation, to at least establish a baseline and see where the severe performance loss is coming from.
You're saying "older versions" needed less, but then you immediately move towards using your mod for examples? So you're not talking about old versions then.
Then why did the creator of Optifine make this suggestion as recently as 1.6.4?
Launch Minecraft with less memory (yes, really). Usually it does not need more than 350 MB and runs fine on all settings with the default texture pack.
Fact is, how could I even make the game use so much less memory when the vast majority is being used to store millions of blocks in loaded chunks? That simply cannot be optimized (unless you compress the data in memory, and then you have all the issues of quickly reading and writing to it). In fact, the largest optimization I made by far and away was simply deleting this line of code (the game keeps this allocated until a client-side out of memory error is thrown, presumably so there is enough memory to display the memory error screen, but at the same time it quits the current world and clears various other memory, and I saw no issues when intentionally forcing an OOM; plus, if it happens server-side the game crashes either way):
/** A 10MiB preallocation to ensure the heap is reasonably sized. */
public static byte[] memoryReserve = new byte[10485760];
Only a 10 MB reduction, but as noted that easily offsets the impact of the hundreds of features I've added, which is exactly where I think newer versions and most mods went wrong, besides how much more absurdly complex their code seems to be to do the same thing (I've seen people recommending gigabytes for modpacks for Beta 1.7.3, which itself should already be much more lightweight than 1.6.4 because there is no separate internal server loading a second copy of the world, something which I have not changed, as there are mods for versions like 1.6.4 that remove it for "true singleplayer"). Indeed, here is a screenshot and JVM profiler of vanilla 1.6.4 at max settings; memory usage was about 111 MB, of which about 61.5 MB was used by loaded chunks:
For comparison, these are the profiler results for TMCW at maximum settings (which is 16 chunks instead of 10, but without spawn chunks, which ends up loading about twice as many chunks) - memory usage is now about 164.4 MB (so actually higher, the amount used by chunks is 137.6 MB, over twice as high, as expected from the increased number of chunks loaded and illustrating there is no "magic" that significantly reduced it because you simply can't without some sort of fancy in-memory compression); the real difference is seen in the CPU usage (generally lower and stabilizing much faster after loading chunks, the spike was when I increased the render distance from 8 to 16, with chunks past that not having been generated yet):
The real comparison between these comes when you look at what is left after chunks - vanilla used 49.8 MB while TMCW used 26.8 MB, so I have indeed significantly reduced the baseline memory usage by nearly 50%, but it isn't very noticeable when including loaded chunks, and since terrain is higher on average you can expect them to use more (vanilla used 59.1 KB per chunk while TMCW used 64.7 KB), and this is far more variable than anything else; even if vanilla used 248 bytes per chunk render instance and TMCW used only 106 bytes this amounts to only 2.36 MB of additional memory at 16 chunks, about 9.5 at 32 if it were supported (or vanilla actually supported 16, you can see there are only 10816 "WorldRenderer" instances, not the 17424 "ChunkRenderer" that TMCW has).
You can go on and on about how TMCW is totally unrepresentative of older versions - well, it is, but for the complete opposite reason you keep claiming - because it should be consuming far more resources, and it does show this if you look at its save files, which are way bigger, reflecting the increased complexity of the world, and actually similar in size to those of modern (1.18+) versions (I know you posted some file sizes before):
Vanilla 1.6.4 ("Normal" was a more or less average region, "MaxHeight" was a Superflat world of solid stone to y=256 to illustrate how region files are only partially compressed, illustrated by the difference between the file and its zipped version, which in turn shows the "real" size of the chunk data contained within it):
TMCW (largest fully explored region; most are at least 8 MB, double the size of vanilla):
Then why did the creator of Optifine make this suggestion as recently as 1.6.4?
Why are you ignoring the part where I said you're making invalid comparisons?
I don't get it. You should be way smarter than this, yet time and time again you seem to do this; you fail to account for variables when making comparisons and it results in you attempting to make comparisons that have no meaning. Why? I thought it would be pretty basic knowledge, and for pretty obvious reasons, that such comparisons aren't proper.
I'm not even sure what any of that has to do with the thread anyway. The thread is about someone trying to find out why they have low performance. That's what we should be doing; helping find the cause of the low performance. And that's what I've been trying to do.
Personal opinions about Minecraft's memory use are neither here nor there. You already created a thread for discussing your thoughts on that, did you not? I'm not saying you can't bring it up in other discussions where it comes up, but support threads in particular should be kept to support efforts only, because otherwise we're being rude to the thread starter. I know you're passionate about certain things (so am I, and so are all of us), and I honestly love hearing you discuss the things you're passionate about, even if I don't always agree with them to the same extent, because it's usually interesting, but... there's a time and place (and a correct method for doing so, and invalid comparisons are not it).
It may or may not be a desired solution, but at least try it with Sodium and see what the results are. I also use OptiFine myself for two particular reasons, despite Sodium having some clear benefits, so I get wanting to stick with it for a particular reason. But trying Sodium just for troubleshooting purposes will help us see if the hardware performs as it should on a more optimized but still pretty "vanilla-ish" configuration.
FXAA is antialiasing, which is why I mentioned that. It's typically a pretty light form of antialiasing, but it will increase the GPU demand on what seems to already be a source of performance limits, so it's worth trying to disable.
And while FXAA usually is light on performance demand, OptiFine's way of implementing its normal antialiasing has such a severe performance impact that I consider it unusable. And oddly, it seems CPU limited and not GPU limited. I think the FXAA option is a vanilla shader, so whether it's using fancy lights or not, it may be using "shaders". I think OptiFine changed some of its antialiasing setting locations (and maybe implementation methods?) a bit since I last played, so maybe some of this is a bit out of date, but the point is, try with all forms of antialiasing disabled.
Same with fabulous graphics; this is a well-known cause of performance regression. Sodium doesn't even offer it, because there's simply not a way to implement what fabulous graphics does without severe performance implications.
Whether you test with OptiFine or Sodium (I'd suggest trying both), you need to troubleshoot by removing these "extras" like antialiasing and fabulous graphics from the equation, to at least establish a baseline and see where the severe performance loss is coming from.
So actually the purpose of this thread was to max out my GPU usage and ideally keep it above 90% all the time. I honestly don't have any problem playing below 40 FPS, but when I saw that my GPU wasn't being used and it could obviously do better than this, I posted this thread because I can't stand the feeling of performance being wasted. So I would consider fabulous graphics as a baseline. I will try to turn off antialiasing though, but I don't have any idea how it's still turned on. I will try disabling FXAA under shaders. Please don't get into a dispute!
So actually the purpose of this thread was to max out my GPU usage and ideally keep it above 90% all the time.
Yeah, that was the original intent of it, but I thought as the discussion went on you were then trying to get better performance. Utilization is a pretty arbitrary thing on its own.
I posted this thread because I can't stand the feeling of performance being wasted.
Consider this.
Let's say you force the game to almost always be limited by the GPU in order to ensure that its performance is not being "wasted". This is pretty much what you've seemingly done anyway. Now though, it will be the CPU that is sitting there with performance overhead that is not being realized. So you merely shifted from "wasting GPU performance" to now "wasting CPU performance".
You can't get around this. All PCs are "wasting performance" to a point, because every PC always has a bottleneck. This is the reality because we don't have infinite performance, and there is no static way that software loads hardware (it is variable), so something will always be the slowest link in the chain.
Instead of worrying about it existing, since there is nothing that can eliminate it anyway, only worry about it if it results in performance below the level you desire.
If you're okay with the performance you're getting now, and you seem to be, then there's no problems.
If you're not okay with the performance you're getting now, then you would need to figure out which part is limiting performance most (for you, it would be the GPU), and then you either alleviate the burden on that part (one example of this would be to lower the settings that increase GPU load), or replace it with a faster performing part.
So I would consider fabulous graphics as a baseline.
I really recommend using fancy, as I advised earlier. Fabulous is just "fancy, but with a whole lot more GPU load for one or two minor benefits". You're unlikely to even notice the visual difference, but you will notice the performance loss.
Your screenshot tells the same story as before; you're incredibly GPU limited. While there is no mythical "balance" to a PC, that is still a pretty severe disparity there. You've got that system paired with a graphics card that's holding the overall PC back to performance levels many, many times below what it could otherwise be achieving here (and it's probably similar or even worse in other games). But if you're okay with your current level of performance, then I don't see any action you need to take.
I said that I'm okay with playing below 40 FPS, but I'm NOT okay with stutters. I do know that every PC has a bottleneck, but I would rather have a GPU bottleneck than a CPU bottleneck. I also know that my CPU is NOT performing as it should, this i5 is actually a pretty decent CPU. I also tried fancy graphics, it improves the FPS, but my GPU usage still doesn't max out (rather it gets REDUCED to around 50%) and the game still stutters like crazy. I already posted about my poor CPU performance in techpowerup forums, but they didn't give me any answer. Please help me!
I could have bought a better GPU, like some mid range GTX, but they were all sold out in my country, probably due to the 2024 GPU shortage. So I had no option but to get this potato GPU.
Oh, I didn't see a Minecraft post on the TPU forums or I might have tried to answer it, haha.
The stutters are probably from the CPU and not the GPU, at least they likely are if they happen while moving around and loading/generating chunks, or if they coincide with a garbage collection (Orange spikes in the frame time graph). If it's the latter, you can try the new garbage collector to see if that helps, but everyone has different results on if this helps or not. Generally, I would expect that particular CPU shouldn't be stuttering too bad at that sort of render distance (maybe during the very opening moments of world creation I would expect it), especially if your frame rate is ~40 FPS as that should mask many of them. Elytra flight or fast flight in creative into ungenerated terrain might cause it too though. To some degree, expectations may have to be adjusted on this one. 1.8+ does terrain loading a lot faster than prior versions (and modern with Sodium, more so), but the silver lining to the older versions doing it slower was that they were generally more smooth as a tradeoff (at least between versions 1.3 and 1.6). Again, if you're willing to reconsider using Sodium, it generally helps when it comes to stutters, and it also helps when it comes to being limited by rendering performance, so that might very well be your best answer to both of your issues. It won't totally eliminate the stutters but it should improve it.
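(For reference, and purely as a hedged example: "trying the new garbage collector" just means swapping the collector flag in the launcher's JVM arguments. These are standard JVM flags rather than anything Minecraft-specific, which collectors are available depends on the Java version the launcher bundles, and the notes in parentheses aren't part of the flag.)

-XX:+UseG1GC (the default collector on recent Java versions)
-XX:+UseZGC (a newer low-pause collector, available on Java 15 and newer)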
I just tried 1.21 on my old PC to get a baseline of how much GPU the game might want now (sans Sodium, since that improves rendering) and the GPU utilization was actually higher than I expected. For clarity, this is on a GDDR5 GT 1030, not the DDR3 one, which is like a whole third slower in performance (blame nVidia for constantly giving two very differently performing GPUs the same name over the years to trick consumers).
I tried to get settled moments to get a baseline of graphics performance since I already know the game is not playable at all on this CPU, so don't let the apparent smoothness fool you; it stutters bad if I start moving, or sometimes even just sitting there. Anyway, I'm playing at half the render distance you are, and with a much faster graphics card, and it's still being highly used. Then again, I've seen a GPU much, much faster than this be highly used too. Vanilla rendering lacks optimization for sure.
The in-game metrics aren't always known to be accurate on this one though, and that is worth mentioning, so take these results with a grain of salt. Still, based on this, and based on the results you're actually getting, I echo my original suspicion that your GT 730 (or let's just call it a GT 430 since it is Fermi) is probably being overburdened. This is why I suggested trying to lighten the load on it, not increase it.
Oh, I didn't see a Minecraft post on the TPU forums or I might have tried to answer it, haha.
The stutters are probably from the CPU and not the GPU, at least they likely are if they happen while moving around and loading/generating chunks, or if they coincide with a garbage collection (Orange spikes in the frame time graph). If it's the latter, you can try the new garbage collector to see if that helps, but everyone has different results on if this helps or not. Generally, I would expect that particular CPU shouldn't be stuttering too bad at that sort of render distance (maybe during the very opening moments of world creation I would expect it), especially if your frame rate is ~40 FPS as that should mask many of them. Elytra flight or fast flight in creative into ungenerated terrain might cause it too though. To some degree, expectations may have to be adjusted on this one. 1.8+ does terrain loading a lot faster than prior versions (and modern with Sodium, more so), but the silver lining to the older versions doing it slower was that they were generally more smooth as a tradeoff (at least between versions 1.3 and 1.6). Again, if you're willing to reconsider using Sodium, it generally helps when it comes to stutters, and it also helps when it comes to being limited by rendering performance, so that might very well be your best answer to both of your issues. It won't totally eliminate the stutters but it should improve it.
I just tried 1.21 on my old PC to get a baseline of how much GPU the game might want now (sans Sodium, since that improves rendering) and the GPU utilization was actually higher than I expected. For clarity, this is on a GDDR5 GT 1030, not the DDR3 one, which is like a whole third slower in performance (blame nVidia for constantly giving two very differently performing GPUs the same name over the years to trick consumers).
I tried to get settled moments to get a baseline of graphics performance since I already know the game is not playable at all on this CPU, so don't let the apparent smoothness fool you; it stutters bad if I start moving, or sometimes even just sitting there. Anyway, I'm playing at half the render distance you are, and with a much faster graphics card, and it's still being highly used. Then again, I've seen a GPU much, much faster than this be highly used too. Vanilla rendering lacks optimization for sure.
The in-game metrics aren't always known to be accurate on this one though, and that is worth mentioning, so take these results with a grain of salt. Still, based on this, and based on the results you're actually getting, I echo my original suspicion that your GT 730 (or let's just call it a GT 430 since it is Fermi) is probably being overburdened. This is why I suggested trying to lighten the load on it, not increase it.
Ok, how do I use the new garbage collector? And no, it still stutters even when moving around in an existing world, and creating a new world is a nightmare. I wish I could show you a video, but I don't think I can post videos here. I seriously don't expect such performance from this CPU. Could this have something to do with my motherboard? Because my CPU performance is still below expectations in general, not only in Minecraft. And for techpowerup, I did get replies, but they didn't really answer my actual question. The post was about the CPU performance in general, and didn't really have a lot of mentions of minecraft.
This happens on all minecraft versions. The GPU usage just isn't maxed out, like my FPS is locked or something. Using shaders increases the usage by a bit, but still doesn't max out GPU. For mods, I only have Optifine.
If you need my system specs, just ask. Thanks in advance for replying.
A PC is a combination of many parts. Not all of these parts will be used to 100% of all of the time, because sometimes some other part will be the slowest chain in the system at that moment. A PC will use what it needs. Trying to throw more utilization at one part when it is instead another part that is the limiting factor isn't going to get you more performance.
Minecraft is typically CPU limited. As in "it's always the CPU" with this game. It's not a GPU heavy game at all. Even something like a GT 1030 could run this game at 4K with high frame rates no problem. The GPU is typically only a big factor at very high render distances and/or with shaders.
Your screenshot only shows so much information, but based on what is shown (and what I know about Minecraft and hardware), you're most likely CPU limited there. Yes, your CPU utilization may only be 19% but modern CPUs are multi-core so overall utilization is pretty irrelevant. You can be CPU limited as soon as one core is fully used.
Edit: From what I recall, fabulous graphics allows for translucency sorting and fixes some other glass/shading issues, but it comes at a pretty big performance cost. I'd recommend considering fancy if you want more performance too. While it won't matter when you're CPU limited, with how high your GPU utilization is there in just vanilla, there may be moments where you're GPU limited as a result of using that setting.
I'd also recommend lowering the simulation distance (between 8 and 12). That setting can really hurt the performance and add to the CPU demand. Your render distance is 16 so it's only effectively there, but still. If you're that CPU limited, it might help some.
I just read that minecraft (especially Java edition) doesn't really make use of multiple cores, and that's why the overall CPU usage is very low but GPU usage doesn't get maxed out because of single-core bottleneck. Is there any way to allocate more cores? I will try with lower simulation distance and see if it improves. I also have 8 GB of RAM allocated to the game, could that be the issue too? (I also read that something called "garbage collection" which causes stutters if you allocate too much RAM)
This is correct(ish), but it's not exclusively a Minecraft thing. Most "real time" applications like games are ultimately going to be limited by one main game thread, even if they split off a dozen other threads. All of those other spun off threads need to be synced, and if the main gameplay loop is limited, then so is the game. Games can spin off things like sound, AI, and other stuff, but there comes a point where you can't do that without greatly increasing complexity or sometimes you may even incur performance loss (such as from the aforementioned syncing). In other words, more cores aren't a linear, magic increase for performance for all types of software. Many modern games do tend to use more cores/threads, but they typically have heavier engines that are doing more (namely, running more AI, running physics, compiling shaders, etc.). In my observation, Minecraft doesn't have a whole lot going on where it uses a whole lot of cores/threads (relative to modern CPU core/thread counts) to a high extent except maybe when it's burst generating/loading a lot of terrain, but it does technically multi-thread to a smaller degree. Ultimately though, yes, it's very much typically limited by a single core first and foremost.
No. Software has to be coded to be multi-threaded. It can't be forced.
It's like if you're playing a game that is limited by CPU or GPU processing and you try and add more RAM to the PC. It won't help. You can't force utilization in areas where you have spare resources to overcome a limitation in another.
As a rule with PCs, "the system will use it if it needs it". So if it's not using it, but performance is lower than desired, then it's low for some other reason, and trying to force more use of what you have spare of won't do anything.
The RAM allocation is unlikely to be the issue, and modern Java versions/garbage collectors are a lot better about the "you can't allocate more RAM or it just spreads the garbage collections out but makes them more severe when they do occur" thing than it used to be. Even back in the days of Java 8, I never observed this to always be the case. The garbage collector would very regularly kick in before the RAM got near full, so in my experience, this was always overstated. At the same time, it could happen, and allocating more than you need is not beneficial so... there's no reason to do it.
What causes the stutter is that the game has to stop while the garbage collector is active, and this will take longer if it has more RAM to clear (it also takes longer the slower the CPU is).
My personal rule of thumb is that "vanilla-ish" Minecraft (no content mods, and a typical render distance of 16 give or take) shouldn't need over 3 GB to 4 GB allocated. 6 GB to 8 GB is more for situations where you're playing at high render distances (think 32), using Distant Horizons, using a lot of content mods, and/or "I have enough system RAM so the extra few GB being in use as insurance isn't hurting". Because allocating more RAM to the JVM will also increase memory use, and the collective game will use more than you allocate just to the JVM as heap space, so be sure to monitor that and make sure it's not pushing you close to your memory limits (don't allocate 8 GB to the JVM if you only have 8 GB RAM, basically).
Thanks. But I still don't understand something: if there is already a CPU bottleneck, why does using shaders (or going to graphically intensive areas like jungles) still increase the GPU load? Shouldn't it reduce the GPU load because the GPU will demand more data from the CPU and increase the bottleneck? For your information, I play at 1366x768 resolution, and here are my specs:
CPU: Intel core i5-11600KF (Not overclocked)
GPU: GT 730 4GB (Fermi version) (Overclocked to 851 MHz)
RAM: 2x8GB DDR4-3200 (From an unknown brand)
So as you can see here, surely the GT 730 won't be bottlenecked by the i5, right? I have seen the same CPU maxing out a GTX 1650 at 720p with 32 chunks render distance!
Using shaders increase the GPU load because they are still more demanding on the GPU than vanilla Minecraft is. Whether you are already limited by the CPU or not is irrelevant in that case. You are, after all, still asking the GPU to do more by having it render shaders, and you might actually drop performance further if the GPU becomes the limiting factor.
Okay, I'm a bit surprised. The frame rate shown in the first picture gave me the impression you were probably dealing with something much worse. That CPU should have absolutely no problem handling Minecraft better than your first picture suggests.
Show a full screenshot with the F3 menu showing, ideally with the frame time graph (F3+3). Try and take it while stationary and after chunks/terrain has loaded to rule out taking a snapshot at a particularly bad outlier moment.
That being said, 32 chunks is pretty heavy. Most CPUs will exhibit some stutter when generating/loading chunks, especially at high render distance, But your CPU should be good for reasonably higher render distances than 16 while only getting 37 FPS.
You have an "F" CPU which means no IGP so it can't be running on any integrated graphics, so we can skip that being a possible cause.
Your GPU is definitely the part that is out of place, but for Minecraft that shouldn't matter since it specifically wants a fast CPU and doesn't care as much about the GPU (outside shaders, which I honestly wouldn't even attempt on that). That being said, vanilla can be pretty bad at rendering specifically, so I'd recommend Fabric and Sodium and see if that helps.
Using the "fabulous" preset probably isn't helping. I'd echo my recommendation to switch to fancy.
The only other possible guess I'd have is VRAM being exhausted (that absolutely will drain performance, and vanilla absolutely will use a lot of it as the render distance increases). In your first screenshot at least it doesn't appear to be quite there yet, but if that's a 1 GB GPU, it's close enough that it might already be swapping. If you see shared VRAM in Task Manager going meaningfully above nothing, and if the "copy" GPU engine (right click in Task Manager and choose multiple engines for the GPU) is showing a lot of spiking activity when this happens, then this may be what's going on.
I'd still consider this to be a major issue considering older versions only need a tenth as much; of course, using my own extremely large mod as an example (just one mod but the number of mods is meaningless, what matters is how much content they add, 500 blocks/items, 100 biomes and all their features, etc is nothing to scoff at, at the same time, many don't need that much resources / code / memory, like a new stair/slab based on an existing block? No new textures (which are each only 1 KB of raw data anyway) and no new code, just a new instance of a generic "stair/slab" block whose fields set its base block, name, ID, etc, then a recipe, all amounting to a few hundred bytes of overhead per additional block (of course, 1.8's custom block models will add more overhead but it still shouldn't be that much as it is just a list of vertex positions and texture coordinates but I've seen claims to the contrary):
An analysis of the resource usage of a couple blocks; on the left is my custom flower pot block, which holds references to about 23 KB of data due to the custom renderer which renders them as a tile entity; on the right is a more typical block, using only 217 bytes (not including 1 KB for a texture and the code contained in "BlockMetadataOre", which is shared across all ore blocks, as is the code in the "Block" base class):
A couple biomes, which use 1-2 KB each (and less than since I took this since I changed the entries in the mob spawn lists to share the same objects across every list, so only unique mob spawns add more overhead):
Similar to blocks, items vary a lot in size depending on their complexity and uniqueness, with most being closer to the lesser examples shown:
Texture memory shows up under the "TextureManager" class, the number of "TextureAtlasSprite" indicates how many block/item textures there are (which is a lot less than it could be due to the way I use many textures, e.g. I recolor a grayscale bed overlay texture so they only have a total of 10 textures instead of 96, even more textures are saved by flipping them for stalagmites/stalactites). The memory used by chunk render data doesn't show up within the JVM at all, it is all held in native memory / VRAM:
1/10 as much memory means the garbage collector only needs to do 1/10 as much work, it only drops by 100 MB per collection (this is more meaningful than the overall amount, and is also mainly due to using -Xmn128M, removing this reduces it, and overall usage, not even sure if it is beneficial, this is just what the launcher used to have) and even then it takes over half a minute to reach the next collection for 2-3 MB/s (I've seen screenshots of modern versions approaching 1 GB/s of allocations, which would be nearly 10 times per second, even the more typical 50-100 MB/s when standing still or not moving quickly is still more than an order of magnitude higher than the example shown).
Modern systems / Java garbage collectors being able to handle this better or not, it is an indisputable fact that modern versions need more resources and don't run as well as they could due to how they have been coded (all the optimization mods like Sodium are proof of this), even the developer of Optifine claimed that 90% of memory usage was completely unnecessary and due to "industry standard coding practices" (how does this even help them? Even the vanilla 1.8 jar is still larger than TMCW despite adding a lot less new content relative to 1.6.4, with many, many more classes, so the code is clearly much more complex overall, I've looked through it and find it much harder to follow and people had already described the game's code as "spaghetti code" before 1.8).
TheMasterCaver's First World - possibly the most caved-out world in Minecraft history - includes world download.
TheMasterCaver's World - my own version of Minecraft largely based on my views of how the game should have evolved since 1.6.4.
Why do I still play in 1.6.4?
You might have overlooked it, but in the specs I mentioned the GPU has 4GB VRAM, so VRAM isn't an issue. Here is the screenshot. Weird how the F3 menu shows 100% GPU usage but it's actually using 72%. The frame time graph also isn't showing up properly.
I don't think it's as much of an issue as you're making it out to be.
You're saying "older versions" needed less, but then you immediately move towards using your mod for examples? So you're not talking about old versions then.
I recommended 3 GB to 4 GB (and sometimes you may need more!), but there's also plenty of people who get by on the default of 2 GB. Sometimes, some people run into the limit because sometimes 2 GB is cutting it close, and this is why I recommended raising it.
It's an apples to oranges comparison to say older versions needed a tenth of what modern versions do based on comparing "can use as little as" and "should almost always be enough for". I'm recommending 3 GB or 4 GB as a case of the latter, and I know for a fact that older versions can sometimes use more than you probably think. The last time I tried 1.6 with OptiFine and attempted a render distance of 24, with 2 GB allocated, the game seemed like it almost crashed on world creation due to lack of RAM (the game froze for ~3 seconds and RAM use was just at 2 GB when the world did load) and it was flirting within the 1.2 GB to 2 GB range the entire time after. The 1.6 foundation is not as light as you've convinced yourself simply because you see lower amounts at lower render distances and with your even more optimized mod (or because you've seen a few particular screenshot examples where it's sub-100 MB at that moment).
I'm not going to call Minecraft optimized (I did the exact opposite in this thread) but I wish you'd stop using incomparable extremes to make this point. Your mod is not older versions. A recommended "should always be enough" value is not to be compared to what an older version gets away with in reduced demand (low render distance) or cherry picked moments. I've pushed high render distances in versions like 1.6, 1.7, and modern versions, so I have an idea of what the various versions approximately gravitate towards using, and while RAM use has definitely gone up since a version like 1.6 (a dozen years ago, by the way), it's not by as much as you state if you compare like for like. It's gone up, but so has how much RAM people have. There's little point in trying to say the game should still have to be using double digit MB of RAM when PCs with sub-8 GB are in trouble anyway simply because Windows/all other modern software pretty much push PCs towards higher capacities anyway.
Oh, yeah, I did miss that. I presumed it was possibly 1 GB, or maybe 2 GB if it was a fancier variant. I didn't know those were ever paired with 4 GB since it seems like such a waste for that card. In that case, you're probably not exhausting VRAM unless you push the render distance very high, but do keep an eye on it.
The in-game GPU utilization and what other monitoring methods show may not always match up due to how they poll/measure things.
That's odd how the frame time graph doesn't show up and how the internal server graph does show up but is empty (but the F3 menu shows it's ~9 milliseconds at that point). I've never seen that. But it's fine; I mostly wanted to see the frame time graph to ensure the captured moment wasn't happening during some outlier spike, but I trust you that this is all it's giving for "baseline" performance.
That screenshot does indeed suggest the GPU is the limiting factor, which makes sense because that CPU is capable of a lot more baseline performance than that (even a Core 2 Duo is, if that puts it into perspective, despite being "unplayable" because its non-baseline performance when loading chunks is too erratic). So yeah, you probably do have something going on with the GPU limiting it, at least in modern versions. It seems comparable in performance to a GT 430, probably because the Fermi variant is the same thing rebadged.
Go into the nVidia drivers and set "power management mode" to "prefer maximum performance" and see if that does anything. Probably not, but it's worth trying here. What that does is set the GPU to run at boost clocks as its minimum, and traditionally Minecraft has had some issues with this. If it doesn't work, set it back to default, as you don't want it running at full speed all the time for no reason.
If it is just modern versions being heavier, you can probably work around this by exchanging OptiFine for Fabric/Sodium, because more performant rendering is specifically what they provide; I expect that to be your best chance of improving this. If not, a Fermi card at that level of performance might just be too far gone at this point and you might need a faster GPU. It's really, really underpowered compared to the rest of that system (it's not much better than integrated graphics from that era, and modern integrated is faster), and it's no longer supported under current drivers.
Edit: My mistake, I gave you the wrong key combination. You want F3+2 for the frame time graph and internal server graph. F3+3 is apparently ping, and it would be empty because you're on a single player world.
You're also using antialiasing, which will increase the demand on the GPU.
1. I already set power management to "Prefer maximum performance" in NVIDIA control panel. I prefer keeping everything running at MAXIMUM performance (or even more than maximum performance, namely by overclocking) all the time.
2. Using Fabric/Sodium isn't really a viable option for me, because I am a long-time Forge user. I also need the "Dynamic lighting" and zoom features of OptiFine.
3. But I don't have antialiasing enabled? It's turned off (it doesn't allow me to turn it on with fabulous graphics). I did turn on "FXAA 4X" under "shaders", but shaders aren't enabled either (because of fabulous graphics).
4. OK, I will post another screenshot with F3+2 menu.
It may or may not be a desired solution, but at least try it with Sodium and see what the results are. I also use OptiFine myself for two particular reasons, despite Sodium having some clear benefits, so I get wanting to stick with it. But trying Sodium just for troubleshooting purposes will help us see if the hardware performs as it should on a more optimized but still pretty "vanilla-ish" configuration.
FXAA is antialiasing, which is why I mentioned that. It's typically a pretty light form of antialiasing, but it will increase the GPU demand on what already seems to be the source of your performance limits, so it's worth trying to disable.
And while FXAA usually is light on performance demand, OptiFine's way of implementing its normal antialiasing has such a severe performance impact that I consider it unusable. And oddly, it seems CPU limited and not GPU limited. I think the FXAA option is a vanilla shader, so whether it's using fancy lights or not, it may be using "shaders". I think OptiFine changed some of its antialiasing setting locations (and maybe implementation methods?) a bit since I last played, so maybe some of this is a bit out of date, but the point is: try with all forms of antialiasing disabled.
Same with fabulous graphics; this is a well-known cause of performance regression. Sodium doesn't even offer it, because there's simply no way to implement what fabulous graphics does without severe performance implications.
Whether you test with OptiFine or Sodium (I'd suggest trying both), you need to troubleshoot by removing these "extras" like antialiasing and fabulous graphics from the equation, to at least establish a baseline and see where the severe performance hit is coming from.
Then why did the creator of Optifine make this suggestion as recently as 1.6.4?
Fact is, how could I even make the game use so much less memory when the vast majority is being used to store millions of blocks in loaded chunks? That simply cannot be optimized (unless you compress the data in memory, and then you have all the issues of quickly reading and writing to it). In fact, the largest optimization I made by far was simply deleting this line of code (the game keeps this allocated until a client-side out of memory error is thrown, presumably so there is enough memory to display the memory error screen; but at that point it quits the current world and clears various other memory anyway, and I saw no issues when intentionally forcing an OOM, plus if it happens server-side the game crashes either way):
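(Roughly, in MCP-deobfuscated naming; the exact field name and modifiers may differ:)

    public static byte[] memoryReserve = new byte[10485760]; // ~10 MB held only so the out-of-memory screen can be shown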
Only a 10 MB reduction, but as noted that easily offsets the impact of the hundreds of features I've added, which is exactly where I think newer versions and most mods went wrong, besides how much more absurdly complex their code seems to be just to do the same thing (I've seen people recommending gigabytes for modpacks for Beta 1.7.3, which itself should already be much more lightweight than 1.6.4 because there is no separate internal server loading a second copy of the world; that is something I have not changed, though there are mods for versions like 1.6.4 that remove it for "true singleplayer"). Indeed, here is a screenshot and JVM profiler of vanilla 1.6.4 at max settings; memory usage was about 111 MB, of which about 61.5 MB was used by loaded chunks:
For comparison, these are the profiler results for TMCW at maximum settings (which is 16 chunks instead of 10, but without spawn chunks; this ends up loading about twice as many chunks). Memory usage is now about 164.4 MB, so actually higher; the amount used by chunks is 137.6 MB, over twice as high, as expected from the increased number of chunks loaded, and illustrating that there is no "magic" that significantly reduced it, because you simply can't reduce it without some sort of fancy in-memory compression. The real difference is seen in the CPU usage, which is generally lower and stabilizes much faster after loading chunks (the spike was when I increased the render distance from 8 to 16, with chunks past that not having been generated yet):
The real comparison between these comes when you look at what is left after chunks: vanilla used 49.8 MB while TMCW used 26.8 MB, so I have indeed reduced the baseline memory usage by nearly 50%, but it isn't very noticeable once loaded chunks are included. Since terrain is higher on average, you can also expect TMCW's chunks to use more (vanilla used 59.1 KB per chunk while TMCW used 64.7 KB), and this is far more variable than anything else. Even if vanilla used 248 bytes per chunk render instance and TMCW only 106 bytes, that amounts to only 2.36 MB of additional memory at 16 chunks, and about 9.5 MB at 32 if it were supported (or if vanilla actually supported 16; you can see there are only 10816 "WorldRenderer" instances, not the 17424 "ChunkRenderer" instances that TMCW has).
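To put a rough floor under those per-chunk numbers, here is the back-of-the-envelope math for 1.6.4-era chunk storage (a sketch assuming the usual one-byte-ID-plus-three-nibble-array layout per section; only sections that actually contain blocks are allocated, which is why measured averages come in well under the worst case):

    public class ChunkMemorySketch {
        public static void main(String[] args) {
            int blocks      = 16 * 16 * 16;              // 4096 blocks per 16x16x16 section
            int idBytes     = blocks;                    // 1 byte per block for the block ID (LSB)
            int nibbleBytes = blocks / 2;                // 4 bits per block: metadata, block light, sky light
            int perSection  = idBytes + 3 * nibbleBytes; // 10240 bytes per allocated section
            int perColumn   = 16 * perSection;           // 163840 bytes (~160 KB) if all 16 sections exist
            System.out.println(perSection + " B/section, " + perColumn + " B/column worst case");
        }
    }

Average terrain only fills a fraction of those 16 sections, which is consistent with the measured ~59-65 KB per chunk above.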
You can go on and on about how TMCW is totally unrepresentative of older versions - well, it is, but for the complete opposite reason you keep claiming - because it should be consuming far more resources, and it does show that if you look at its save files, which are way bigger, reflecting the increased complexity of the world; they are actually similar in size to modern (1.18+) versions (I know you posted some file sizes before):
Vanilla 1.6.4 ("Normal" was a more or less average region, "MaxHeight" was a Superflat world of solid stone to y=256 to illustrate how region files are only partially compressed, illustrated by the difference between the file and its zipped version, which in turn shows the "real" size of the chunk data contained within it):
TMCW (largest fully explored region; most are at least 8 MB, double the size of vanilla):
Why are you ignoring the part where I said you're making invalid comparisons?
I don't get it. You should be way smarter than this, yet time and time again you seem to do this; you fail to account for variables when making comparisons and it results in you attempting to make comparisons that have no meaning. Why? I thought it would be pretty basic knowledge, and for pretty obvious reasons, that such comparisons aren't proper.
I'm not even sure what any of that has to do with the thread anyway. The thread is about someone trying to find out why they have low performance. That's what we should be doing; helping find the cause of the low performance. And that's what I've been trying to do.
Personal opinions about Minecraft's memory use are neither here nor there. You already created a thread for discussing your thoughts on that, did you not? I'm not saying you can't bring it up in other discussions where it comes up, but support threads in particular should be kept to support efforts only, because otherwise we're being rude to the thread starter. I know you're passionate about certain things (so am I, and so are all of us), and I honestly love hearing you discuss the things you're passionate about even if I don't always agree with them to the same extent, because it's usually interesting, but... there's a time and place (and a correct method for doing so, and invalid comparisons are not it).
So actually the purpose of this thread was to max out my GPU usage and ideally keep it above 90% all the time. I honestly don't have any problem playing below 40 FPS, but when I saw that my GPU wasn't being used and it could obviously do better than this, I posted this thread because I can't stand the feeling of performance being wasted. So I would consider fabulous graphics as a baseline. I will try to turn off antialiasing though, but I don't have any idea how it's still turned on. I will try disabling FXAA under shaders. Please don't get into a dispute!
Here is the screenshot.
Yeah, that was the original intent of it, but I thought as the discussion went on you were then trying to get better performance. Utilization is a pretty arbitrary thing on its own.
Consider this.
Let's say you force the game to almost always be limited by the GPU in order to ensure that its performance is not being "wasted". This is pretty much what you've seemingly done anyway. Now though, it will be the CPU that is sitting there with performance overhead that is not being realized. So you merely shifted from "wasting GPU performance" to now "wasting CPU performance".
You can't get around this. All PCs are "wasting performance" to a point, because every PC always has a bottleneck. This is the reality because we don't have infinite performance, and there is no static way that software loads hardware (it is variable), so something will always be the slowest link in the chain.
Instead of worrying about it existing, since there is nothing that can eliminate it anyway, only worry about it if it results in performance below the level you desire.
If you're okay with the performance you're getting now, and you seem to be, then there's no problems.
If you're not okay with the performance you're getting now, then you would need to figure out which part is limiting performance most (for you, it would be the GPU), and then you either alleviate the burden on that part (one example of this would be to lower the settings that increase GPU load), or replace it with a faster performing part.
I really recommend using fancy, as I advised earlier. Fabulous is just "fancy, but with a whole lot more GPU load for one or two minor benefits". You're unlikely to even notice the visual difference, but you will notice the performance loss.
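If the in-game toggle is being stubborn, the graphics setting also lives in options.txt in the game folder; as a rough sketch (assuming the key naming modern versions use, where 0 = fast, 1 = fancy, 2 = fabulous):

    graphicsMode:1

Change it with the game closed and it should start up on fancy.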
Your screenshot tells the same story as before; you're incredibly GPU limited. While there is no mythical "balance" to a PC, that is still a pretty severe disparity. You've got that system paired with a graphics card that's holding the overall PC back to performance levels many, many times below what it could otherwise be achieving here (and it's probably similar or even worse in other games). But if you're okay with your current level of performance, then I don't see any action you need to take.
I could have bought a better GPU, like some mid range GTX, but they were all sold out in my country, probably due to the 2024 GPU shortage. So I had no option but to get this potato GPU.
Oh, I didn't see a Minecraft post on the TPU forums or I might have tried to answer it, haha.
The stutters are probably from the CPU and not the GPU, at least if they happen while moving around and loading/generating chunks, or if they coincide with a garbage collection (orange spikes in the frame time graph). If it's the latter, you can try the new garbage collector to see if that helps, but everyone gets different results on whether it does. Generally, I would expect that particular CPU shouldn't stutter too badly at that sort of render distance (maybe during the very opening moments of world creation), especially if your frame rate is ~40 FPS, as that should mask many of them. Elytra flight or fast flight in creative into ungenerated terrain might cause it too, though. To some degree, expectations may have to be adjusted on this one. 1.8+ does terrain loading a lot faster than prior versions (and modern with Sodium, more so), but the silver lining of the older versions doing it slower was that they were generally smoother as a tradeoff (at least between versions 1.3 and 1.6). Again, if you're willing to reconsider using Sodium, it generally helps with stutters, and it also helps when you're limited by rendering performance, so that might very well be your best answer to both of your issues. It won't totally eliminate the stutters, but it should improve things.
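If you do want to experiment with that, it's just a launcher JVM argument; a minimal sketch, assuming by "new" we mean ZGC on the Java 17+ runtime modern versions use (allocation value is up to you):

    -Xmx4G -XX:+UseZGC

G1 (-XX:+UseG1GC) is what the stock launcher arguments typically use; the thing to watch either way is whether the orange garbage collection spikes in the frame time graph shrink.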
I just tried 1.21 on my old PC to get a baseline of how much GPU the game might want now (sans Sodium, since that improves rendering) and the GPU utilization was actually higher than I expected. For clarity, this is on a GDDR5 GT 1030, not the DDR3 one, which is like a whole third slower in performance (blame nVidia for constantly giving two very differently performing GPUs the same name over the years to trick consumers).
I tried to capture settled moments to get a baseline of graphics performance, since I already know the game is not playable at all on this CPU, so don't let the apparent smoothness fool you; it stutters badly if I start moving, or sometimes even while just sitting there. Anyway, I'm playing at half the render distance you are, and with a much faster graphics card, and it's still being heavily used. Then again, I've seen a GPU much, much faster than this be heavily used too. Vanilla rendering lacks optimization for sure.
The in-game metrics aren't always known to be accurate on this one, though, and that is worth mentioning, so take these results with a grain of salt. Still, based on this, and based on the results you're actually getting, I echo my original suspicion that your GT 730 (or let's just call it a GT 430, since it is Fermi) is probably being overburdened. This is why I suggested trying to lighten the load on it, not increase it.
Just contact support
Ok, how do I use the new garbage collector? And no, it still stutters even when moving around in an existing world, and creating a new world is a nightmare. I wish I could show you a video, but I don't think I can post videos here. I seriously didn't expect such performance from this CPU. Could this have something to do with my motherboard? Because my CPU performance is below expectations in general, not only in Minecraft. And for techpowerup, I did get replies, but they didn't really answer my actual question. The post was about CPU performance in general and didn't really mention Minecraft much.