As you can see here, this is OptiFine 1.15.2 (a preview version, so I know the comparison may be biased, but I have also tested vanilla Minecraft 1.14 and 1.15, with no shaders, loaded into a new world) running in the Greenfield Minecraft map (I removed most entities, because the map's extremely high entity count already caused lag on 1.14). Compared to an acceptable 61 FPS in this area on 1.14.4, 1.15.2 drops FPS by 32%, to roughly 41 FPS! I really thought Minecraft 1.15 was the "bug fixes update", but no: it bogs down even the most competent of gaming machines. I have a gaming laptop with an i7-8750H and a 1660 Ti (laptop version); I'm aware that Nvidia Optimus will decrease FPS, but FPS in survival Minecraft still hovers around ~80 FPS with SEUS Renewed.
Some people say that they prioritized chunk loading over FPS, but what benefit is better chunk loading when you cannot play the game in the first place?!
And worst of all, this still happens:
[attached screenshot, captioned "rip thread 2"]
The only way these FPS issues get fixed is if Mojang parallelises EVERYTHING across all cores: chunk rendering, mob AI, MOB PRESENCE (even without AI, 10,000 entities will lag the game), and everything else, to improve performance by a significant margin. I don't care if an entire update is dedicated to increasing FPS and making everything work with all cores, just like in Battlefield V.
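To illustrate the kind of thing I mean (a rough hypothetical sketch, not Mojang's actual code; the Entity interface and tick() method here are made up for the example), ticking independent entities on a per-core thread pool would look something like this:

    // Hypothetical sketch of parallel entity ticking; not Mojang's API.
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.Collectors;

    interface Entity { void tick(); }

    class ParallelEntityTicker {
        // One worker per core, so entity updates can spread across the CPU.
        private final ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        void tickAll(List<Entity> entities) throws InterruptedException {
            // Wrap each entity's update as a task and run the whole batch in parallel.
            List<Callable<Object>> tasks = entities.stream()
                    .map(e -> (Callable<Object>) () -> { e.tick(); return null; })
                    .collect(Collectors.toList());
            pool.invokeAll(tasks); // blocks until every entity has ticked
        }
    }

Of course, real mob AI has cross-entity interactions (pathfinding, collisions), so actually parallelising it needs careful synchronization, which is presumably why it isn't as simple as flipping a switch.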
Don't just focus on simple bug fixes; focus on increasing FPS for everybody, not decreasing FPS in favour of "better chunk generation".
Allocate less memory to Minecraft; try -Xmx4G in your JVM arguments. More RAM does not mean more FPS; that's a myth.
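For example, in the launcher's JVM arguments field (the exact field name varies by launcher), the relevant part is just:

    -Xmx4G

-Xmx caps the maximum heap size; a bigger heap doesn't speed anything up, it just gives the garbage collector more memory to sweep per pass, which can actually make pauses longer.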
This is basically a repost; I gave an explanation of why newer versions have poorer performance, despite having a lot more multithreading than older versions, in your other thread (I have worse specs, yet I can get 1000 FPS in 1.6.4, and around 400 with chunk updates maxed out, despite only a single thread being used to process everything on the client), including a link to a bug report regarding reduced performance in 1.15:
Also, while the report does mention changes to how chunk updates are balanced against FPS, FPS is reduced even with no updates occurring, due to changes to rendering. One comment says that they now render using triangles instead of "quads", which have been deprecated in OpenGL for well over a decade (modern GPUs can't even natively render quads, so the driver has to convert them into triangles); without indexing, each block face then has 50% more vertex data associated with it (6 vertices for two triangles instead of 4 for a quad). My own tests, made by switching 1.6.4 to render with triangles, confirm this (there is a setting in the code that switches the rendering mode, suggesting that Notch considered this, as it was in the original "Notch" renderer). Mojang likely switched because triangles are the "proper" way to render, even though they don't give a performance improvement, at least not on NVIDIA hardware (which we both have, and which generally has better OpenGL support), much as they switched to VBO-only rendering, even though (also deprecated) display lists may be faster in some situations, including when you have a large amount of relatively static data, especially on NVIDIA. The same thinking is also responsible for various other performance degradations, e.g. "The general trend is that the developers do not care that much about memory allocation and use "best industry practices" without understanding the consequences".
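To put a number on that 50% claim (a minimal sketch in plain Java with no OpenGL calls; the 8-floats-per-vertex layout is an assumption for illustration, not Minecraft's actual vertex format):

    // Compares per-face vertex data for legacy GL_QUADS vs unindexed GL_TRIANGLES.
    // The vertex layout (8 floats: position, texture coords, normal) is illustrative.
    public class FaceData {
        static final int FLOATS_PER_VERTEX = 8;

        public static void main(String[] args) {
            int quadVerts = 4;     // GL_QUADS: one block face = 4 vertices
            int triVerts  = 2 * 3; // GL_TRIANGLES: two triangles = 6 vertices
            System.out.println("quad face:     " + quadVerts * FLOATS_PER_VERTEX + " floats");
            System.out.println("triangle face: " + triVerts * FLOATS_PER_VERTEX + " floats");
            System.out.println("overhead:      "
                    + 100 * (triVerts - quadVerts) / quadVerts + "%"); // prints 50%
        }
    }

Note that indexed rendering (glDrawElements) lets the two shared corner vertices be reused, so the actual overhead depends on how the mesh is built; an unindexed triangle list is the worst case.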