I know the OptiFine mod kind of uses multiple cores, but the internal server itself isn't multithreaded. Why is this?
Are there any mods that make Minecraft multithreaded?
Like, one idea would be to have each core handle a set of neighbouring chunks. That would also make Minecraft better optimized for all kinds of computers.
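The idea above can be sketched in a few lines. This is purely hypothetical (none of these names are real Minecraft classes): chunks are mapped to worker threads by region, so neighbouring chunks, which interact through block updates, mostly end up on the same thread.

```java
// Hypothetical sketch of assigning chunks to worker threads by region.
// "RegionTicker" and the 4x4-region grouping are illustrative choices,
// not anything from the real game.
public class RegionTicker {
    // Map a chunk coordinate to one of `workers` threads, keeping each
    // 4x4-chunk region together on a single worker.
    static int workerFor(int chunkX, int chunkZ, int workers) {
        int regionX = Math.floorDiv(chunkX, 4);
        int regionZ = Math.floorDiv(chunkZ, 4);
        return Math.floorMod(31 * regionX + regionZ, workers);
    }

    public static void main(String[] args) {
        // Chunks (0,0) and (3,3) share a 4x4 region, so they land on the same worker.
        System.out.println(workerFor(0, 0, 4) == workerFor(3, 3, 4)); // prints true
    }
}
```

The hard part, which this sketch glosses over, is what happens at region boundaries: redstone, pistons, and mobs routinely cross chunk borders, so two workers would have to synchronize there, which is exactly where naive schemes break down.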
Before you reply, please know that I'm just curious and not an expert at all.
Because it is? Since 1.3 the game has run on two main threads, one each for the client and server, plus minor threads for things like file I/O; since 1.8 they have greatly extended the amount of multithreading:
Each dimension (Overworld, Nether, End) runs on a separate thread. Chunk rendering and chunk rebuilds are now multi-threaded to speed them up. Mob pathfinding is now multi-threaded, to alleviate previous slow-downs associated with it.
https://bugs.mojang.com/browse/MC-128163 (note all the "worldgen-worker threads"; 1.13 multithreaded world generation, the biggest single performance bottleneck in the game)
In 1.14 light computation got moved off the main thread on the server. What does that mean for performance in comparison to 1.13?
Not that this has helped much, due to Mojang's poor coding practices; my heavily modded 1.6.4 instance uses almost no resources compared to modern versions, only 5-10% of what my computer can provide, even with minimal threading. Even when flying around in Creative the server tick time is less than 10 ms, or 20% of the maximum, and that's in an extreme "Mega Forest" biome:
This is a graph from VisualVM of resource usage (not from the same time as above; this is more representative of normal gameplay); render distance was set to 16 chunks:
This is a true testament to how much I've optimized the game, especially lighting, all without any multithreading beyond what vanilla 1.6.4 has, or any modern OpenGL/Java features (vanilla 1.6.4 only needs OpenGL 1.2 and Java 6, and I have not changed that). Granted, performance could be improved further, but that would take a lot of work, which some people have actually undertaken; the "Sodium" mod, for example, goes far beyond just using VBOs and the like (which have been part of vanilla since 1.8, with versions as old as 1.7 supporting OpenGL 3 features if available; only since 1.17, which entirely uses a modern rendering pipeline instead of the OpenGL 1 fixed-function pipeline, is OpenGL 3 mandatory).
Note also that there are 16 threads active on the chart above, despite the game only having two main threads; some of these threads are used by the JVM itself, especially for garbage collection, so even for older versions a quad-core CPU would be better than a dual-core (obviously, other processes need CPU time as well).
That said, it does seem like the actual rendering (drawing to the screen) is not multithreaded due to issues with OpenGL, nor is the uploading of rendered chunk data:
It's already mostly parallel. The only part that isn't is the part that directly talks to OpenGL, and that's also the part that is no longer artificially capped in this pre-release. It can't reliably be made parallel because OpenGL doesn't officially support multithreading. While it works on some GPU drivers, it breaks on others (like 1.7.10 OptiFine multi-core chunk loading, which actually breaks on some hardware configurations).
As an illustration of how much of a bottleneck OpenGL is: I optimized the Java-side code ("render") to be 8 times faster when rendering Fancy leaves with smooth lighting, but the overall improvement was less than twofold due to the time spent uploading the vertex data to OpenGL ("draw"), which is where 1.8+ still bottlenecks, hence the following bug report (even with 8 threads you'd still have the same performance as making the Java-side code 8 times faster):
16x16x16 Fancy leaves in vanilla:
Chunk update took 35652 us (render: 16481, draw: 19170)
TMCW:
Chunk update took 20927 us (render: 1971, draw: 18955)
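The single-thread confinement described above is a standard pattern: mesh building runs on worker threads, but the "upload to OpenGL" step stays on one thread, since GL contexts can't reliably be shared. Here is a minimal sketch of that hand-off; the GL call itself is faked (in the real game this is where something like glBufferData would run), and all names are stand-ins.

```java
import java.util.concurrent.*;

// Sketch of building chunk meshes in parallel while confining the
// "GL upload" step to a single (calling) thread via a queue hand-off.
public class GlConfinement {
    static final BlockingQueue<float[]> pendingUploads = new LinkedBlockingQueue<>();

    // Build `chunkCount` meshes on worker threads, then drain the queue
    // on the calling ("render") thread only. Returns how many were uploaded.
    static int buildAndUpload(int chunkCount) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < chunkCount; i++) {
            final int chunkId = i;
            workers.submit(() -> {
                float[] vertices = {chunkId, 0f, 0f}; // stand-in mesh data
                pendingUploads.add(vertices);         // hand off to render thread
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);

        int uploaded = 0;
        while (pendingUploads.poll() != null) uploaded++; // the single-threaded "GL" part
        return uploaded;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(buildAndUpload(8)); // prints 8
    }
}
```

This is why the "draw" time above can't simply be split across 8 threads: no matter how many workers fill the queue, one thread still has to perform every upload.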
Also, it is quite disturbing that Mojang closed a bug report regarding performance issues in 1.15+ as "invalid", much as they did for similar issues after 1.8 (they just told you to get a new computer):
Also, I have no idea why they think 30 FPS is an acceptable minimum FPS target; absolutely not! I'd rather have slow chunk updates, which aren't even noticeable most of the time, only when moving quickly. (TMCW includes a "chunk update time" slider which changes the amount of time allotted to chunk updates. OptiFine has something similar, except it forces 1-5 chunk updates per frame even if the system can't handle that many, so FPS may be affected, while my version does however many can be performed in the allotted time, down to a minimum of 1 in both cases. Also, prior to 1.8 OptiFine had more options for chunk updates, including a "smooth" setting which splits a single chunk update into smaller pieces if it takes too long.)
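The "as many updates as fit in the allotted time, minimum 1" behaviour described above can be sketched like this. This is my guess at the logic, not TMCW's actual code, and the names are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of time-budgeted chunk updates: run as many pending updates as
// fit in the budget, but always at least one so progress never stalls.
public class ChunkUpdateBudget {
    static int runChunkUpdates(Deque<Runnable> pending, long budgetNanos) {
        long deadline = System.nanoTime() + budgetNanos;
        int done = 0;
        // Always perform at least 1 update if any are pending, then keep
        // going only while we're still inside the time budget.
        while (!pending.isEmpty() && (done == 0 || System.nanoTime() < deadline)) {
            pending.poll().run();
            done++;
        }
        return done;
    }

    public static void main(String[] args) {
        Deque<Runnable> pending = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) pending.add(() -> {});
        // Even with a zero budget, exactly one update runs instead of none.
        System.out.println(runChunkUpdates(pending, 0)); // prints 1
    }
}
```

Contrast this with a fixed "N updates per frame" rule: a time budget degrades gracefully on slow hardware, while a fixed count drags the frame time down with it.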
Likewise, vanilla could stand to have more quality options, such as separate controls for Fast and Fancy leaves (this is the single biggest difference between Fast and Fancy in terms of performance). Unfortunately, due to a bug in 1.15+ (why on earth has this not been fixed yet?!), Fast has no effect, since the game still renders the interior faces of leaves. Things like this are why I think the development team is incompetent; I discovered a similar issue in my own mod (the culling of packed ice was broken due to underlying changes in block rendering, which is significant when you have biomes which are entirely packed ice below the surface) and released a fix for it within a day, same as for various other bugs. Vanilla also used to have "fast" dropped items (apparently the 1.8+ rendering engine can no longer render simple 2D sprites, even in the inventory; items are full 3D models, which for a "2D" item means rendering at least 34 faces, a 16x16 crosshatch plus a front and back, instead of just one) and "fast" grass (which reduces the face count from 2 to 1 per exposed side; Fancy renders an overlay over the base texture), both of which were removed for some reason. Any bit helps, sometimes a lot in specific circumstances.
Apple has even deprecated OpenGL, with its lack of multithreading/asynchronous processing cited as a major reason.
I agree, 30 FPS does suck for this game, as it severely affects input from the player most of the time. It's also noticeable from a visual perspective, even without the FPS counter; I'd bet my life that I could tell the difference between a constant 30 FPS and 60 FPS.
I wouldn't be surprised if Minecraft does use 2 cores or more, but there is definitely at least a theoretical benefit to multithreading, especially when the jobs are divided between physical cores. As you well know, the saying is "many hands make light work", and it's definitely true in the case of parallel processing when an application is designed to take advantage of more cores.
I don't know how OpenGL works, but I do know that it is an API for graphics processors, and the reason OpenGL is used is simple: compatibility. To my knowledge it's also one of the reasons why Minecraft Java Edition is able to run on earlier versions of Windows. Had Minecraft been designed for DirectX 12 instead, it wouldn't work on PCs with video cards that don't support DX12. Since Windows 7 support was dropped in 2020, any DirectX version that comes after 12 will not be supported on Windows 7, meaning OpenGL and legacy DirectX will be the only options left for some people; Java Edition runs on OpenGL, so that's good for Java Edition players.
However, if OpenGL is causing problems for multithreading as you say, then Microsoft/Mojang would be wise to abandon it for newer versions of the game. It's only going to get worse as they keep adding more content. I've said (although I am not a programmer, whereas you are, having created your own mod) that quad-core CPUs are preferred even when the game only has 2 main threads. These days, anybody playing Minecraft and purchasing new hardware would be wise to get a system with 6 very fast CPU cores, such as those from AMD Ryzen, if they can afford it, so they won't need to upgrade for a long time.
This is a mod that multithreads Minecraft's tick execution. Crazy, I know. It's brought to you by the dark forces of coremodding, and works only as well as I've tested it.
This mod alters the serverside processing (be it on a dedicated server or the single-player integrated one) and dispatches all world, entity, tile entity and environment ticks to a thread pool, allowing for the parallel execution of Minecraft. Under testing this shows about a 20+% improvement in tick times Enabled vs Disabled in single player. Note that uninstalled vs disabled appears to be similar, but will have different time characteristics, as this does muck around with a lot of the game's execution. In single player I've seen this be closer to 50% on average, and in multiplayer it can possibly be even higher.
Theoretically, if this is developed enough (i.e. to the point of stability), it should allow more players in a single instance of a server, or more chunks to be loaded and processed in a modded game.
Note: This mod only multithreads tick execution; it will probably not help with framerate or anything like that.
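The approach the mod describes, dispatching every world's tick to a thread pool and waiting for all of them before the next tick, can be sketched roughly like this. This is my reading of the description, not the mod's actual code, and `World` is a stand-in interface:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of parallel tick dispatch: all worlds tick concurrently, but the
// server still waits for every one before starting the next game tick.
public class ParallelTicker {
    interface World { void tick(); }

    // invokeAll blocks until every submitted tick has finished, so the
    // tick as a whole completes before the next one begins.
    static void tickAll(ExecutorService pool, List<World> worlds) throws InterruptedException {
        List<Callable<Void>> tasks = new ArrayList<>();
        for (World w : worlds) tasks.add(() -> { w.tick(); return null; });
        pool.invokeAll(tasks);
    }

    static int countTicks(int worldCount) throws InterruptedException {
        AtomicInteger ticks = new AtomicInteger();
        List<World> worlds = new ArrayList<>();
        for (int i = 0; i < worldCount; i++) worlds.add(ticks::incrementAndGet);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        tickAll(pool, worlds);
        pool.shutdown();
        return ticks.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countTicks(3)); // prints 3
    }
}
```

The dispatching itself is the easy part; the "dark forces of coremodding" are needed because vanilla entities and tile entities freely touch shared state, which is exactly what makes running them concurrently unsafe without extensive patching.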
I don't know if it helps. I didn't notice a difference; however, my PC is garbage. I have a quad-core but onboard graphics, which is BAD.
The worst thing about my PC is that I can't even upgrade the graphics card because the mainboard is too old. I'll just use it until it breaks.
Since I only play Minecraft, I don't need to buy a better PC right now.
My projects:
-are abandoned for now. I might pick 'em up in the future.
For now I'm working on a private modpack that suits my own playstyle.
I am gonna stay in modded 1.12.2 until my potato dies. No mercy! :Q
If you were going from a dual core to a quad core you would notice a difference; as TMC explained earlier, your system needs extra resources to prevent the OS and other software from interfering with your game.
I do believe the claim that Minecraft uses 2 threads, not just from screenshots people provide, but because people generally seem to see an improvement when they move away from dual cores, except in situations where the dual core has much faster cores than the quad core. Having 2 cores for Minecraft and the extra 2 for the operating system is logical, IMO. A case where a quad core can be worse than a dual core would be something like comparing a Raspberry Pi 4 to an Intel Comet Lake; a 10th-gen i3 is still going to mop the floor with anything Pi computers can do.
And don't expect quad-core CPUs to be enough forever either; game developers, including Mojang, are looking at designing their games to take advantage of more threads, or parallel processing. Their official Bedrock server app does support 8 threads, which supports my claim here, and while it may not fully utilize them right now, in the future it will.
There's more to multithreading than rendering. If it's just a Minecraft server, then all it needs to do is manage I/O, calculations for redstone, and keeping track of mobs. You don't need a powerful GPU for the server itself, just the client, which is why you hardly see anyone run a server with a high-end GPU. But it does make sense, if at all possible, for Mojang to increase the number of threads, because even smaller servers can have problems with Minecraft if players are in separate areas of the world causing mobs to spawn.
Chunk loading stands to benefit from extra CPU cores, at least in theory. In practice, though, as TMC pointed out, it's complicated, and the game isn't coded as efficiently as it should be. In my opinion this may have something to do with why they haven't designed powered rails to function at a longer distance than other redstone builds; they could, but they would need to fix everything else first.
"When life gives you lemons, don't make lemonade - make life take the lemons back! Get mad! I don't want your lemons, what am I supposed to do with these? Demand to see life's manager."
Why? They don't make Minecraft; Mojang does. Microsoft just owns Mojang.
But yes, Mojang would know the reasoning behind why the game isn't better optimized.
It would be better if Minecraft servers were capable of properly using more than 2 threads. 2 threads is understandable for small servers, or servers that haven't been modded, but it kind of defeats the purpose of servers supporting an almost unlimited number of players when the app isn't designed for that.
Not being able to take advantage of more than 2 threads can also cause small servers to lag if there's a lot of redstone and mob activity going on. There has to be a way Mojang can design the server app so it can divide the workload across more cores if need be. As time goes on, dual-core CPUs are becoming less relevant; there will come a time when dual-core CPUs will be rare because of how outdated they are and how much of a bottleneck they can create on modern computers. It's similar to how many newer computers now come with at least 8 GB of RAM, instead of only 4 or 2 GB.
It would be better if Minecraft servers were capable of using more than 2 threads properly.
As I've mentioned before, the game does use more than 2 threads, with a total of 16 threads in use when running singleplayer 1.6.4; this includes not just the game threads but JVM threads, especially parallel garbage collection, and as you know, garbage collection is the single biggest reason why Java is seen as a bad language for developing real-time games:
With a default memory limit of 1 GB (1000 MB) and working memory of about 200 MB, Java has to do a full garbage collection every 4 seconds, otherwise it would run out of memory. When running at 60 FPS, one frame takes about 16 ms. In order not to be noticeable, the garbage collection should run in 10-15 ms maximum. In this minimal time it has to decide which of the several hundred thousand newly generated objects are garbage and can be discarded and which are not. This is a huge amount of work, and it needs a very powerful CPU in order to finish in 10 ms.
One solution to this is, you guessed it, multithreaded garbage collection, which is enabled by default in modern JVMs (this still doesn't excuse the design decisions that have led newer versions to allocate so much memory).
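The numbers in the quote imply a specific allocation rate, which is worth working out. Using only the figures given above (1000 MB heap, ~200 MB live data, a full GC every 4 seconds):

```java
// Back-of-envelope arithmetic from the quoted figures: the 800 MB of
// headroom filling up every 4 seconds implies the allocation rate below.
public class GcMath {
    static double allocRateMbPerSec() {
        double headroomMb = 1000 - 200;   // garbage accumulated between full GCs
        return headroomMb / 4.0;          // 200 MB of new objects per second
    }

    public static void main(String[] args) {
        double perFrame = allocRateMbPerSec() / 60.0; // at 60 FPS
        System.out.printf("%.0f MB/s, about %.1f MB allocated per frame%n",
                allocRateMbPerSec(), perFrame);
    }
}
```

That's roughly 3.3 MB of short-lived objects per frame, which is why allocation rate, not just heap size, is what determines how hard the collector has to work.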
Other than that, every dimension runs on its own thread, as does mob pathfinding (a major source of lag in the past; 1.6.x is particularly notorious for zombies causing crippling server lag when they can't reach a target, though the real fix for this, as I implemented, was to optimize the pathfinding code itself, not just move it to another thread) and world generation (by far the biggest cause of performance issues, and a main reason why many servers pregenerate chunks and set a world border). World generation appears to use many separate threads, in fact one for each chunk being generated: search for "worldgen-worker" in this crash report, which includes a thread dump for each active thread. There are 20 "worldgen-worker" threads, which is quite ridiculous and probably actually hinders performance due to contention for resources (unless their CPU could handle that many threads, which I doubt seeing as they have Windows 7; and even then, the overall performance of a CPU is less than its singlethreaded performance times the number of cores, due to CPU I/O and memory bandwidth limits). There are also threads for networking and file I/O (also very important, especially if you don't have an SSD or fast drive; even 1.6.4 has threaded file I/O for this reason).
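The usual way to avoid the over-subscription described above is to size the pool from the core count rather than from the number of in-flight chunks. This is a guess at sensible sizing, not what vanilla actually does:

```java
import java.util.concurrent.*;

// Sketch: a bounded worldgen pool sized from the core count, reserving a
// couple of cores for the main client/server threads, instead of one
// thread per chunk being generated.
public class WorldgenPool {
    static int workerCount(int availableCores) {
        return Math.max(1, availableCores - 2); // leave room for client + server threads
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService worldgen = Executors.newFixedThreadPool(workerCount(cores));
        // 20 queued chunk tasks would now share a bounded pool of workers
        // rather than spawning 20 threads that contend for the same cores.
        worldgen.shutdown();
        System.out.println(workerCount(4)); // prints 2
    }
}
```

With a bounded pool, excess chunks simply queue; 20 runnable threads on a 4-core machine mostly buys context-switch overhead and cache thrashing, not throughput.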
As mentioned before, though, 1.6.4 (with patches for zombie pathfinding and other optimizations) doesn't even need multithreading, because the code is so efficient, unless you want dozens of players on a server:
Near the top-left you can see "tick: C: 0.66 ms, S: 1.33 ms", which are the average client and server tick times; 1.33 ms is only 2.66% of the maximum allowed time of 50 ms (20 TPS), and even during extremely intensive world generation it is still well below the limit: 8.74 ms is still only 17.5% of the allowable time:
This is an example of the severe performance issues that mods adding huge trees face (except for my own, despite adding trees that are even bigger). Since 1.14 this should be much less of an issue in vanilla because, as you may guess, lighting was multithreaded. Prior to optimizing lighting it took over 30 seconds to generate a Superflat world set to Mega Forest; now it takes less than 5, all because of less time spent on light updates (generating the trees themselves is insignificant). And yes, that's with full light diffusion, unlike the mod mentioned below, which disables it by default and gets a TPS of only 4, or 250 ms tick time, with it enabled. That means TMCW is around 30 times faster (assuming identical systems and render distance; at worst TMCW is still around 7-8 times faster if they had 2x CPU performance and 2x render distance; even the fastest CPUs available today are only about 2-3x faster than mine in singlethreaded performance):
In addition, by default, Conifer leaves don't diffuse light. While this is also configurable, enabling light diffusing is not recommended either because it causes mob spawns during the day, and also causes extreme lag. Generating a spawn within a Redwoods biome takes 5 seconds without light diffusing, and 40 seconds with light diffusing. With light diffusing enabled, there is also extreme worldgen lag in general, with the exploration of a single player dropping TPS to 4. So, keep your TPS high, and let leaves pass light through.
Of course, Mojang and most modders do not spend much time on optimizations, often just enough to reach a "playable state". I still develop TMCW as if I still played on my first computer from when I started playing Minecraft, which had mid-2000s hardware that would have absolutely no hope of even starting modern versions, and I assume that only 512 MB of memory has been allocated. Ever wonder why the development of TMCWv5 has taken 4 years so far (my first mention is from March 2018; I did release an update, TMCWv4.5, that adds some of its features and major engine rewrites)? A lot of that time has been spent on optimizations and refactoring code. I originally had some quite hacky code to avoid directly modifying classes modified by OptiFine to maintain compatibility, which I later dropped support for with subsequent code rewrites; then I rewrote the code a second time for the engine changes implemented in TMCWv4.5, which itself took a couple of months, essentially by adding the features to TMCWv4.5. It is clear why Mojang isn't going to spend that much time on optimizations: people want regular updates. Mind you, not all of that time has been spent working on TMCWv5; I spend little time on developing mods when I'm playing on a world, like the past few months, as I spend all my daily "Minecraft time" on one or the other.
There's not a lot they can do about that; there's only so much a single core's performance can be improved. If clock speeds are increased too much, as is the case when overclocking, the CPU overheats, and CPUs tend to need more energy at higher frequencies. Also, there is a physical limit to the number of transistors that can fit onto a chip, so even increasing performance that way has a limit. Multicore CPUs exist because of these limits. As you are fully aware, even hyperthreaded cores don't have unlimited processing power, although they are more efficient than cores that are not hyperthreaded.
This is the whole reason why dividing up the workload across multiple CPU cores is important. I do agree with your point that garbage collection can benefit from this, but there's so much more in the game than memory management that multithreading can be applied to, like the ticking of redstone and mob AI. You've said it yourself that mob AI pathfinding could run on its own thread, as well as every dimension. The other solution would be to have anything related to player data and chunk loading handled by its own CPU core, independently from the rest, if it can be done. This way servers with dozens of players would run efficiently and would not experience lag spikes nearly as often, unless their internet or network connection is too slow or has high latency (although it's doubtful latency issues would be experienced on a private home network through an ethernet connection at a LAN party).
The limits of single-threaded performance are the entire reason why the issue of multithreading gets brought up.
Obviously you don't need multithreading for things like music players, photo viewing or word documents, but some applications are too demanding for even a dual-core CPU to run comfortably. Minecraft can run on dual-core CPUs, but you may experience problems unless a separate system is handling the dedicated server application; it's not recommended to run the server app on the same system you're gaming on.
You've said it yourself that mob AI pathfinding could run on its own thread, as well as every dimension
No, I've said that this is already the case ever since 1.8: not "could" but "does". They appear to use many separate threads (as I pointed out before, 1.13+ may use upwards of 20 separate threads for world generation, which would be one thread per chunk; obviously, this would not be the case for mobs, as having hundreds of threads would do no good; even 20 is excessive unless you have a Ryzen Threadripper or similar):
Each dimension (Overworld, Nether, End) runs on a separate thread.
This makes it so that the performance in one dimension is independent of the performance in all others. Chunk rendering and chunk rebuilds are now multi-threaded to speed them up. Mob pathfinding is now multi-threaded, to alleviate previous slow-downs associated with it.
But somehow it doesn't seem to help, because of Mojang's atrocious coding practices, as I've tried explaining countless times, along with examples of how my own modded version runs literally dozens of times faster despite not using multithreading (beyond a server and client thread, plus file I/O and GC). Here is yet another example, from back when I had my old computer: 1.6.4 (this was long before any of my large-scale refactorings, so this was pretty much vanilla performance) had a tick time of 8 ms, while 1.8 was around 70 ms, with constant server lag as a result, even at a low render distance and while not moving around. That's how much more resource-intensive the game became over just two updates:
This comment on a thread about lag in 1.8 pretty much sums it up; Mojang seem to be using industry-standard/taught programming practices, not the ones you need for a real-time game:
The techniques taught in programming classes are correct... for some applications. Even most applications in the world of software design, arguably. However, real-time applications are a niche area with completely different approaches and constraints. And video-games are basically real-time applications. It seems like programming education needs to have more of a focus on real-time design, and how it differs from the "standard" software engineering approaches.
This comment by the creator of Optifine applies to the game in general:
The old Notch code was straightforward and relatively easy to follow. The new rendering system is an over-engineered monster full of factories, builders, bakeries, baked items, managers, dispatchers, states, enums and layers. Object allocation is rampant, small objects are allocated like there is no tomorrow. No wonder that the garbage collector has to work so hard.
The multithreaded chunk loading is crude and it will need a lot of optimizations in order to behave properly. Currently it works best with multi-core systems, quad-core is optimal, dual-core suffers a bit and single-core CPUs are practically doomed with vanilla. Lag spikes are present with all types of CPU.
Also, most of the size increase of the Minecraft jar since 1.8 is due to such coding practices, not actual new content; all of my own content adds up to a fraction of the size of the equivalent number of features added since 1.6.4 (interestingly, the size of the Minecraft jar decreased between 1.5 and 1.6; in fact, 1.7.10 is still smaller than 1.5.2 despite having more content). Bigger code is by itself not necessarily bad, but it is indicative of increased complexity to do the same things, as I demonstrate in this post, and it puts more pressure on memory bandwidth and the CPU cache (effective use of the latter is extremely important due to how much faster CPUs have become compared to system memory, which can take 100 or more cycles to access; likewise, 1.8+'s memory allocation rate has to be bad for caching and memory bandwidth).
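The "small objects are allocated like there is no tomorrow" complaint quoted above boils down to a simple pattern. Here is an illustrative contrast (hypothetical classes, not real Minecraft code): a hot loop written around one reused mutable position object allocates nothing per block, whereas allocating a fresh immutable position per lookup, the style criticised in 1.8+, produces garbage on every iteration.

```java
// Illustration of allocation-free hot-loop style: one mutable position
// object is reused for every block instead of constructing a new one.
public class AllocationChurn {
    static final class MutablePos {
        int x, y, z;
        MutablePos set(int x, int y, int z) { this.x = x; this.y = y; this.z = z; return this; }
    }

    // Visits size*size block columns with zero allocations inside the loop.
    static long sumHeights(int size) {
        MutablePos pos = new MutablePos();
        long sum = 0;
        for (int x = 0; x < size; x++)
            for (int z = 0; z < size; z++)
                sum += pos.set(x, 64, z).y; // no 'new' per block
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumHeights(16)); // 16*16 columns at height 64 -> prints 16384
    }
}
```

The reused-object style is uglier and easier to misuse, which is presumably why "industry-standard" code avoids it, but in a loop that runs for every block of every chunk, per-iteration allocation is exactly what keeps the garbage collector busy.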
It would require an entire rewrite of the code. All mods and plugins would stop working because the core infrastructure would have changed; they're kind of stuck with it, to my knowledge. However, nothing stops you from using "Minestom", which lets you code a Minecraft server from scratch and thus make it multi-threaded. Also, I believe Mojang/Microsoft doesn't care for the Minecraft Java community as much as it does for Bedrock. I'm not sure if Bedrock is multi-threaded either.
Correct me if I'm wrong but it is my basic understanding.
But the game not being optimized for multithreading probably has something to do with why you can get massive slowdowns from garbage collection; the more data a single thread has to work through, the harder it is for the processor to keep up. When playing Java Edition I just leave the default 2 GB allocated, because allocating more doesn't seem to help.
Because I don't like the ore generation in the Caves and Cliffs update, I may have to go back to Java Edition for my Survival playthroughs and stop playing on Bedrock Edition, since Bedrock doesn't allow you to lock the game to an older version, for example the 1.16 Nether Update.
This also means I am going to lose the benefits of bedrock edition, and to get it to run smoothly I'd have to improvise and run it with 16 chunks render distance instead of 32. It's possible Microsoft may kill off Java edition someday, but it's more likely that they're just going to make bedrock edition even worse with their updates than that happening, so I'm taking a calculated risk here. My reasoning is if their intention was to end Java edition they wouldn't have bothered with the account migration, but it's clear for the time being they intend on keeping Java edition going.
It would require an entire rewrite of the code. All mods/plugins would stop working because the core infrastructure would have changed.
Not like this hasn't happened several times before - wonder why so many mods never updated past 1.7.10? Mojang completely rewrote the rendering engine, supposedly to make it better. The only real benefit is that it is easier to add new block models; previously, as I do myself, you had to hardcode them in, though since most blocks used the same "renderCuboidBlock" method with different dimensions it wasn't as complex as you might think - fences, for example, render 1-5 cuboid shapes by calling the method once for each piece. A similar thing happened after 1.12.2 when world generation was rewritten, then the rendering engine was rewritten again in 1.15, then 1.17, and so on - most of the code is completely different from what it was prior to 1.8, or even 1.7 (1.7 started the transition away from numerical IDs: instead of code like "if (id == Block.stone.blockID)" you now had "if (block == Block.stone)"). For comparison, my own code looks like "if (id == BlockStates.stone)", where "BlockStates.stone" is a static final int constant set to 1, the numerical ID for stone, and is inlined at compile time to "if (id == 1)", eliminating the need to reference an object; likewise, "BlockStates.stone_granite" has the value 257, or 1 + 1 * 256 (note that 1.8+ uses "block state" objects, but they are only similar in name).
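Based on the description above, the packed-ID scheme can be sketched like this (field names are modeled on the post, but the code itself is only illustrative): the block ID and metadata share one int, so a block test compiles down to a plain integer comparison with no object dereference.

```java
// Illustrative version of the packed numeric IDs described above:
// state = blockId + metadata * 256, so stone (ID 1) with metadata 1 is 257.
public final class BlockStates {
    private BlockStates() {}

    public static final int stone = 1;                    // stone's numerical ID
    public static final int stone_granite = 1 + 1 * 256;  // = 257, as in the post

    public static int pack(int blockId, int metadata) { return blockId + metadata * 256; }
    public static int blockId(int state)  { return state % 256; }
    public static int metadata(int state) { return state / 256; }
}
```

Since the constants are `static final int`, the JIT (and even javac, for compile-time constants) can inline them, so "if (id == BlockStates.stone)" really does become "if (id == 1)".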
Also, the Wiki is far behind if it claims that world generation has only been multithreaded since 1.18 - it has been since 1.13, as evidenced by thread dumps in crash reports. Oddly, the page for 1.13 makes no mention of it, and otherwise the only mentions I can find are on various forums, such as this page ("Worldgen is now its own separate thread(s).") and my own replies on these forums (this isn't the first time I've seen the Wiki claim a "new" feature that is actually older).
Yet for some reason the game still runs like crap without Optifine, as shown in AntVenom's own tests.
It's evident to all of us that Mojang's coding practices are bad, and for a game that supposedly uses multithreading it sure doesn't make good use of those extra CPU cores. Without Optifine you're often forced to use low render distances to compensate for Java Edition's terrible rendering performance, and the chunk loading system isn't very efficient, which is why constant lag spikes are a problem for many people. You agreed that 30 FPS is unplayable, but that's exactly what I end up with at a 32-chunk render distance, despite having a machine that exceeds the recommended requirements officially put out by Mojang. In Bedrock Edition neither I nor my friends have this problem.
Optifine isn't even that good compared to real optimization mods like Sodium, which claims performance increases of up to 10-fold; the example screenshot shows FPS increasing from 35 to 478 at a render distance of 32 chunks, on a system that is close to a decade old with a CPU that had relatively poor performance even in its day (this was back when AMD CPUs were inferior to Intel's, particularly the FX series):
Sodium is a free and open-source rendering engine replacement for the Minecraft client that greatly improves frame rates, reduces micro-stutter, and fixes graphical issues in Minecraft. It boasts wide compatibility with the Fabric mod ecosystem when compared to other mods and doesn't compromise on how the game looks, giving you that authentic block game feel.
If you're coming from OptiFine, you can generally expect a significant improvement to performance over it, especially when combined with our other optimization mods. The Fabric community has also built many alternative mods which implement popular features from OptiFine, such as "Connected Textures".
They even included some of the fixes that I've added to my own mods:
Many graphical fixes for smooth lighting effects, making the game run better while still applying a healthy amount of optimization. For example, take this before and after of a white concrete room in vanilla, or this comparison while underwater.
Smooth lighting for fluids and other special blocks. (comparison)
Oddly enough, Bedrock appears to have had smooth lighting on water for a long time, yet they haven't added it to Java yet; as with many other things I have no idea why they haven't at least fixed the bugs (the underwater one was fixed in 1.17), despite bug reports spanning back nearly a decade (this one is nearly 8 years old, and as mentioned here every version with smooth lighting is broken) - with code examples of how to fix it (even if they have changed the rendering code since 1.7 surely it is similar enough that the same incorrect offsets are being used).
Another mod claims to achieve significantly greater performance than Bedrock, allowing for render distances of up to 256 chunks (I assume; they didn't give units, but surely it isn't blocks). Both of these make any "optimizations" I've made look like nothing at all - I don't even support a render distance higher than 16 chunks, and steady-state FPS isn't much higher than vanilla, with improvements to FPS and lag spikes during chunk updates being the main difference, due to improved update scheduling. Likewise, back when I used Optifine it had little impact on steady-state FPS on several different computers (so it couldn't have just been a particular setup; I believe Optifine's claim of "2x FPS" comes from back when it added OpenGL occlusion culling, which did double my FPS, but that has been included in vanilla for a long time and Optifine only makes some minor improvements to how it works). By contrast, these mods significantly increase FPS and use modern OpenGL, not fixed-function calls from the late 1990s which have to be emulated on modern systems (yes, 1.6.4 only needs OpenGL 1.2, released in 1998 and deprecated since 2008; I have not changed this):
You should see vastly improved frame rates and world render speeds, especially at large view distances. It comes with most of the usual improvements from the past, but also some major new work to simulation (server side), memory usage and startup. In my own tests FC2 could easily outperform everything else, including the W10 edition (MCPE/Bedrock).
The config allows increasing the view distance limit up to 256. Performance will likely tank once the game runs out of VRAM.
All rendering happens at full detail, and for a view distance setting of d, FC2 will render (d*2+1)² chunks (some bugs around the post-initial area and very large distances aside). The whole view distance is being simulated (ticked). MCPE doesn't do either.
My test environments are only a Nvidia GTX 780 and an Intel HD 2000, both with Intel quad core CPUs, your experience on other platforms may vary and is of elevated interest.
Note that an Intel HD 2000 can't even run the latest version (1.17 requires OpenGL 3.2, which is only supported by HD 4000 and later), and the GTX 780 is over 8 years old.
I think at some point I'm going to switch to TMC's mod and only play on that. 1.18 might well be the version to make me do that with the ore rarity.
It's pretty clear Mojang just follows the buy-new path on required specs for the game. "What, you can't afford a new computer? All true players do that" (snobbishness intensifies)
It is clear why Mojang isn't going to spend that much time on optimizations: people want regular updates.
This is exactly why, and also why they didn't release Caves and Cliffs as a single update that would've been fully released around this time. But even in splitting the update, people still aren't happy (I still see complaints everywhere that "1.18 adds nothing," and it's being treated as a complaint). Although Mojang's practices here are definitely terrible (the game has run worse with every update), there is definitely something to be said about its fanbase. Mojang has conditioned its players to expect regular huge content updates every 6-9 months, and by doing so it is immediately considered disappointing if they were to put out an "optimization update", for example (like 1.15). It's frustrating because they're ultimately going to do what they think the fanbase wants (though maybe not perfectly), because they're a business after all; yet this negatively impacts the game which just gets more bloated with each update.
It would be great if they spent an entire update focused on optimization and refactoring, not on "new stuff." But they're not going to do that because in their minds, it will be disappointing to players who consistently expect bigger and better every single time. This stems from the culture of instant gratification that has consumed most facets of modern society, such that we (speaking generally) expect the latest and greatest in the shortest amount of time, and the idea of patience and waiting causes us to completely lose interest (my laptop which I've only had for four years is considered old now, the same way old iPhone versions are considered obsolete immediately when a new one comes out, which is just absurd to me).
I think Mojang fears that should they do an optimization update, this is exactly what would happen - people would suddenly get bored, because that's what happens every time not even a month after a new update drops. Not even a day will go by after November 30th, and people will start talking about 1.19, which will put immense pressure on Mojang to churn out snapshots before people get bored. It's really unfortunate, but also Mojang conditioned players to be this way especially with the new tradition of announcing future updates at Minecon Live before the current one is even released, with no discussions on optimization at all. What I want to see is the fanbase getting on the same page about what's necessary in terms of optimization and actually improving what is already here, but that's never going to happen (I would love to be wrong though).
And so they can get the most use out of the computers they have now before they upgrade. I don't like it when updates are rushed just for the sake of pleasing fans; I'd much rather optimizations be done before an update is released so people don't end up with as many lag spikes. Lag can spoil a gamer's fun just as much as a game that is repetitive or boring, and because of all the updates they're rushing out the door with newer content, they're not patching existing problems with the game - like the broken world generation in Bedrock Edition, where if you go millions of blocks out from the center of the Overworld, the Far Lands haven't been patched to function like regular Overworld terrain and physics, so players fall right through them and die in the Void. If that were patched and an invisible barrier placed at 30 million X and Z, Bedrock Edition worlds would function more like Java ones.
I know the OptiFine mod kind of uses multiple cores, but the internal server itself isn't multithreaded. Why is this?
Are there any mods that make Minecraft multithreaded?
Like, one idea would be to have each core handle a set amount of neighbouring chunks. That would also make Minecraft more optimized for all kinds of computers.
Before you reply, please know that I'm just curious and not an expert at all.
I'm not sure what to make of it myself. I looked at the server properties file for the Bedrock Edition server software and it appears to support 8 threads, though whether it actually takes full advantage of them is another story. After scrolling through different forums I can't pin down a definitive answer on how many threads Java servers use, although the majority of posts suggest Minecraft Java servers use at least 2 cores, 3 at most, with one core doing the chunk loading and unloading. I am aware that Optifine did add multicore support for modded versions of old Minecraft, though.

However, even if Minecraft Java servers do in fact use only 2 cores, it's still wise to use a system with at least a quad-core CPU, or a dual-core with multithreading, in my opinion. Why? Because the operating system and other processes need CPU resources too; you've got to factor in everything running on your machine to make an informed decision. I use a gaming desktop and even my quad-core gets pressured by Minecraft sometimes, but again, as I said earlier, there's other stuff running in the background.

You're right though: at least in theory, Minecraft servers should benefit from having multiple cores to run on. Couldn't they design it so each core handles individual players if need be? Naturally, the more players there are on the server, in different areas of the world, breeding lots of animals or villagers and so on, the more demanding it's going to be.
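The idea floated above - dividing the world into regions and giving each to a worker - can be sketched with standard Java concurrency primitives. All names here are hypothetical, and the genuinely hard part (interactions that cross region boundaries, like redstone or entities walking between regions) is exactly what the sketch ignores:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: tick each region (a group of neighbouring chunks)
// on a pooled worker thread, and wait for all regions before the server
// tick is allowed to end.
public class RegionTicker {
    private final ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    /** Run every region's tick in parallel and block until all complete. */
    public void tickAll(List<Runnable> regionTicks) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(regionTicks.size());
        for (Runnable tick : regionTicks) {
            pool.execute(() -> {
                try { tick.run(); } finally { done.countDown(); }
            });
        }
        done.await(); // the tick can't end until every region has ticked
    }

    public void shutdown() { pool.shutdown(); }
}
```

This shows why the approach is appealing (ticks scale with core count) and also why it's hard: any state shared between regions would need locking or hand-off, which vanilla's code was never written for.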
Because it is? Since 1.3 the game has run on two main threads, for the client and server, plus minor threads for things like file I/O, then since 1.8 they have greatly extended the amount of multithreading:
Not that this has helped much due to Mojang's horrible coding practices; my heavily modded 1.6.4 instance uses almost no resources compared to modern versions, only 5-10% of what my computer can provide, even with minimal threading - even when flying around in Creative the server tick time is less than 10 ms, or 20% of the maximum - and that's in an extreme "Mega Forest" biome:
This is a graph from VisualVM of resource usage (not from the same time as above; this would be more representative of normal gameplay), render distance was set to 16 chunks:
This is a true testament to how much I've optimized the game, especially lighting, all without any multithreading beyond what vanilla 1.6.4 has, or any modern OpenGL/Java features (vanilla 1.6.4 only needs OpenGL 1.2 and Java 6, and I have not changed that). Granted, performance could be improved further, but that would take a lot of work - which some people have actually undertaken, such as the "Sodium" mod, which goes far beyond just using VBOs and the like (those have been part of vanilla since 1.8, with versions as old as 1.7 supporting OpenGL 3 features if available; only since 1.17, which entirely uses a modern rendering pipeline instead of the OpenGL 1 fixed-function pipeline, is OpenGL 3 mandatory).
Note also that there are 16 threads active on the chart above, despite the game only having two main threads; some of the threads are used by the JVM itself, especially for garbage collection, so even for older versions a quad-core CPU would be better than dual-core (obviously, other processes need CPU time as well).
That said, it does seem like the actual rendering (drawing to the screen) is not multithreaded due to issues with OpenGL, as is the uploading of rendered chunk data:
As an illustration of how much of a bottleneck OpenGL is, I optimized the Java-side code ("render") to be 8 times faster when rendering Fancy leaves with smooth lighting, but the overall improvement was less than twice as fast due to the time spent uploading the vertex data to OpenGL ("draw"), which is where 1.8+ still bottlenecks - hence the following bug report (even with 8 threads you'd still only get the same performance as making the Java-side code 8 times faster):
MC-123584 Updating blocks creates lag spikes proportional to geometry in chunk section (this has always been an issue but has become worse in newer versions)
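The "render"/"draw" split described above can be sketched roughly as follows (hypothetical names, not Mojang's code): geometry is built on worker threads, but finished meshes are queued and only uploaded on the thread that owns the OpenGL context, since GL calls can't safely be issued from multiple threads.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of multithreaded chunk rebuilds: the "render" step
// runs on worker threads, the "draw" (upload) step stays on the GL thread.
public class ChunkMeshPipeline {
    private final ExecutorService builders = Executors.newFixedThreadPool(
        Math.max(1, Runtime.getRuntime().availableProcessors() - 1));
    private final ConcurrentLinkedQueue<float[]> uploadQueue = new ConcurrentLinkedQueue<>();

    /** Worker side: build vertex data off-thread and queue it for upload. */
    public Future<?> requestRebuild(Callable<float[]> meshBuilder) {
        return builders.submit(() -> uploadQueue.add(meshBuilder.call()));
    }

    /** Render-thread side: drain the queue. A real renderer would call
     *  glBufferData here, which is why uploads remain single-threaded -
     *  exactly the bottleneck described above. */
    public int uploadPending() {
        int uploaded = 0;
        while (uploadQueue.poll() != null) uploaded++;
        return uploaded;
    }

    public void shutdown() { builders.shutdown(); }
}
```

No matter how many builder threads you add, everything funnels back through `uploadPending()` on one thread, so past a point extra cores stop helping.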
Apple has even deprecated OpenGL with lack of multithreading/asynchronous processing cited as a major reason:
Also, it is quite disturbing that Mojang closed a bug report regarding performance issues in 1.15+ as "invalid", much as they did for similar issues after 1.8 (they just told you to get a new computer):
MC-164123 Poor FPS performance with new rendering engine
Also, I have no idea why they think 30 FPS is an acceptable minimum FPS target - absolutely not! I'd rather have slow chunk updates, which aren't even noticeable most of the time, only when moving quickly. (TMCW includes a "chunk update time" slider which changes the amount of time allotted to chunk updates. Optifine has something similar, except it forces 1-5 chunk updates per frame even if the system can't handle that many, so FPS may suffer, while my version does however many updates fit in the allotted time, down to a minimum of 1 in both cases. Also, prior to 1.8 Optifine had more options for chunk updates, including a "smooth" setting which splits a single chunk update into smaller pieces if it takes too long.)
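A time-budget scheduler like the one described (TMCW-style, but this code is only a sketch with made-up names) is simple: keep taking pending updates until the per-frame budget runs out, but never do fewer than one.

```java
import java.util.Queue;

// Sketch of budget-based chunk update scheduling: run as many pending
// updates as fit in the allotted time, with a guaranteed minimum of one,
// instead of forcing a fixed count per frame regardless of cost.
public final class ChunkUpdateScheduler {
    private ChunkUpdateScheduler() {}

    /** Returns how many updates were performed this frame. */
    public static int runUpdates(Queue<Runnable> pending, long budgetNanos) {
        long deadline = System.nanoTime() + budgetNanos;
        int done = 0;
        while (!pending.isEmpty()) {
            pending.poll().run();
            done++;
            // The budget is only checked after each update, so at least one
            // update always runs even if the budget is already spent.
            if (System.nanoTime() >= deadline) break;
        }
        return done;
    }
}
```

The design choice here is that FPS degrades gracefully: an expensive update can overrun the budget slightly, but the frame never stalls on a fixed quota of updates it can't afford.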
Likewise, vanilla could stand to have more quality options, such as separate controls for Fast and Fancy leaves - the single biggest difference between Fast and Fancy in terms of performance. Unfortunately, due to a bug in 1.15+ (why on earth has this not been fixed yet?!), Fast has no effect since the game still renders the interior faces of leaves. Things like this are why I think the development team is totally incompetent: I discovered a similar issue in my own mod (the culling of packed ice was broken by underlying changes to block rendering, which is significant when you have biomes that are entirely packed ice below the surface) and released a fix within a day, same as for various other bugs. Vanilla also used to have "fast" dropped items (apparently the 1.8+ rendering engine can no longer even render simple 2D sprites, even in the inventory - items are full 3D models, which for a "2D" item requires rendering at least 34 faces, a 16x16 crosshatch plus a front and back, instead of just one) and "fast" grass (which reduces the face count from 2 to 1 per exposed side; Fancy renders an overlay over the base texture), both of which were removed for some reason - every bit helps, sometimes a lot in specific circumstances.
TheMasterCaver's First World - possibly the most caved-out world in Minecraft history - includes world download.
TheMasterCaver's World - my own version of Minecraft largely based on my views of how the game should have evolved since 1.6.4.
Why do I still play in 1.6.4?
I agree, 30 FPS does suck for this game: it severely affects input from the player most of the time, and it's noticeable visually as well, even without the FPS counter. I'd bet my life that I could tell the difference between a constant 30 FPS and 60 FPS.
I wouldn't be surprised if Minecraft does use 2 cores or more, but there is definitely at least a theoretical benefit to multithreading, especially when the jobs are divided between physical cores. As the saying goes, many hands make light work, and that's definitely true of parallel processing when an application is designed to take advantage of more cores.
I don't know how OpenGL works, but I do know that it is an API for graphics processors, and the reason it is used is simple: compatibility. To my knowledge it's also one of the reasons Minecraft Java Edition is able to run on earlier versions of Windows. Had Minecraft been designed for DirectX 12 instead, it wouldn't work on PCs with video cards that don't support DX12. Since Windows 7 support was dropped in 2020, any DirectX version after 12 will not be supported on Windows 7, meaning OpenGL and legacy DirectX will be the only options left for some people; Java Edition runs on OpenGL, so that's good for Java Edition players.
However, if OpenGL is causing problems for multithreading as you say, then Microsoft/Mojang would be wise to abandon it for newer versions of the game. It's only going to get worse as they keep adding more content. I've said, even though I am not a programmer (whereas you are, having created your own mod), that quad-core CPUs are preferable even when the game only has 2 main threads. These days, anybody playing Minecraft and purchasing new hardware would be wise to get a system with six very fast CPU cores, such as those from AMD Ryzen, if they can afford it, so they won't need to upgrade for a long time.
I found a mod that does something in that direction, I assume. It's MCMT.
https://www.curseforge.com/minecraft/mc-mods/mcmt-multithreading
This description is from the page:
This is a mod that multithreads Minecraft's tick execution. Crazy, I know. It's brought to you by the dark forces of coremodding, and works only as well as I've tested it.
This mod alters the serverside processing (be it on a dedicated server or the single-player integrated one) and dispatches all world, entity, tile entity and environment ticks to a thread pool, allowing for the parallel execution of Minecraft. Under testing this shows about a 20+% improvement in tick times Enabled vs Disabled in single player. Note that uninstalled vs disabled appears to be similar but will have different time characteristics, as this does muck around with a lot of the game's execution. In single player I've seen this be closer to 50% on average, and in multiplayer it can possibly be even higher.
Theoretically if this is developed enough (i.e. to the point of stability), it should allow more players in a single instance of a server; or more chunks to be loaded and processed in a modded game.
Note: This mod only multithreads tick execution; it will probably not help with framerate or anything like that.
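Mechanically, what the quoted description amounts to is something like the following (a bare-bones sketch, not MCMT's actual code): gather the independent tick jobs for worlds, entities and tile entities, push them through a shared pool, and block until the whole tick has finished.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of dispatching per-world/per-entity tick tasks to a
// thread pool and waiting for the full tick to complete, in the spirit of
// the MCMT description above.
public final class ParallelTick {
    private static final ExecutorService POOL = Executors.newWorkStealingPool();

    /** Run all tick tasks in parallel; propagate the first failure. */
    public static void tick(List<Callable<Void>> tasks) throws Exception {
        for (Future<Void> f : POOL.invokeAll(tasks)) {
            f.get(); // rethrows any exception thrown inside a task
        }
    }
}
```

As the mod's own description admits, the hard part isn't the dispatch itself but making the game's shared state safe to touch from multiple threads at once, which is why it needs "the dark forces of coremodding".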
I don't know if it helps; I didn't notice a difference, but my PC is garbage. I have a quad-core but onboard graphics, which is BAD.
The worst thing about my PC is that I can't even upgrade the graphics card because the motherboard is too old. I'll just use it until it breaks.
Since I only play Minecraft, I don't need to buy a better PC right now.
If you were going from a dual-core to a quad-core you would notice a difference; as TMC explained earlier, your system needs extra resources to prevent the OS and other software from interfering with your game.
I do believe the claim that Minecraft uses 2 threads, not just from screenshots people provide, but because people generally seem to see an improvement when they move away from dual-cores, except where the dual-core has much faster cores than the quad-core. Having 2 cores for Minecraft and the extra 2 for the operating system is logical, in my opinion. A case where a quad-core can be worse than a dual-core would be something like comparing a Raspberry Pi 4 to an Intel Comet Lake; a 10th-gen i3 is still going to mop the floor with anything a Pi can do.
And don't expect quad-core CPUs to be enough forever, either; game developers, including Mojang, are looking at designing their games to take advantage of more threads and parallel processing. Their official Bedrock server app does support 8 threads, which supports my claim here, and while it may not fully utilize them right now, in the future it will.
There's more to multithreading than rendering. If it's just a Minecraft server, all it needs to do is manage I/O, calculations for redstone, and keeping track of mobs. You don't need a powerful GPU for the server itself, just the client, which is why you hardly see anyone run a server with a high-end GPU. But it does make sense, if at all possible, for Mojang to increase the number of threads, because even smaller servers can have problems if players are in separate areas of the world causing mobs to spawn.
Chunk loading stands to benefit from extra CPU cores, at least in theory. In practice, though, as TMC pointed out, it's complicated, and the game isn't coded as efficiently as it should be. In my opinion, this may have something to do with why they haven't designed powered rails to function at a longer distance than other redstone builds: they could, but they would need to fix everything else first.
Ask Microsoft
It would be better if Minecraft servers were capable of properly using more than 2 threads. 2 threads is understandable for small or unmodded servers, but it kind of defeats the purpose of servers supporting an almost unlimited number of players when the app isn't designed for that.
Not being able to take advantage of more than 2 threads can also cause small servers to lag if there's a lot of redstone and mob activity going on. There has to be a way Mojang can design the server app to divide the workload across more cores if need be. As time goes on, dual-core CPUs are becoming less relevant; there will come a time when they are rare because of how outdated they are and how much of a bottleneck they can create on modern computers. It's similar to how many newer computers now come with at least 8 GB of RAM instead of only 4 or 2 GB.
As I've mentioned before, the game does use more than 2 threads - a total of 16 threads are in use when running singleplayer 1.6.4, including not just the game threads but JVM threads, especially for parallel garbage collection; and as you know, garbage collection is the single biggest reason Java is seen as a bad language for developing real-time games:
One solution to this is, you guessed it, multithreaded garbage collection, which is enabled by default in modern JVMs (though this doesn't excuse the design decisions that have led newer versions to allocate so much memory).
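You can actually see both of these from within Java itself; this snippet (my own illustration, using the standard `java.lang.management` API) prints the live thread count and the garbage collectors the JVM is running, which on a default modern JVM will be a parallel/concurrent pair:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcThreadInfo {
    public static void main(String[] args) {
        // Live thread count includes JVM-internal threads (GC workers, JIT
        // compiler threads), not just the application's own threads.
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("Live threads: " + threads);

        // Each bean represents one collector; a modern JVM's default
        // collector itself runs on multiple GC worker threads.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("Collector: " + gc.getName()
                    + ", collections so far: " + gc.getCollectionCount());
        }
    }
}
```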
Other than that, every dimension runs on its own thread, as does mob pathfinding (a major source of lag in the past; 1.6.x is particularly notorious for zombies causing crippling server lag when they can't reach a target, though the real fix, as I implemented, was to optimize the pathfinding code itself, not just move it to another thread). World generation (by far the biggest cause of performance issues, and a main reason why many servers pregenerate chunks and set a world border) appears to use many separate threads - one for each chunk being generated. Search for "worldgen-worker" in this crash report, which includes a thread dump for each active thread: there are 20 "worldgen-worker" threads, which is quite excessive and probably actually hinders performance due to contention for resources (unless their CPU could handle that many threads, which I doubt given they have Windows 7; even then, a CPU's overall throughput is less than its singlethreaded performance times the number of cores, due to I/O and memory bandwidth limits). There are also threads for networking and file I/O (also very important, especially if you don't have an SSD or fast drive; even 1.6.4 has threaded file I/O for this reason).
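As a rough sketch of what a bounded "worldgen-worker" pool might look like (purely illustrative Java, not Mojang's actual code; the thread names and chunk loop are made up), note how capping the worker count at the hardware thread count avoids the oversubscription problem described above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class WorldgenPool {
    public static void main(String[] args) throws Exception {
        // Don't spawn more workers than the CPU can actually run at once;
        // 20 threads on a 4-core machine mostly just fight over the cores.
        int workers = Math.min(20, Runtime.getRuntime().availableProcessors());
        AtomicInteger id = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers,
                r -> new Thread(r, "worldgen-worker-" + id.getAndIncrement()));

        // One task per chunk; the pool spreads them across the workers.
        List<Future<String>> results = new ArrayList<>();
        for (int cx = 0; cx < 4; cx++) {
            for (int cz = 0; cz < 4; cz++) {
                final int x = cx, z = cz;
                results.add(pool.submit(() ->
                        "chunk (" + x + "," + z + ") generated on "
                                + Thread.currentThread().getName()));
            }
        }
        for (Future<String> f : results) System.out.println(f.get());
        pool.shutdown();
    }
}
```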
As mentioned before though, 1.6.4 (with patches for zombie pathfinding and other optimizations) doesn't even need multithreading because the code is so efficient, unless you want dozens of players on a server:
Near the top-left you can see "tick: C: 0.66 ms, S: 1.33 ms", which are the average client and server tick times - 1.33 ms is only 2.66% of the maximum allowed time of 50 ms (20 TPS), and even during extremely intensive world generation it is still well below the limit - 8.74 ms is still only 17.5% of the allowable time:
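The arithmetic behind those percentages is simple: at 20 TPS, each tick has a 1000 / 20 = 50 ms budget. A quick sketch using the figures quoted above:

```java
public class TickBudget {
    public static void main(String[] args) {
        // 20 ticks per second leaves a budget of 1000 / 20 = 50 ms per tick.
        double budgetMs = 1000.0 / 20;
        double serverTickMs = 1.33;   // normal gameplay figure quoted above
        double worldgenTickMs = 8.74; // during intensive world generation
        System.out.printf("budget: %.0f ms%n", budgetMs);
        // prints 2.66% and 17.5%, matching the figures above
        System.out.printf("normal: %.2f%% of budget%n", 100 * serverTickMs / budgetMs);
        System.out.printf("worldgen: %.1f%% of budget%n", 100 * worldgenTickMs / budgetMs);
    }
}
```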
This is an example of the severe performance issues that mods adding huge trees face - except for my own, despite adding trees that are even bigger. Since 1.14 this should be much less of an issue in vanilla because, as you may guess, lighting was multithreaded. Prior to my optimizing lighting it took over 30 seconds to generate a Superflat world set to Mega Forest; now it takes less than 5, all because of less time spent on light updates (generating the trees themselves is insignificant). And yes, that's with full light diffusion, unlike the mod mentioned below, which disables it by default and gets a TPS of only 4, or a 250 ms tick time, with it enabled - meaning TMCW is around 30 times faster (assuming identical systems and render distance; at worst TMCW is still around 7-8 times faster if they had 2x CPU performance and 2x render distance, and even the fastest CPUs available today are only about 2-3x faster than mine in singlethreaded performance):
Of course, Mojang and most modders do not spend much time on optimization, often just enough to reach a "playable state". I still develop TMCW as if I still played on my first computer, which had mid-2000s hardware that would have absolutely no hope of even starting modern versions, and I assume that only 512 MB of memory has been allocated. Ever wonder why the development of TMCWv5 has taken 4 years so far (my first mention of it is from March 2018; I did release an interim update, TMCWv4.5, that adds some of its features and major engine rewrites)? A lot of that time has been spent on optimization and refactoring. I originally had some quite hacky code to avoid directly modifying classes changed by Optifine, to maintain compatibility; I later dropped Optifine support in subsequent rewrites, then rewrote the code a second time for the engine changes implemented in TMCWv4.5, which itself took a couple of months to port everything to. It is clear why Mojang isn't going to spend that much time on optimization: people want regular updates. Mind you, not all of that time has been spent working on TMCWv5; I spend little time developing mods while I'm playing on a world, as in the past few months, since I spend all my daily "Minecraft time" on one or the other.
TheMasterCaver's First World - possibly the most caved-out world in Minecraft history - includes world download.
TheMasterCaver's World - my own version of Minecraft largely based on my views of how the game should have evolved since 1.6.4.
Why do I still play in 1.6.4?
There's not a lot they can do about that; there's only so much a single core's performance can be improved. If clock speeds are pushed too high, as when overclocking, the CPU overheats, and CPUs need more energy at higher frequencies. There is also a physical limit to the number of transistors that can fit onto a chip, so increasing performance that way has a limit too. Multicore CPUs exist because of these limits. As you are fully aware, even hyperthreaded cores don't have unlimited processing power, although they are more efficient than cores without hyperthreading.
This is the whole reason why dividing up the workload across multiple CPU cores is important. I agree that garbage collection can benefit from this, but there's so much more in the game that multithreading could be applied to, like ticking redstone and mob AI. You've said yourself that mob pathfinding could run on its own thread, as could every dimension. Another option would be to have anything related to player data and chunk loading handled by its own CPU core, independently of the rest, if it can be done. That way servers with dozens of players would run efficiently and would rarely experience lag spikes, unless their internet or network connection is too slow or has high latency - though it's doubtful latency issues would show up on a private home network over an ethernet connection at a LAN party.
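The per-dimension idea can be sketched in a few lines (illustrative Java only, with the actual ticking logic stubbed out; none of these names come from the game): each dimension is submitted to its own worker, and the server waits on all of them before starting the next tick, since the dimensions have to stay in lockstep:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class DimensionTicker {
    // Stub standing in for a dimension's real per-tick work
    // (entity updates, redstone, block ticks, ...).
    static void tickDimension(String name) {
        // real ticking logic would go here
    }

    public static void main(String[] args) throws Exception {
        List<String> dimensions = List.of("overworld", "nether", "end");
        ExecutorService pool = Executors.newFixedThreadPool(dimensions.size());

        // Tick all dimensions in parallel...
        List<Future<?>> ticks = new ArrayList<>();
        for (String dim : dimensions)
            ticks.add(pool.submit(() -> tickDimension(dim)));

        // ...then barrier: every dimension must finish before the next tick,
        // which is why parallelism alone doesn't fix a single slow dimension.
        for (Future<?> f : ticks) f.get();
        System.out.println("tick complete");
        pool.shutdown();
    }
}
```

Note the barrier at the end: the slowest dimension still bounds the tick time, which is one reason multithreading alone can't rescue inefficient per-dimension code.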
The limits of single-threaded performance are the entire reason why the issue of multithreading gets brought up.
Obviously you don't need multithreading for things like music players, photo viewers, or word documents, but some applications are too demanding for even a dual-core CPU to run comfortably. Minecraft can run on dual-core CPUs, but you may experience problems unless a separate system is handling the dedicated server application; it's not recommended to run the server on the same system you're gaming on.
No, I've said that this is already the case ever since 1.8 - not "could" but "does", and they appear to use many separate threads (as I pointed out before, 1.13+ may use upwards of 20 separate threads for world generation, which would be one thread for each chunk; obviously, this would not be the case for mobs as having hundreds of threads would do no good; even 20 is excessive unless you have Ryzen Threadripper or similar):
But somehow it doesn't seem to help, because of Mojang's atrocious coding practices, as I've tried explaining countless times, along with examples of how my own modded version runs literally dozens of times faster despite not using multithreading (beyond a server and client thread, plus file I/O and GC). Here is yet another example, from back when I had my old computer: 1.6.4 (this was long before any of my large-scale refactorings, so this was pretty much vanilla performance) had a tick time of 8 ms while 1.8 was around 70 ms, with constant server lag as a result, even at a low render distance and while not moving around - that's how much more resource-intensive the game became over just two updates:
This comment on a thread about lag in 1.8 pretty much sums it up; Mojang seem to be using industry-standard/taught programming practices, not the ones you need for a real-time game:
This comment by the creator of Optifine applies to the game in general:
Also, most of the size increase of the Minecraft jar since 1.8 is due to such coding practices, not actual new content; all my own content adds up to a fraction of the size of the equivalent number of features added since 1.6.4 (interestingly, the size of the Minecraft jar decreased between 1.5 and 1.6; in fact, 1.7.10 is still smaller than 1.5.2 despite having more content). Bigger code is by itself not necessarily bad but it is indicative of increased complexity to do the same things, as I demonstrate in this post, and puts more pressure on memory bandwidth and the CPU cache (effective use of the latter is extremely important due to how much faster CPUs have become compared to system memory, which can take 100 or more cycles to access; likewise, 1.8+'s memory allocation rate has to be bad for caching and system memory bandwidth).
Java 1.18 runs on as many cores as your PC has available, minus 1; it can be set to use fewer in the text config file. The change log states:
change log: https://minecraft.fandom.com/wiki/Java_Edition_1.18
Performance
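The "all cores minus one" default is easy to reproduce from Java itself; this small sketch (my own illustration, not the game's code) computes a worker count the way the change log describes, leaving one hardware thread free for the main thread and the OS:

```java
public class CoreCount {
    public static void main(String[] args) {
        // availableProcessors() reports hardware threads visible to the JVM;
        // reserving one of them keeps the main/render thread responsive.
        int cores = Runtime.getRuntime().availableProcessors();
        int workers = Math.max(1, cores - 1); // never drop below 1 worker
        System.out.println(cores + " hardware threads -> " + workers + " workers");
    }
}
```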
It would require an entire rewrite of the code. All mods/plugins would stop working because the core infrastructure would have changed. They're kind of stuck with it, to my knowledge. However, nothing stops you from using "Minestom", which lets you code a Minecraft server from scratch, and thus make it multithreaded. Also, I believe Mojang/Microsoft doesn't care about the Minecraft Java community as much as it does about Bedrock. I'm not sure if Bedrock is multithreaded either.
Correct me if I'm wrong but it is my basic understanding.
But the game not being optimized for multithreading probably has something to do with why you can get massive slowdowns from garbage collection; the more data the processor has to work with, the harder that job becomes. When playing around with Java Edition I just leave it at the default 2 GB allocation, because allocating more doesn't seem to help.
Because I don't like the ore generation in the Caves and Cliffs update, I may have to go back to Java Edition for my survival playthroughs and no longer play on Bedrock Edition. This is because Bedrock doesn't allow you to lock to an older version, for example the 1.16 Nether update.
This also means I'll lose the benefits of Bedrock Edition, and to get Java running smoothly I'd have to improvise and use a 16-chunk render distance instead of 32. It's possible Microsoft may kill off Java Edition someday, but it's more likely they'll just make Bedrock even worse with their updates, so I'm taking a calculated risk here. My reasoning is that if their intention was to end Java Edition they wouldn't have bothered with the account migration, so it's clear that, for the time being, they intend to keep Java Edition going.
Not like this hasn't happened several times before - wonder why so many mods never updated past 1.7.10? Mojang completely rewrote the rendering engine, supposedly to make it better (the only real benefit is that it's easier to add new block models; previously, as I do myself, you had to hardcode them, though since most blocks used the same "renderCuboidBlock" method with different dimensions it wasn't as complex as you might think - e.g. fences render 1-5 cuboid shapes by calling the method once for each piece). A similar thing happened after 1.12.2 when world generation was rewritten, then the rendering engine was rewritten again in 1.15, then 1.17, and so on - most of the code is completely different from what it was prior to 1.8, or even 1.7 (1.7 started the transition away from numerical IDs; instead of code like "if (id == Block.stone.blockID)" you now had "if (block == Block.stone)"). For comparison, my own code looks like "if (id == BlockStates.stone)", where "BlockStates.stone" is a static final int constant set to 1, the numerical ID for stone, and is inlined at compile time to "if (id == 1)", eliminating the need to reference an object; likewise, "BlockStates.stone_granite" has the value of 257, or 1 + 1 * 256 (note that 1.8+ uses "block state" objects, but they are only similar in name).
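The ID-packing scheme described above can be sketched in a few lines (this mirrors the description in the post, not TMC's actual source; the `packed` helper is something I made up for illustration):

```java
public class BlockStates {
    // Block ID plus metadata packed into one int: state = id + meta * 256.
    public static final int stone = 1;                    // vanilla stone ID
    public static final int stone_granite = 1 + 1 * 256;  // = 257

    // Hypothetical helper combining an ID and metadata value into one int.
    static int packed(int id, int meta) {
        return id + meta * 256;
    }

    public static void main(String[] args) {
        int state = packed(1, 1);
        // "if (state == BlockStates.stone_granite)" compiles to
        // "if (state == 257)": a static final int is a compile-time constant,
        // so the comparison needs no object dereference at all.
        if (state == stone_granite) {
            System.out.println("granite: " + state); // prints "granite: 257"
        }
    }
}
```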
Also, the Wiki is far behind if it claims that world generation only became multithreaded in 1.18; it has been since 1.13, as evidenced by thread dumps from crash reports. Oddly, the page for 1.13 makes no mention of it, and otherwise the only mentions I can find are on various forums, such as this page ("Worldgen is now its own separate thread(s).") and my own replies here (this isn't the first time I've seen the Wiki make claims about a "new" feature that is actually older).
Yet for some reason the game still runs like crap without Optifine, as shown in AntVenom's own tests.
It's evident to all of us that Mojang's coding practices are bad, and for a game that supposedly uses multithreading it sure doesn't make good use of those extra CPU cores. Without Optifine you're often forced to use low render distances to compensate for Java Edition's terrible rendering performance and an inefficient chunk loading system, which is why constant lag spikes are a problem for many people. You agreed that 30 FPS is unplayable, but that's exactly what happens on my machine at a 32-chunk render distance, despite its exceeding Mojang's officially recommended requirements. In Bedrock Edition neither I nor my friends have this problem.
Optifine isn't even that good compared to real optimization mods like Sodium, which claims performance increases of up to 10-fold; the example screenshot shows FPS increasing from 35 to 478 at a render distance of 32 chunks on a system close to a decade old, with a CPU that had relatively poor performance even in its day (this was back when AMD CPUs were inferior to Intel's, particularly the FX series used here):
They even included some of the fixes that I've added to my own mods:
Oddly enough, Bedrock appears to have had smooth lighting on water for a long time, yet they haven't added it to Java yet; as with many other things I have no idea why they haven't at least fixed the bugs (the underwater one was fixed in 1.17), despite bug reports spanning back nearly a decade (this one is nearly 8 years old, and as mentioned here every version with smooth lighting is broken) - with code examples of how to fix it (even if they have changed the rendering code since 1.7 surely it is similar enough that the same incorrect offsets are being used).
Another mod claims to achieve significantly greater performance than Bedrock, allowing for render distances of up to 256 chunks (I assume; they didn't give units, but surely it isn't blocks). Both of these make any "optimizations" I've made look like nothing at all - I don't even support a render distance higher than 16 chunks, and steady-state FPS isn't much higher than in vanilla, with improvements to FPS and lag spikes during chunk updates being the main difference, due to improved update scheduling. Likewise, back when I used Optifine it had little impact on steady-state FPS on several different computers (so it couldn't have been just a particular setup; I believe Optifine's claim of "2x FPS" comes from back when it added OpenGL occlusion culling, which did double my FPS, but that has been included in vanilla for a long time and Optifine only makes minor improvements to how it works). By contrast, these mods significantly increase FPS and use modern OpenGL, not fixed-function features from the late 1990s, which have to be emulated on modern systems (yes, 1.6.4 only needs OpenGL 1.2, released in 1998 and deprecated since 2008; I have not changed this):
Note that an Intel HD 2000 can't even run the latest version (1.17 requires OpenGL 3.2, which is only supported by HD 4000 and later), and the GTX 780 is over 8 years old.
I think at some point I'm going to switch to TMC's mod and only play on that. 1.18 might well be the version to make me do that with the ore rarity.
It's pretty clear Mojang just follows the buy-new path on required specs for the game. "What, you can't afford a new computer? All true players do that" (snobbishness intensifies)
This is exactly why, and also why they didn't release Caves and Cliffs as a single update, which would have been fully released around this time. But even after splitting the update, people still aren't happy (I still see complaints everywhere that "1.18 adds nothing"). Although Mojang's practices here are definitely terrible (the game has run worse with every update), there is definitely something to be said about its fanbase. Mojang has conditioned its players to expect huge content updates every 6-9 months, and by doing so made it immediately disappointing if they were to put out an "optimization update" instead (like 1.15). It's frustrating because they're ultimately going to do what they think the fanbase wants (though maybe not perfectly), because they're a business after all; yet this negatively impacts the game, which just gets more bloated with each update.
It would be great if they spent an entire update focused on optimization and refactoring, not on "new stuff." But they're not going to do that because in their minds, it will be disappointing to players who consistently expect bigger and better every single time. This stems from the culture of instant gratification that has consumed most facets of modern society, such that we (speaking generally) expect the latest and greatest in the shortest amount of time, and the idea of patience and waiting causes us to completely lose interest (my laptop which I've only had for four years is considered old now, the same way old iPhone versions are considered obsolete immediately when a new one comes out, which is just absurd to me).
I think Mojang fears that should they do an optimization update, this is exactly what would happen - people would suddenly get bored, because that's what happens every time not even a month after a new update drops. Not even a day will go by after November 30th, and people will start talking about 1.19, which will put immense pressure on Mojang to churn out snapshots before people get bored. It's really unfortunate, but also Mojang conditioned players to be this way especially with the new tradition of announcing future updates at Minecon Live before the current one is even released, with no discussions on optimization at all. What I want to see is the fanbase getting on the same page about what's necessary in terms of optimization and actually improving what is already here, but that's never going to happen (I would love to be wrong though).
LP series? Not my style! Video series? Closer, but not quite. Survival journal, maybe? That's better. Now in Season 4 of the Legends of Quintropolis Journal (<< click to view)!! World download and more can be found there.
And so they can get the most use out of the computers they have now before they upgrade. I don't like it when updates are rushed just for the sake of pleasing fans; I'd much rather optimizations were done before an update was released, so people didn't end up with as many lag spikes. Lag can spoil a gamer's fun just as much as a game that is repetitive or boring. Because of all the updates they're rushing out the door with new content, they're not patching existing problems with the game, like the broken world generation in Bedrock Edition: if you go millions of blocks out from the center of the Overworld, the Far Lands haven't been patched to behave like regular Overworld terrain, so players fall right through them and die in the Void. If this were patched and an invisible barrier placed at 30 million X and Z, Bedrock worlds would function more like Java ones.