Why? Overwatch, a modern game, runs fast on low-spec PCs. Obviously not at 60 FPS, but 30 or 25 are acceptable. MC for Windows 10 runs better only because it's made in a proper graphics engine.
25-30 FPS is in no way acceptable to me, which is what killed the game (newer versions anyway) for me when 1.8 came out. Even in 1.7 (as well as all newer versions, until I got a new computer last year) I had some weird issue where it hitched every 10th frame - always every 10th frame regardless of any settings, even the FPS limit, and even when running external programs to cause lag - causing the effective FPS to be that much lower (here is an old screenshot with Optifine's lagometer enabled; it may say 58 FPS but it looked like 1/10th of that).
Also, most games are in no way comparable to Minecraft - does Overwatch do anything underneath all those fancy textures? Does it have to render millions of blocks (or run culling code so as to not crash your computer by trying to render every single face)? No? For some idea of just what Minecraft has to do in order to render just a 10 chunk render distance (the minimum required due to a bug with mob (de)spawning), there are 441 (21x21) chunks loaded, each with as many as 65536 blocks (16128 for a world height of 63, e.g. open ocean) and 393216 (96768) faces. If the game tried to render every face of every block the GPU would have to render as many as 173 (42.7) million faces - even with 16x16 textures that's 44.4 billion texels (a texel being a pixel of a texture; each 16x16 face has 256 of them). A GeForce GTX 1080 (a very high-end GPU) can process 257 billion texels per second, so that's a mere 5.8 FPS.
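If you want to check the arithmetic above yourself, here is a quick sketch of it as code (a back-of-the-envelope estimate, not a measured benchmark; the figures in parentheses in the post are for a surface height of 63, while this uses the full 256-block height):

```java
// Back-of-the-envelope render cost for a 10 chunk render distance,
// assuming every face of every block were rendered with no culling.
public class RenderCost {
    public static void main(String[] args) {
        int renderDistance = 10;
        int chunks = (2 * renderDistance + 1) * (2 * renderDistance + 1); // 21x21 = 441
        long blocksPerChunk = 16L * 16 * 256;                 // 65,536
        long facesPerChunk = blocksPerChunk * 6;              // 393,216
        long totalFaces = (long) chunks * facesPerChunk;      // ~173 million
        long texels = totalFaces * 16 * 16;                   // ~44.4 billion (16x16 textures)
        double gtx1080TexelsPerSec = 257e9;                   // quoted texel fill rate
        double fps = gtx1080TexelsPerSec / texels;            // ~5.8 FPS
        System.out.printf("chunks=%d faces=%d texels=%d fps=%.1f%n",
                chunks, totalFaces, texels, fps);
    }
}
```

Running it reproduces the numbers in the post: 441 chunks, ~173 million faces, ~44.4 billion texels, ~5.8 FPS.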
This can be reduced to about 1292 (704) faces per chunk by removing all faces which are occluded by another block, leaving only 256 each on the top and bottom of each chunk, plus the sides of the outermost chunks, for a reduction of up to 300-fold - but that requires checking every single block to see if it is occluded. Even a chunk with terrain, trees, caves, structures, etc. is going to have most of its block faces occluded, and the game performs an additional culling step to cull 16x16x16 chunk sections which are not visible - but that requires even more checks, and all of that extra checking is dumped onto the CPU, partly explaining why the game is so reliant on a good CPU.
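The basic idea of that hidden-face culling can be sketched in a few lines (this is a toy illustration, NOT Mojang's actual mesher; it treats everything outside the chunk as air, whereas the real game also checks neighbouring chunks, which is where the up-to-300-fold figure comes from - here a flat 64-deep "ocean" chunk still shows a ~21x reduction):

```java
// Toy hidden-face culling: a block face is only emitted if the
// neighbouring block is not opaque (air, or outside this chunk).
public class FaceCull {
    static final int SIZE = 16, HEIGHT = 256;
    static boolean[][][] blocks = new boolean[SIZE][HEIGHT][SIZE]; // true = opaque

    static boolean opaqueAt(int x, int y, int z) {
        if (x < 0 || x >= SIZE || y < 0 || y >= HEIGHT || z < 0 || z >= SIZE)
            return false; // out-of-chunk treated as air in this sketch
        return blocks[x][y][z];
    }

    // Count faces that survive culling: one face per opaque block per
    // direction whose neighbour is not opaque.
    static long visibleFaces() {
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        long faces = 0;
        for (int x = 0; x < SIZE; x++)
            for (int y = 0; y < HEIGHT; y++)
                for (int z = 0; z < SIZE; z++) {
                    if (!blocks[x][y][z]) continue;
                    for (int[] d : dirs)
                        if (!opaqueAt(x + d[0], y + d[1], z + d[2]))
                            faces++;
                }
        return faces;
    }

    // Fill the bottom 64 layers solid, like a flat open-ocean chunk.
    static void fillOcean() {
        for (int x = 0; x < SIZE; x++)
            for (int y = 0; y < 64; y++)
                for (int z = 0; z < SIZE; z++)
                    blocks[x][y][z] = true;
    }

    public static void main(String[] args) {
        fillOcean();
        long naive = 16L * 64 * 16 * 6; // every face of every block: 98,304
        System.out.println(naive + " -> " + visibleFaces()); // 98304 -> 4608
    }
}
```

Note that even this toy version visits every block in the chunk and checks six neighbours - that per-block checking is exactly the CPU cost described above.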
Then add in all the block updates, entity movement, and so on which occur constantly. Bedrock actually only updates chunks within 4 chunks of the player (81 chunks), while Java updates all chunks within render distance (i.e. 441 chunks for a 10 chunk render distance*; entities are only active if a 5x5 chunk area around them is loaded, so they are processed within 289 chunks):
But Java actually has things going on in all of those chunks; Bedrock just SHOWS you the chunks - if you are more than 4 chunks away everything deactivates.
*In 1.3.1-1.6.4 the internal server used a fixed view distance of 10 but a chunk update radius of only 7, so only about half the loaded chunks (225/441) were being updated, which may explain why some people saw worse performance after 1.8(?) made the chunk update radius equal to the view (and, since 1.7.4, render) distance. Also, I've noticed that in newer versions the area of chunks the player generates is not a perfect square but often has random single chunks along the sides, suggesting that blocks which update next to unloaded chunks force them to load - the chunk update radius should be at least 1 less than the view distance to avoid this (similar to how the entity update radius is 2 less).
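All of the chunk counts quoted above are just "square of side 2r+1" for the various radii, which is easy to verify:

```java
// Chunk counts for the various update/render radii mentioned above.
public class ChunkCounts {
    static int chunksInRadius(int r) {
        return (2 * r + 1) * (2 * r + 1); // a (2r+1) x (2r+1) square around the player
    }

    public static void main(String[] args) {
        System.out.println(chunksInRadius(10)); // 441: Java render/update distance 10
        System.out.println(chunksInRadius(8));  // 289: entity processing (10 - 2)
        System.out.println(chunksInRadius(7));  // 225: 1.3.1-1.6.4 chunk update radius
        System.out.println(chunksInRadius(4));  // 81:  Bedrock's 4-chunk update radius
    }
}
```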
Also, it is a bit misleading to say that Java Minecraft does not have its own game engine - all games run on an engine of some sort, whether it is based on a customized version of a "standard" engine or a homemade one (as in the case of Java; Bedrock probably uses its own as well). However, Java does depend on third-party libraries (LWJGL) to provide an interface to the GPU/OpenGL, and this can be where some of the issues lie; among other things, LWJGL 2 is ancient and has long been superseded by LWJGL 3. Unfortunately, they are so different (not even backwards compatible) that a huge rewrite is needed in order to convert over instead of simply updating the library.
I mean, I guess?
The game can basically run on any PC, so I don't see a real need to optimize any further.
Not according to Mojang:
CPU: Intel Core i5-4690 3.5GHz / AMD A10-7800 APU 3.5 GHz or equivalent
GPU: GeForce 700 Series or AMD Radeon Rx 200 Series (excluding integrated chipsets) with OpenGL 4.5
HDD: 4GB (SSD is recommended)
OS (recommended 64-bit):
- Windows: Windows 10
- macOS: macOS 10.12 Sierra
- Linux: Any modern distributions from 2014 onwards
This was recently updated, so it may apply more to 1.13 (similarly, the last increase occurred before 1.8 was released - which was followed by lots of complaints of lag). I imagine that the majority of players (including myself) do not have 3.5 GHz CPUs; the CPU is the most important factor in performance (of course, GHz does not mean everything, but modern CPUs have mainly improved by adding more cores, which Minecraft still does not use effectively). After only a year my computer is already becoming obsolete for playing Minecraft. It would be an understatement to say that my old one did not do so well on 1.8+, mainly because it had hardware from 2006; even 1.7 had some weird issue, and this is one reason why I still play in 1.6.4 (due to the long time I was stuck with it, plus newer versions do not really appeal to me), which ran without any issues (aside from one caused by badly optimized pathfinding code, but that was easily fixable with a mod). Also, while Mojang says that only OpenGL 1.3 is needed (my old computer was 2.1), the game does use features from newer versions if they are available; for example, near the bottom of this crash report it says "Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.".
Much of this is due to terrible coding; many blame Notch, but newer developers are just as much, if not more, to blame; for example:
Note that while everybody also blames Java for being a terrible programming language, that is actually not true:
People say this a lot, but it's not really accurate in any significant sense. See here for an in-depth look but the tl;dr: 20 years ago java was slower, today they're effectively equal.
The C++ version is faster because it was written from the ground up with speed in mind by a team of professional programmers answerable to Microsoft, while the java version was cobbled together by one guy in a garage (figuratively), and maintained by a very small team much more interested in feature creep than performance passes.
Actually, you can blame Bedrock for some of the issues with Java: they apparently implemented BlockPos in BE first, then transferred it over to JE without realizing the consequences, since Java does not optimize it away like C++ does (this does not make Java worse; you just have to do the optimization yourself by not using it in the first place - I've never understood why they think that "pos.x, pos.y, pos.z" is better than simply "x, y, z" anyway, which is what C++ optimizes it to). Worse, if you want to change the values in a BlockPos you need to create an entirely new object, not just add/multiply/etc.:
We fortunately don't use Java and so we can create as many BlockPos as we want. Last time I checked we create around 400k every time a chunk is tessellated, but they're inlined and ultimately deconstructed into 3 ints and it doesn't even make a dent in the profiling
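To illustrate the difference being described (using a hypothetical stand-in BlockPos, not Mojang's actual class): in Java, an immutable position object forces a fresh heap allocation for every derived position, while plain ints never touch the heap - which is exactly the cost C++ compiles away and Java (mostly) does not:

```java
// Hypothetical immutable BlockPos vs. plain ints.
public class BlockPosDemo {
    static final class BlockPos {
        final int x, y, z;
        BlockPos(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        // "Changing" a coordinate means constructing a whole new object:
        BlockPos up() { return new BlockPos(x, y + 1, z); }
    }

    public static void main(String[] args) {
        // Object style: one heap allocation per step, all garbage for the GC.
        BlockPos pos = new BlockPos(0, 64, 0);
        for (int i = 0; i < 1000; i++) pos = pos.up(); // 1000 allocations

        // Primitive style: the same logic with zero allocations.
        int x = 0, y = 64, z = 0;
        for (int i = 0; i < 1000; i++) y++;

        System.out.println(pos.y + " " + y); // prints "1064 1064"
    }
}
```

The HotSpot JIT's escape analysis can sometimes eliminate such allocations, but unlike C++ it offers no guarantee - hence the dev's point that in C++ the 400k BlockPos objects per chunk reliably collapse into three ints.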