I recently ran some benchmarks with a pregenerated world (originally created in 1.6), performing the same actions in it on both versions. Since I make 1.7.10 mods and play many 1.7.10 packs, I'm quite interested in good performance here. While 1.7.10 is known to be the worst update for multicore computers, since it is pretty heavy but not multithreaded, I was still surprised by the results, because I can't see much of a difference that would explain the behavior.
The first thing I noticed is that in 1.6.4 I somehow get more stable fps without OptiFine (why?), but lower hardware utilisation with OptiFine.
The test was performed with the latest OptiFine and Forge for 1.6.4 versus FastCraft 1.25 plus the latest OptiFine and Forge for 1.7.10. The settings were as equal as possible, and both used the same world, generated in 1.6.4 and converted for 1.7.10.
My result is that I generally get slightly more fps in 1.7.10 when not moving at all. Even though it was the same world, this could be related to mobs despawning or moving outside of the frustum, but I don't know; it seems unlikely. A point for 1.7.10 here, although 1.6 still feels more stable in fps.
While loading chunks, though, the difference is pretty clear: 1.6.4 loads chunks very fast (about 5 times faster) without any missing chunks, and it also shows about 10 times more chunk updates than 1.7.10. In 1.6.4 I notice barely any fps loss from chunk loading, while in 1.7.10 the fps is 3 times lower despite far fewer chunk updates and chunk loads, and it also causes stutter.
I'm kind of surprised about this, since FastCraft is supposed to fix the well-known chunk lag, OptiFine has more performance features, and there was much more development time, given that 1.7.10 remained the major modding version for quite a while.
I extended this test to the latest Forge and OptiFine for 1.12.2 as well. The performance is (surprisingly, even with multithreading) worse than 1.7.10 in chunk loading and idle fps. But, most likely due to the OpenGL bug or better interpolation, everything feels much smoother at 30 fps than 30 fps does in previous versions (this is the first time I've encountered this, though).
Also worth noting, and it has always been like this: the CPU usage of 1.12 is much lower during idle than in 1.7 and older. For example: 8 chunks render distance at 30 fps uses 100% CPU, but the same settings in 1.12.2 use only about 30% CPU, while in legacy versions, even if you raise the fps cap, it still reports 100% CPU... That's confusing.
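One possible explanation (an assumption on my part, not something I verified in the decompiled code) is how the frame limiter waits between frames: a limiter that spins in a busy loop burns a full core even though it renders no more frames, while one that actually sleeps hands the CPU back to the OS. A minimal sketch of the difference, with made-up names:

```java
// Hypothetical sketch: two ways a client could cap the frame rate.
public class FrameLimiterSketch {
    static final long FRAME_NANOS = 1_000_000_000L / 30; // 30 fps cap

    // Busy-wait limiter: keeps one core at ~100% even while "idle".
    static void busyWaitUntil(long nextFrame) {
        while (System.nanoTime() < nextFrame) {
            // spin: the thread never yields, so the OS reports full CPU usage
        }
    }

    // Sleeping limiter: yields the CPU to the OS, so reported usage drops.
    static void sleepUntil(long nextFrame) throws InterruptedException {
        long remaining = nextFrame - System.nanoTime();
        if (remaining > 0) {
            Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long next = System.nanoTime();
        for (int frame = 0; frame < 90; frame++) { // ~3 seconds at 30 fps
            // renderFrame() would go here
            next += FRAME_NANOS;
            sleepUntil(next); // swap for busyWaitUntil(next) to see ~100% CPU
        }
    }
}
```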
Also, versions after 1.6 up to now have the bug where translucent blocks layered behind each other render in the wrong order at some angles, causing the famous ice render glitch. This bug is not present in 1.6; there, ice just renders as if there were air and no water below it.
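For context on why the order matters (this is the general alpha-blending rule, not a claim about what the vanilla renderer actually does internally): translucent surfaces only blend correctly when drawn back to front relative to the camera, so any renderer that skips or breaks that sort shows exactly this kind of angle-dependent glitch. A tiny illustrative sketch with made-up types:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: sorting translucent geometry back to front before drawing.
public class TranslucentSortSketch {
    record Quad(double x, double y, double z) {
        double distanceSq(double cx, double cy, double cz) {
            double dx = x - cx, dy = y - cy, dz = z - cz;
            return dx * dx + dy * dy + dz * dz;
        }
    }

    static List<Quad> sortBackToFront(List<Quad> quads, double cx, double cy, double cz) {
        List<Quad> sorted = new ArrayList<>(quads);
        // Farthest quads first, so closer translucent surfaces blend over them.
        sorted.sort(Comparator.comparingDouble((Quad q) -> q.distanceSq(cx, cy, cz)).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Quad> quads = List.of(new Quad(0, 64, 0), new Quad(0, 64, 5), new Quad(0, 64, 2));
        // Camera in front of the quads: the one at z=5 must be drawn first.
        System.out.println(sortBackToFront(quads, 0, 64, -3));
    }
}
```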
Apart from that and the different birch foliage/water colours, I see no difference between 1.6.4 and 1.7.10, and none of this would explain a performance difference, I believe.
Code Investigation:
I noticed that since 1.7.10 the code passes around Block objects instead of byte or int IDs. I don't know why, but in most cases it looks like a waste to look up the block registry every time a block is handled, for example during chunk rendering. Another thing I noticed is that 1.6.4 uses Minecraft's obfuscated names at runtime while 1.7.10 uses SRG names, but this shouldn't have a big effect.
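To make the concern concrete (a simplified sketch with hypothetical names, not the actual Minecraft or Forge code): the chunk still stores numeric IDs either way, so the question is whether the hot render path indexes a flat array directly or goes through a registry mapping on every single block.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the two access patterns; names are made up.
public class BlockLookupSketch {
    static class Block { final String name; Block(String name) { this.name = name; } }

    // 1.6-style idea: a flat array indexed directly by the stored block ID.
    static final Block[] BLOCKS_BY_ID = new Block[4096];

    // 1.7-style idea (conceptually): a registry that maps IDs to Block objects.
    static final Map<Integer, Block> REGISTRY = new HashMap<>();

    static Block fastLookup(int id) {
        return BLOCKS_BY_ID[id];   // one array index per block
    }

    static Block registryLookup(int id) {
        return REGISTRY.get(id);   // map lookup per block
    }

    public static void main(String[] args) {
        Block stone = new Block("stone");
        BLOCKS_BY_ID[1] = stone;
        REGISTRY.put(1, stone);
        // A chunk section rebuild touches 16*16*16 = 4096 positions,
        // so any per-block overhead is multiplied by every rebuilt section.
        System.out.println(fastLookup(1).name + " / " + registryLookup(1).name);
    }
}
```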
It looks like in pre-1.7 versions there was a block ID config and IDs were also managed by the player; the automatic ID assignment is probably a good idea, though.
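For anyone unfamiliar with what that means (a hypothetical sketch of the idea, not FML's actual implementation): instead of the player picking a numeric ID for every modded block in a config file, the loader can hand out the next free ID itself and remember the name-to-ID mapping with the world, so existing saves keep loading consistently.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of automatic block ID assignment.
public class AutoIdRegistrySketch {
    private final Map<String, Integer> nameToId = new LinkedHashMap<>();
    private int nextFreeId = 1; // 0 reserved for air

    // Reuse a previously assigned ID if this block name is already known,
    // otherwise hand out the next free one.
    int register(String blockName) {
        return nameToId.computeIfAbsent(blockName, n -> nextFreeId++);
    }

    public static void main(String[] args) {
        AutoIdRegistrySketch registry = new AutoIdRegistrySketch();
        System.out.println(registry.register("examplemod:copper_ore")); // 1
        System.out.println(registry.register("examplemod:tin_ore"));    // 2
        System.out.println(registry.register("examplemod:copper_ore")); // still 1
    }
}
```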
If I'm right, OptiFine mostly addresses high GPU utilisation, and FastCraft targets the lighting updates during chunk generation and the client-side chunk rendering. I might be wrong (or missing something).
I can't point to a real reason why it's so much slower; combining the slow chunk loading, the slow chunk updates and the low fps, it adds up to something like 10 times slower chunk loading overall.
----------------------------------------------------------------------------
I would like to note that this post is completely unrelated to the performance issues and lag described in my other thread:
1.7.10 has always been an issue for me, but it was still kind of playable, just not as enjoyable as 1.6.4.
No complaints or anything, I still love 1.7.10; this is just technical discussion and investigation.
I also wanted to get into 1.6.4 modding, but I can't find a working Gradle setup for it.