As promised before, here are the results from my laptop. I also have another desktop, although I'm not sure I'll test that one.
Windows 7 (64-bit)
Java 7 update 51 (64-bit)
Core i3 4010U (1.7GHz)
Intel HD 4400
6GB RAM (1GB RAM allocated)
1366 x 768 resolution
I forgot to grab a picture of the options screen this time, but as before, everything is set to maximum. I probably should have opted for lower settings, since these are too demanding for this laptop, but as long as they're consistent across the tests it doesn't matter.
14w29b w/ VBO On - ~15-20 FPS
14w29b w/ VBO Off - ~7-9 FPS
1.7.10 - ~20-25 FPS
As on my desktop, 1.7.10 performs better for me, although unlike on my desktop, VBOs improved performance here, which seems backwards if a weaker CPU and stronger GPU is what should benefit (here I have a stronger CPU and a very low-end GPU). It wasn't by much, but frame rates were very low to begin with (because I used maximum settings and a 16 chunk render distance, which I wouldn't normally play with on this hardware), so percentage-wise it was a fair increase.
Last time, on my desktop, I never tried placing or destroying any blocks. I did this time, and also experienced the delay others have mentioned (with VBOs off, anyway; I didn't try with them on). I placed and then destroyed a block and was then able to walk through it, but it didn't disappear visually until I paused the game, probably five to ten seconds later.
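For anyone wondering what the VBO toggle actually changes under the hood: with it off, the renderer re-submits its vertex data to the driver every frame, while with it on, the data is uploaded once to a buffer in GPU memory and each frame just issues a draw call against it. Here's a minimal LWJGL 2 sketch of the idea (the same library version Minecraft uses; this is illustrative, not Minecraft's actual renderer code):

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

public class VboSketch {
    public static void main(String[] args) throws Exception {
        Display.setDisplayMode(new DisplayMode(640, 480));
        Display.create();

        // Upload the vertex data to GPU memory ONCE. Without a VBO, this
        // array would be handed to the driver again on every single frame.
        FloatBuffer verts = BufferUtils.createFloatBuffer(6);
        verts.put(new float[]{-0.5f, -0.5f, 0.5f, -0.5f, 0.0f, 0.5f}).flip();
        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts, GL_STATIC_DRAW);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, 0L); // read vertices from the bound VBO

        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, 3); // draw; no per-frame re-upload
            Display.update();
        }
        Display.destroy();
    }
}
```

Whether that helps depends on where the bottleneck is, which may be part of why results vary so much between systems in this thread.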
RAM (and RAM allocated specifically to Minecraft): 1G
A screenshot of your Minecraft Settings:
FPS when you have VBO turned off and turned on (found in Video Settings), as well as your FPS in 1.7.10 (get all of these in the same world in the same location, wait in one place for 2-3 minutes without moving for consistency)
VBO On | VBO Off | 1.7.10
A comparison of how fast chunks load when flying in one direction in Creative Mode between this snapshot and 1.7.10 (again, use the same world): Chunks load notably faster in the snapshot... if they even do (read below).
Any issues or bugs you see with the new settings: VBOs feel like an overcompensation, a "hey look, we did something good" gesture for an update that has clearly broken the game's performance so far. VBO On is extremely comparable to what we already had in past versions (and 1.7.10), being in fact a tiny bit worse; the FPS was constantly and notably higher in 1.7.10. Additionally, in the snapshot, chunks and other block changes sometimes get permanently stuck: in the same world, even if I restart the whole game, blocks will at times remain in their old state. Otherwise, while the game runs fine, breaking a block will leave it stuck, and placing blocks will make them invisible for an indeterminate period of time. I think this update is in a very bad place right now.
Windows 7 (64-bit)
Java Version 7 Update 65 (64-bit)
AMD AthlonII X4 640 @ 3GHz
AMD Radeon HD 5670 1GB
4GB RAM with 2GB allocated to Minecraft
Settings
VBO off
VBO on
Bonus x-ray vision which I get whether VBOs are on or off:
Clearly, having VBOs on helps a lot, but neither of these screenshots shows off the horrible block lag: mining or placing anything takes a few seconds before it updates. I get absolutely none of these problems in anything prior to the snapshots, which is truly frustrating. I'm hoping they'll correct what's going on, but since I've been getting this odd lag for the past few snapshots now, I'm not sure they will.
For comparison, here's a shot from 1.7.10. No x-ray vision, no block lag or any significant lag in general. All max settings.
It's really disappointing to see the snapshots have this kind of performance. I was looking forward to 1.8 but it looks like I won't be playing it at all if this isn't fixed. :/
Operating System: Windows 7 (amd64) version 6.1
Java Version: 1.7.0_45, Oracle Corporation
Memory: 393799448 bytes (375 MB) / 1024983040 bytes (977 MB) up to 1024983040 bytes (977 MB)
LWJGL: 2.9.1
OpenGL: AMD Radeon HD 6700 Series GL version 4.2.11931 Compatibility Profile Context, ATI Technologies Inc.
Settings:
VBO off:
VBO on:
1.7.10:
Even though the FPS took a 50% hit, chunk loading is through the roof: it takes all of five seconds for every chunk to load in 14w29b. In 1.7, even getting a 12 chunk render distance to fully load is something, much less 16. I usually play on a 12 chunk render distance, so until now VBOs have had no effect for me; now I see that the option actually drains performance on my machine (V-Sync does as well, costing over 50% on its own).
Overall, chunk loading has improved maybe...1000%? It's much better than even Beta 1.7.3. I suspect that allocating more RAM to Minecraft may bring the FPS back up to previous levels, so overall very much improved.
edit: No, allocating more RAM does no such thing. I'll stick with 1G for 12-chunk render distance.
Yeah, I'm not sure why everybody thinks the game needs gigs and gigs of RAM... the only cases where insufficient RAM causes lag are when the game is actually running out of memory (severe lag followed by an out-of-memory crash; the game rarely runs for a significant amount of time in a near-zero-free-memory, constantly-garbage-collecting state) or when your computer itself doesn't have enough RAM, causing paging to disk. Incidentally, allocating more RAM does appear to increase the used memory, but most of the increase is just due to the JVM letting more garbage accumulate before collecting it; as noted in this Optifine post:
Minecraft has a memory usage pattern typical of a Java program. The used memory slowly grows until a limit is reached, and then the garbage collector is invoked, which frees all the memory that is not used. Then the cycle repeats.
The lowest number to which the used memory falls is the real memory that the program needs; the rest is a buffer for the garbage collector so that it does not have to be invoked very often. Even this lower number is not the whole truth, because the garbage collector does not try very hard when there is enough memory and goes only for the easy targets in order not to use too much CPU time. It is important to notice that the garbage collector is usually not invoked before the limit is reached, even if there is a huge amount of memory waiting to be freed...
...Quite often there is less than 1.5 GB of physical memory free. In this case Java will happily allocate memory above the available physical memory, and the operating system will have to swap parts of the used memory to disk. This process is slow, as the disk is much slower (1000x) than physical memory.
In reality Minecraft needs no more than 256 MB to run, mostly using about 100-150 MB. This is for vanilla Minecraft running in 32-bit Java with no mods installed and using the default texture pack. Any memory above this will be used as a garbage collector buffer and may cause memory to be swapped to disk and generate a lot of lag.
The last point still appears to apply even to the latest snapshots, as seen in the screenshots I took; all the new features added over the years haven't significantly increased memory usage (which makes sense, as most memory is used to hold chunk data; adding new blocks, entities, etc. only increases the code component, i.e. the size of the jar file on disk, as long as mob caps, chunk sizes, etc. remain the same).
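To put a rough number on the chunk data itself, here's a back-of-envelope sketch; the per-section sizes are the Anvil-format figures (one byte per block ID plus three 4-bit-per-block arrays), and the 441-chunk count matches the MultiplayerChunkCache value I mention below:

```java
public class ChunkMemoryEstimate {
    public static void main(String[] args) {
        // Per 16x16x16 chunk section: a 4096-byte block ID array plus three
        // 2048-byte nibble arrays (metadata, block light, sky light).
        int bytesPerSection = 4096 + 3 * 2048;   // 10,240 bytes
        int sectionsPerChunk = 8;                // assume ~half of the 16 sections are populated
        int chunks = 21 * 21;                    // 441 chunks loaded
        long total = (long) bytesPerSection * sectionsPerChunk * chunks;
        System.out.println(total / (1024 * 1024) + " MB of raw chunk data"); // prints 34
    }
}
```

About 34 MB, which is the right order of magnitude for the ~100-150 MB working set quoted above once entities, textures, and code are added on top.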
Ironically, with the default 1 GB allocated I've gotten out-of-memory crashes when running on Far render distance (at least in 1.6 and earlier), which appears to be due to the JVM running out of process address space (limited to 2 GB on 32-bit systems, hence the warning you get). Notably, the debug screen shows plenty of free memory right before this kind of crash, which produces the "Minecraft has run out of memory" screen; insufficient heap space, on the other hand, causes a hard crash with the error "java.lang.OutOfMemoryError: Java heap space" in the crash report. However, there are no problems when allocating 512 MB, either with in-game memory (heap space) or process memory (which is much larger, as seen in Task Manager), unless I want to play around with TNT. (Note that when testing/comparing the snapshots I allocated 1 GB (-Xmx1024M) to see how much memory the game actually uses; I normally set -Xmx and -Xms both to 512 MB, which results in 100% of the memory always being allocated.)
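If you want to watch this behavior outside of the F3 screen, here's a minimal sketch (plain Java, nothing Minecraft-specific) that prints the same used/allocated/max numbers the debug overlay shows; run it with -Xms equal to -Xmx and "allocated" will sit at the maximum from the start:

```java
public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) { // Ctrl+C to stop
            long used  = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long total = rt.totalMemory() / (1024 * 1024); // grows up to -Xmx (or starts there with -Xms)
            long max   = rt.maxMemory() / (1024 * 1024);   // the -Xmx ceiling
            // "used" climbs until a garbage collection runs, then falls back toward
            // the program's real working set -- the sawtooth described in the quote.
            System.out.printf("%d MB used / %d MB allocated, up to %d MB%n", used, total, max);
            Thread.sleep(1000);
        }
    }
}
```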
Yeah, I experimented some, and I found that the ~667 MB zone seems to be optimal. 256 MB, and sometimes even 512 MB, is not enough; the game regularly uses more than 500 MB. Below roughly 2/3 of a GB the game starts to run out of memory, especially on high settings, which also hurts FPS. Conversely, allocating more than 1 GB does absolutely nothing, as the game only uses about a GB at any given time.
I allocated "gigs and gigs" because I had crashes until I did.
For the longest time, I ran with the default RAM allocation, and when I decided to try OptiFine (starting with either 1.5.x or 1.6.x), I allocated 2GB for the sake of it. It was very rare, but I had occasionally gotten out-of-memory crashes before that using the then-"Far" render distance, and now that I was using a longer render distance (24 chunks at the time), I didn't want to take the chance.
When I moved to 1.7.4 and OptiFine (which gave me better performance, allowing me to use 28 chunks), I forgot to set the RAM allocation one time. I was greeted by occasional hitching soon after loading the world, and, thinking it was just random, proceeded to do what I had started the game to do: open the world to LAN for a few other players. Very shortly after they joined, it crashed. I tried again and it did the same, and that time I watched the RAM use: it was topped off. That was when I noticed only 1GB was allocated.
I bumped it to 2GB, which fixed it, but in a play session on a later date the occasional hitching returned, and RAM use was pretty high (within a few hundred MB of the 2GB cap). I didn't chance it: I closed the game, set it to 3GB, and it's been fine since, even though it usually uses between 1GB and 1.5GB. I only see this high RAM use with OptiFine (at least on this PC playing my primary world, but that's how I play almost 100% of the time), but since I am using it, since I have the extra RAM, and since allocating more isn't noticeably hurting anything and keeps the crashes away... well, it makes sense to allocate those "gigs and gigs" in my case.
This is very odd, because when using Optifine on 1.6.4 I don't have any issues: memory usage stays around 200-300 MB even after a few hours of playing, albeit only on Normal render distance with default (16x) textures. This is also true of modded versions (nothing major, though), except when I was playing with a mod I made that tripled the average ground depth (sea level at y=191 instead of y=63, which is far more terrain than Amplified generates), and even then it only used around 600 MB out of 1024 allocated (with 512 MB I got the "hitching" you described, which coincided with memory usage dropping, hence garbage collection; having said that, I do get occasional lag spikes which appear to be garbage collection, but only a few times per hour).
That said, I have noticed a bug in Optifine, at least in the 1.6.4 version I use: when I die, the MultiplayerChunkCache (chunks loaded) value doubles from 441 to 882, then triples on another death, and so on. It's not an issue for me, but with regular deaths it could cause memory issues (going through a portal seems to reset it).
It is also worth noting that the memory usage described by Optifine applies to 32-bit systems; 64-bit may use more from what I've heard (although not necessarily; an "int" data type is still 32 bits on a 64-bit system, a byte 8 bits, and so on).
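For what it's worth, primitive sizes are fixed by the Java language specification rather than the platform; the extra usage on 64-bit JVMs comes mainly from larger object references and headers (and recent JVMs typically compress references anyway on heaps this small). A quick check:

```java
public class PrimitiveSizes {
    public static void main(String[] args) {
        // These constants come from the Java spec, so they print the same
        // values on 32-bit and 64-bit JVMs alike.
        System.out.println("int:  " + Integer.SIZE + " bits"); // 32
        System.out.println("byte: " + Byte.SIZE + " bits");    // 8
        System.out.println("long: " + Long.SIZE + " bits");    // 64
    }
}
```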
Well, with 1.6.4 it was fine for me. It was 1.7.4 (I stayed away from 1.7.2) and all the later 1.7.x versions that brought the relatively high memory use, but again, that's with OptiFine and an even further render distance. Vanilla runs fine with the default RAM allocation, from what little I've tried of it.
Processor Model and Speed: Intel i5 4670K OC @ 4.0GHz
Graphics Card Model: Nvidia GTX 770 SC from EVGA
RAM (and RAM allocated specifically to Minecraft): 8GB, 4GB to Minecraft
A screenshot of your Minecraft Settings
FPS when you have VBO turned off and turned on (found in Video Settings), as well as your FPS in 1.7.10 (get all of these in the same world in the same location, wait in one place for 2-3 minutes without moving for consistency)
As you can see, this was with the latest snapshot, 14w30b, which Mojang claims has even further optimizations than the previous snapshot. I don't quite remember when the 32 chunk render distance was added, but I tested with it just to put maximum stress on my PC. VBOs didn't help at all; they actually lowered my FPS by a slim margin. Chunk loading is much smoother, however, and gives a better experience, but I don't see any performance gain from these "optimizations"... which seems to be a trend from what I can see.
Why in the world would anyone need a 32 chunk render distance (which hasn't been added, by the way, unless it's another Intel-only thing)?
Also in 14w30b...I can run 16 chunk render distance again!! Max settings, excepting VBOs and v-sync.
Going to chime in and say that all of my issues have been 100% fixed with the latest snapshot. I've never had this kind of performance with Minecraft so kudos to the crew for getting it working.
So there's the option of using further render distances now, and they're also trying out faster chunk loading? Well, there go the only two real reasons I use OptiFine (well, there's sort of a third in its fancy fog option; the rest, like the custom font, custom colors, and the better sky, I can achieve more or less with MCPatcher). OptiFine's render distances beyond 16 have been inconsistent for me ever since 1.7, so if this new snapshot performs well for me (unlike 14w29b), maybe I'll forgo OptiFine with 1.8.
It was added. It's an option if at least 2 gigs of RAM are allocated.
Yeah, I just found out after posting. Although why anyone would want to use it is a mystery to me.
Why not? Why use a 16 chunk render distance over an 8 chunk render distance? If you can afford it performance-wise, why is having the option such a bad thing?
I've been using increased render distances for a while now (I currently use 28 chunks with OptiFine), and anything under the default maximum of 16 chunks just seems more and more claustrophobic to me. It really opens up the world, so to speak, visually.
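It's worth remembering how steeply the cost climbs, too: the loaded area grows with the square of the render distance. A quick sketch, assuming the usual square of (2r + 1) x (2r + 1) chunks centered on the player:

```java
public class RenderDistanceCost {
    public static void main(String[] args) {
        // Chunk count grows quadratically with render distance r.
        for (int r : new int[]{8, 12, 16, 28, 32}) {
            int chunks = (2 * r + 1) * (2 * r + 1);
            System.out.println(r + " chunk render distance -> " + chunks + " chunks loaded");
        }
        // 8 -> 289, 16 -> 1089, 28 -> 3249, 32 -> 4225; stepping from 16 up
        // to 32 roughly quadruples the work.
    }
}
```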
I just tried 14w30c, and I have mixed feelings. They seemingly removed the Advanced OpenGL option. I don't mind this in and of itself, but (I'm guessing largely as a result of that) my performance is noticeably worse. I guess VBOs are the intended replacement, but they don't work nearly as well for me as the old Advanced OpenGL option did (not in general, anyway; looking at the sky or ground gives better frame rates, sure, but in actual play, frame rate and smoothness are down).
I know it's a bit of an apples-to-oranges comparison, since you need OptiFine with 1.7 and earlier for render distances above 16, but at the same render distance of 28, 1.7.10 performs nicely enough for me, whereas 14w30c seemingly spends much of its time with frame rates in the 30s or 40s, which drags the general performance and experience down.
I know that might sound like a silly case of first-world problems, complaining about such render distances and all, but for a performance-improving release, I'm left wondering not only where the improvement is, but why performance is arguably worse.
*sigh*
The good news is that chunks load really fast and, at least so far, seemingly more reliably (I'll have to test more, but with OptiFine I often got patches of missing chunks in seldom-visited areas, or in any area on initial load when opened to LAN).
Also, the issue in 1.7.2 through 1.7.10 with the horizon darkening (OptiFine does the opposite and brightens it at higher render distances) appears to be gone, at least at higher render distances.
4 GB RAM
Intel Core i7-3520M @ 2.90 GHz
Intel HD 4000
Soft fail at "New World Loaded" (all configurations). I can confirm VBOs to be completely incompatible with this machine.
Time for me to get a new rig I suppose.
Windows 8.1
Java Version 8 Update 11
CPU: Intel i5-3210M @ 2.5GHz
Video: Intel HD4000
6GB RAM with 1GB for Minecraft
Resolution 1680x1050 (playing on a 2nd monitor)
There wasn't much of a difference between VBO on/off with Intel's integrated video card, although the chunks seemed to load a little faster with it off. Compared to 1.7.10, chunk loading was dramatically faster: the 1.7.10 screenshot was taken 5 minutes after loading, while the snapshot one was taken after only a minute or two. While sprinting and flying I could almost catch up to the loading chunks in 1.7.10; in the snapshot, they loaded as fast as I moved.
One thing I did notice is that if I switched VBO on (and possibly off) while in game, torches and other chunks would start to flicker and I would eventually crash to the launcher with Java throwing some exception. Switching VBO on/off on the main menu works OK, but the effects weren't noticeable. The FPS varied by 0-5 across all three setups. The only noticeable difference was that in 1.7.10 I would get 30-40 FPS before the chunks were fully loaded, then drop to about 20 after a few minutes; when moving fast, the FPS would drop to single digits and occasionally freeze. With the snapshot, the FPS was more consistent (between 10 and 20) whether moving or sitting still.
Edit: the 2nd pic is of the snapshot test. It's the origin chunk. I've created several worlds, and it seems the origin chunk is never rendered until I break a block or some grass to force it to update. The FPS there was only a little different from looking ahead.
OS: Windows 8.1 x64
Java: Java 7 Update 67 (64-bit)
CPU: Intel i7-3630QM 2.4GHz
GPU: NVidia GTX 660M
RAM: 8GB, 6GB allocated to Minecraft
Resolution: 1920x1080
FPS:
1.7.10 - 117
14w29b, VBO off - 52
14w29b, VBO on - 66