When a block is changed in a chunk, that chunk should reset a timer to 2 minutes. When the timer fires, the chunk should be saved to the hard drive and its RAM de-allocated, but the chunk's model should remain and still be drawn. When the chunk is accessed and isn't loaded, it should be loaded from the hard drive back into RAM.
Wouldn't this cut Minecraft's RAM usage down considerably, with only a slight hit when you mine into an old chunk, while still allowing redstone and such to function perfectly fine?
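To make the idea concrete, here's a minimal sketch of that reset-on-change timer. This is not Minecraft's actual code; all names (`Chunk`, `UNLOAD_DELAY`, the in-memory `store` standing in for a region file on disk) are hypothetical, and the "disk" is just a dict so the sketch stays self-contained:

```python
import time

UNLOAD_DELAY = 120.0  # seconds -- the "2 minutes" from the proposal

class Chunk:
    """Sketch: block data can be evicted to disk while the render
    model stays resident, so the chunk is still drawn."""
    def __init__(self, pos, blocks):
        self.pos = pos
        self.blocks = blocks          # heavy voxel data, held in RAM
        self.mesh = object()          # stand-in for the GPU-side model
        self.last_change = time.monotonic()
        self.store = {}               # stand-in for the region file on disk

    def on_block_change(self, index, block):
        if self.blocks is None:       # mining into an old chunk: reload
            self.load()
        self.blocks[index] = block
        self.last_change = time.monotonic()   # reset the 2-minute timer

    def maybe_unload(self, now):
        """Called periodically; evicts block data once the timer fires."""
        if self.blocks is not None and now - self.last_change >= UNLOAD_DELAY:
            self.store[self.pos] = list(self.blocks)  # "save to disk"
            self.blocks = None                        # free the RAM copy
            # self.mesh is untouched: the chunk keeps being drawn

    def load(self):
        self.blocks = list(self.store[self.pos])      # "read from disk"
```

The key property is that eviction only frees `blocks`, never `mesh`, so the rendering path never notices; only a block change or query pays the reload cost.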
The 3D model would already have been sent to the GPU, hopefully, so an idle chunk would take essentially zero CPU time, but it could still be queried as a whole, in which case it would load its blocks back into memory from the hard drive. It's reverse caching, because Minecraft is too heavy on RAM (especially the server).
I do understand that HDD access is much slower than RAM (although on computers with a decent SSD, like mine, the difference really shrinks), but seeing as most chunks in a map are not being modified most of the time, it makes no sense to keep them all in RAM.
Additionally, when I was making my own Minecraft clone, I actually found it faster to render the very near chunks in real time: with the amount of light/water/place/destroy actions happening near the player, it was quicker to rebuild those chunks every frame than to keep rewriting and re-sending a cached model.
Finally, I've always wondered why non-functioning, lower-resolution chunks couldn't be displayed beyond the current render distance. Essentially, just past Normal distance would be chunks in which one block is the size of 2x2x2 normal blocks, so a chunk has 1/8 as many blocks. You wouldn't notice much, but it would let you see absolutely huge distances with, again, little lag.
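A quick sketch of that 2x2x2 downsampling, assuming a flat voxel array in x + n*(y + n*z) order (the layout and the "keep the most common block per group" rule are my assumptions, not Minecraft's):

```python
from collections import Counter

def downsample(blocks, n):
    """Collapse an n*n*n voxel array into (n/2)^3 cells, keeping the
    most common block in each 2x2x2 group -- 1/8 as many blocks."""
    h = n // 2
    out = [0] * (h * h * h)
    for z in range(h):
        for y in range(h):
            for x in range(h):
                # gather the eight source blocks for this coarse cell
                group = [
                    blocks[(2*x + dx) + n * ((2*y + dy) + n * (2*z + dz))]
                    for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)
                ]
                out[x + h * (y + h * z)] = Counter(group).most_common(1)[0][0]
    return out
```

Each extra level of this halves the resolution again, so far-away rings of chunks cost a small fraction of the memory and triangles of a full chunk.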
I think Minecraft's poor performance should be a Mojang priority right now, especially with the server+client merge. The latest snapshots are really quite slow. The game is too heavy on the CPU and needs to be balanced out more: send some work to the hard drive, use threads a little more intelligently, and in single-player, run the LAN server you're connected to on a separate thread while it accesses the same region of RAM.
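The single-player setup described above can be sketched as a server on its own thread in the same process, with the local client handing it "packets" over an in-memory queue instead of a socket. This is only an illustration of the threading shape; the class and field names are hypothetical:

```python
import threading
import queue

class IntegratedServer(threading.Thread):
    """Sketch: the server ticks on its own thread but shares the
    process's memory, so the local client never touches the network."""
    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()    # "packets" from the local client
        self.world = {}               # game state, owned by the server thread
        self.running = True

    def run(self):
        while self.running:
            try:
                pos, block = self.inbox.get(timeout=0.05)
            except queue.Empty:
                continue              # no packet this tick; keep looping
            self.world[pos] = block   # apply the change server-side
            self.inbox.task_done()
```

Keeping all mutation on the server thread and funnelling client input through the queue is what makes the shared-memory arrangement safe without locking every structure.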
I'm sure there are a lot of things that could be done, and I read that they will be working on performance for 1.4, but they might lose some people if they release a game that half the userbase can't render on anything further than Short.