Ores would likely generate in the same way as they do now. By default, a Tall Worlds world would appear similar to a Vanilla world; the main difference would be in how the engine handles the world internally. There would be options, however, to configure the ores and terrain to behave in different ways. For instance, you could have a much deeper world and have the ores stretched vertically to fit.
Well, yeah, the thing I'm wondering is how this will affect ores that you normally have to dig deep to get, like diamond. Should we expect diamonds at sea level? Or would we have to dig down one or two million blocks? Iron and coal are less common deep down, so if sea level is zero, does this mean you might have to dig up tall mountains for iron and coal, or will all the ores' spawn heights be shifted down 64 blocks? What happens if you dig farther down than -64? Will the spawn rates repeat? Will they change? Where is lava level going to be? This is important stuff!
I really want this mod; however, it's going to desperately need WorldEdit support, as well as something to convert existing maps into a cubic chunks format.
Well, yeah, the thing I'm wondering is how this will affect ores that you normally have to dig deep to get, like diamond. Should we expect diamonds at sea level? Or would we have to dig down one or two million blocks? Iron and coal are less common deep down, so if sea level is zero, does this mean you might have to dig up tall mountains for iron and coal, or will all the ores' spawn heights be shifted down 64 blocks? What happens if you dig farther down than -64? Will the spawn rates repeat? Will they change? Where is lava level going to be? This is important stuff!
This has all been discussed in the thread already, and I just explained the basics of exactly what you're asking here. If you want a more in-depth explanation, use the search bar at the top of the page to find said discussion in the thread. It will help you catch up.
I'm well aware of it being discussed, but not in much detail. I still have questions about it.
Because Cuchaz moved house, something which never ever helps development. Now there are licensing issues which prevent him from actually getting anywhere with the mod. Welcome to the fun world of setbacks and disruptions that make you want to stick your thumbs in your eyes.
Have you updated the mod, or is it still for 1.2.5? Given that you will be updating to 1.8, don't you think it would be better to update now and write the rest in 1.7.10 (to minimize what needs to change in 1.8), instead of writing it in 1.2.5 and then having to rewrite half the mod?
He never started writing it for 1.2.5; I believe he started writing it for either 1.6.2 or 1.7.x. (Edit: yeah, he started it for 1.7.2.)
Also, other people have been working on cubic chunks in the meantime while Cuchaz deals with deobfuscation. Check here, and here, and even here.
Yeah! Three other developers have been working on Tall Worlds Mod while I've been distracted by life and other annoying problems. =)
Btw, Tall Worlds Mod is currently being developed against 1.7.2, but that will probably change when we get new deobfuscation mappings. We'll either target 1.7.10 or 1.8.x depending on the timing.
It would be nice if we saw something like this in an upcoming update for vanilla, but at the same time I don't want to see all this work go down the drain.
Boy, sure wish I had been paying attention a month ago when all the cubic chunk mechanics were being discussed.
Oh well, better do it all at once, then.
=========== Introduction =============
Obviously, the problem with supporting cubic chunks isn't one of storage or encoding, but the implication that things can, and need to, interact on a scale that you can't possibly hope to simulate. No matter how efficient and tightly optimized your per-tick code is, the Law of Cubes (costs grow with the cube of the interaction range) is going to make you her *****.
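To put a number on that law of cubes, here's a quick back-of-the-envelope sketch (my own arithmetic, not anyone's engine code): count the 16^3 cubes within a given view distance and watch how fast it blows up.

```python
# The "law of cubes" in one function: the number of 16^3 cubes a player
# can see grows with the cube of the view distance, so every step up in
# draw distance multiplies the simulation and memory load.

def cubes_in_range(view_distance_cubes):
    """Count cubes in a cubic volume of the given radius (in cubes)."""
    side = 2 * view_distance_cubes + 1
    return side ** 3

for r in (4, 8, 16, 32):
    print(r, cubes_in_range(r))
# Doubling the radius roughly multiplies the cube count by eight.
```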
So what can we do about it? Well, if you look at other voxel engines (of the volumetric modelling variety, not Minecraft-style block worlds), you'll notice that, to a man, every implementation features very small voxels, to the point that individual voxels become indistinguishable from one another (except for Ken Silverman's Voxlap engine, but that one's a freak of the voxel world). Most of the time, the developer goes to great effort to hide the discreteness of the voxel structure. Because that detail is obscured, it opens opportunities for all manner of corner-cutting: octree storage (subdividing the grid only where there is a meaningful improvement in fidelity), Google Maps-style streaming detail, acceleration structures (mechanisms mixed into the data model to tell the renderer where it can take shortcuts, like large empty spaces). The entire discipline is about making voxel data smaller, faster, and more intelligent.
...Except for block worlds. Around here, we care that everything is sharply defined and cubic, and we also care about 1:1 detail, which is why we can't leverage all those state-of-the-art techniques that make voxels a viable, scalable product. But what if we could? In an effort to play by the rules of the vanilla Minecraft physics engine, we are artificially making things harder on ourselves than they need to be. If you managed to get a 2 km draw distance, you would find that individual blocks create an awful-looking Moiré pattern, since only one polygon can occupy a single screen-space pixel at a time (short of supersampling AA, which is expensive and doesn't really agree with Minecraft). So why should we have to draw blocks that way? If you make a 31-wide platform a mile up in the air, Minecraft says the shadow it casts would be completely dark in the center. Even if that were feasible, it would look ridiculous, so why are we trying to do it? The same goes for water: why should we consider water on a block-by-block basis if that makes fluid dynamics impossible to handle on a large scale?
========== Lighting ===========
Minecraft lighting simulates a poor man's global illumination: the idea that rather than hard light and shadow, light bouncing off surfaces lights up everything nearby, making a smoothish gradient. The other principal tenet of global illumination is that the strength of a light isn't a function of distance, but of the angular size (arc length) of the source. If you have a nearby light source and a giant wall of light in the distance, and both take up the same amount of your field of view, they contribute the same amount of light to your position. What this means for us in practice is that, in terms of light contribution (excluding direct sunlight), the farther away a light source is, the larger it has to be in order to have any influence on a particular surface, meaning you can work with lower and lower resolution models and get the same effect. It would be difficult to tell whether you are inside or outside in a cave the size of a baseball stadium, but scaled down to a hut, it's not so difficult.
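That equal-angular-size principle is easy to sanity-check numerically. Here's a small sketch (my own illustration, using a disk-shaped source seen face-on) showing that a small nearby light and a big far-away light subtending the same angle contribute the same solid angle:

```python
import math

# Solid angle (in steradians) subtended by a disk light source seen
# face-on. Light contribution from a uniform emitter scales with this,
# not with raw distance.

def solid_angle_of_disk(radius, distance):
    half_angle = math.atan2(radius, distance)
    return 2 * math.pi * (1 - math.cos(half_angle))

near_lamp = solid_angle_of_disk(1.0, 10.0)    # small source, close by
far_wall = solid_angle_of_disk(10.0, 100.0)   # 10x bigger, 10x farther
print(abs(near_lamp - far_wall) < 1e-9)       # same angular size, same light
```

This is why distant light can be handled at lower and lower resolution: a source has to be huge to matter at range, and huge things are cheap to represent coarsely.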
I mentioned sunlight doesn't fit this model, and that's because the sun is so large and far away that its arc length doesn't change appreciably no matter where you are. So, assuming you are not trying to simulate the sun moving through the sky, all you need is the heightmap: if anything directly above you is occluding the sun, you are in shadow. No big deal. But I also said before that it would be ridiculous for a small platform a mile up to cast any appreciable shadow. The reason is that direct sunlight is not the only source of light; you also have ambient lighting. To keep things simple, ambient light is light coming from no place in particular. In our case, it comes mostly from scattered light in the sky and light reflected off the ground. If you are out in the open, it's the same no matter where you go. If you are under a tree, the leaves block the sky, so your only ambient light comes from the horizon and reflections off the ground. In other words, when not directly lit by the sun, the amount of light hitting a block is the arc-area facing the sky, the lit environment, or nearby point lights.
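The heightmap test for direct sunlight really is as small as it sounds. A minimal sketch (my own toy data, with a fixed overhead sun):

```python
# Heightmap-based direct-sunlight check: with a stationary overhead sun,
# a block is sunlit iff nothing opaque sits above it in its column.
# heightmap[x][z] holds the lowest y in that column that still sees the sky.

def is_sunlit(heightmap, x, y, z):
    return y >= heightmap[x][z]

heightmap = [[64, 64],
             [64, 70]]                      # an overhang at (1, 1)
print(is_sunlit(heightmap, 0, 65, 0))      # above the surface: True
print(is_sunlit(heightmap, 1, 65, 1))      # under the overhang: False
```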
If you fully simulate the contribution of light from all these sources, then recalculate with those results, over and over until the numbers stop changing appreciably, you've just done global illumination, and the results will look hella good. On the other hand, it's also hella expensive, to the point that only recently has anyone even attempted it in realtime, and only with the beefiest of GPUs. Additionally, at close range with block lights, and under direct sunlight, the current system isn't a terribly bad imitation. It's only at intermediate distances, where you're in the air, not next to any block light, but also not directly lit by the sun, that the imitation starts breaking down. The lighting algorithm also spends a disproportionate amount of time lighting up those places that don't actually contribute anything. So why not scrap the idea that sunlit spaces need to radiate?
This next part is hard to explain, so I'm just going to give a rough outline for now. There is a family of functions in mathematics called spherical harmonics, which are handy for representing directional lighting through a point in space. They describe any arbitrary combination of directional magnitudes in 3D as a set of coefficients. It's very similar to how you can recreate any sound by combining enough sine waves at the right frequencies and amplitudes. To apply lossy compression to a sound, you remove the sine waves that don't significantly affect the result; JPEG compression works more or less the same way, first converting color gradients to waves. The same applies to spherical harmonics: by adding or removing coefficients, you can increase or decrease the accuracy of the representation. What this means in practice is that we can represent light rays passing through a point from an infinite number of directions with a relative handful of parameters. Just don't go trying to simulate a laser pointer.
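To make that less abstract, here's a minimal sketch of the idea: project a directional light field onto just the first four (order-0 and order-1) real spherical harmonic coefficients by Monte Carlo sampling, then reconstruct it. The scene here (a cosine lobe pointing up) is my own toy example; the basis constants are the standard real SH normalizations.

```python
import math
import random

# First-order real spherical harmonic basis (4 coefficients total).
def sh_basis(x, y, z):
    return (0.282095,          # l=0
            0.488603 * y,      # l=1, m=-1
            0.488603 * z,      # l=1, m=0
            0.488603 * x)      # l=1, m=1

def random_direction(rng):
    # Uniform direction: rejection-sample the unit ball, then normalize.
    while True:
        v = [rng.uniform(-1, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:
            return [c / n for c in v]

rng = random.Random(42)
light = lambda x, y, z: max(z, 0.0)   # bright toward +z, dark elsewhere

# Project the light field onto the 4 coefficients (Monte Carlo integral).
n = 20000
coeffs = [0.0] * 4
for _ in range(n):
    d = random_direction(rng)
    basis = sh_basis(*d)
    L = light(*d)
    for i in range(4):
        coeffs[i] += L * basis[i]
coeffs = [c * 4 * math.pi / n for c in coeffs]

# Reconstruct the light arriving from any direction using only 4 numbers.
def reconstruct(x, y, z):
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

print(reconstruct(0, 0, 1) > reconstruct(0, 0, -1))  # True: +z is brighter
```

Four floats instead of a dense sphere of samples: smooth, low-frequency lighting survives, and only sharp features (the laser pointer) are lost.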
So now you can imagine a world filled with these things, one in the middle of each air block, each telling their neighbors how much light that vector is contributing. That takes care of getting light to travel in all directions, but it's still really expensive per point, and we're processing even more points than the old way. That's where the scaling rule comes into play. Light sources farther away need to be larger to contribute any meaningful light, so you can handle them at a lower resolution. As far as physical placement, this means larger open areas can be represented with fewer spheres. Out under the sky, they would be huge and paint with massive strokes, which is what you would expect from ambient light. We'd still use direct sunlight and point lighting, as well as scattering in the places where sunlight actually strikes (not simply passing through), but the big empty spaces are now vacuums that light simply passes through.
There are a number of considerations here, on top of the math being fairly gnarly: what arrangement and density of spheres gives us the best bang for our buck, and will the simulation hold up if there is a large difference in brightness between the sun and block lighting? Additionally, though it would be less noticeable, we'd need to simulate light propagation independently for block lights. Individual points of light die off pretty quickly, but in a big room full of point lights, the numerous little contributions add up and even out the light levels (this is floating point math, so tiny contributions are recorded just fine so long as they're all in the same range, which is what we want). With the right tweaking, and especially using the assumption that the sun's rays are stationary, there's no reason this shouldn't be feasible, at least server-side. Distant changes are also less noticeable, so long-distance light propagation doesn't have to be done all at once, either.
=========== Block Rendering ============
I won't go into great detail here, because this is part of my active research and there's too much to cover. Block geometry takes up a lot of space on a GPU. If memory serves, a normal vertex stores world coordinates, texture coordinates, an additional texture coordinate for lighting, a color for biome-sensitive blocks like grass, and a couple bytes of padding; multiplied by 4 vertices per face (block faces never share vertices), that adds up to 128 bytes per block face. That's a lot, and you run out fairly quickly, which is why the draw distance is so low. The worst part is, thanks to that law of cubes, every bit of range you add to the draw distance increases the memory footprint geometrically, which no amount of culling or optimization is going to offset for long. What we need here, and what all those other voxel engines do, is level-of-detail.
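Plugging that 128-bytes-per-face figure into a rough estimate shows why brute force dies quickly. The faces-per-column number below is my own crude assumption (not from the post), just to show the growth rate of visible surface geometry:

```python
# Rough vertex-memory estimate from the figures above: ~128 bytes per
# block face (4 vertices at ~32 bytes each), with visible surface area,
# and hence face count, growing with the square of the draw distance.

BYTES_PER_FACE = 32 * 4  # 4 vertices/face, ~32 bytes/vertex

def surface_faces_estimate(draw_distance_blocks, faces_per_column=2):
    # Crude assumption: a couple of exposed faces per surface column
    # over a (2r) x (2r) area around the player.
    side = 2 * draw_distance_blocks
    return side * side * faces_per_column

for r in (128, 256, 512):
    mib = surface_faces_estimate(r) * BYTES_PER_FACE / 2**20
    print(f"{r} blocks: ~{mib:.0f} MiB of vertex data")
```

Every doubling of the draw distance quadruples the surface geometry alone, and that's before counting the underground faces that get tessellated but never seen.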
The obvious problem with LOD is that a block world wears its discrete details like a flag, and you can't simply reduce the geometry in the blocky parts without it being spectacularly obvious. If the texture doesn't change, you can join adjacent block faces in the same plane to use fewer polygons. Nocte does this for Hexahedra, and it's been used by a couple of other block-world engines as well. Since lighting is stored at each vertex, this can pose a problem when not every tile has its own vertices. One engine solved that by using a dense 3D texture of light values and marrying the light and geometry via shader (though a dense 3D lightmap is pretty heavy in its own right). You can also solve it by only joining tiles where doing so maintains the original gradient (significantly less efficient, but by definition still more efficient than having a vertex at every tile corner). I don't recall how Nocte solved it; I'll have to look later.
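The face-joining idea is usually called greedy meshing. Here's a minimal 2D sketch of it over one plane of block faces; this is a simplified take on what Hexahedra-style engines do, not Nocte's actual code:

```python
# Greedy merge over one plane of block faces: adjacent faces with the
# same texture id get merged into larger rectangles, cutting quad count.

def greedy_mesh(plane):
    h, w = len(plane), len(plane[0])
    used = [[False] * w for _ in range(h)]
    quads = []  # (x, y, width, height, texture)
    for y in range(h):
        for x in range(w):
            if used[y][x] or plane[y][x] is None:
                continue
            tex = plane[y][x]
            # Grow right as far as the texture matches.
            x2 = x
            while x2 + 1 < w and not used[y][x2 + 1] and plane[y][x2 + 1] == tex:
                x2 += 1
            # Grow down while the whole strip below still matches.
            y2 = y
            while y2 + 1 < h and all(
                not used[y2 + 1][i] and plane[y2 + 1][i] == tex
                for i in range(x, x2 + 1)
            ):
                y2 += 1
            for yy in range(y, y2 + 1):
                for xx in range(x, x2 + 1):
                    used[yy][xx] = True
            quads.append((x, y, x2 - x + 1, y2 - y + 1, tex))
    return quads

plane = [
    ["grass", "grass", "stone"],
    ["grass", "grass", "stone"],
]
print(greedy_mesh(plane))  # 2 quads instead of 6: a 2x2 grass, a 1x2 stone
```

Note how the vertex-lighting problem shows up immediately: the merged 2x2 quad has only four corners to hang per-tile light values on.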
In any case, this is a nice way to shave off some geometry, and it doesn't really cost anything after the initial tessellation. Another complementary approach is occlusion culling: if you can guarantee some geometry will not be visible to the player, you don't need to draw that chunk, which saves GPU time. If you wanted, you could even avoid loading the geometry onto the GPU until it became visible. Since most of the below-ground geometry will never be seen by the player, this is an attractive option. Furthermore, the latest snapshot actually has an implementation of software occlusion culling, so this wouldn't be all that difficult to add.
Still, none of this addresses the fact that we can't reduce the detail of visible geometry without it being obvious, so long as we're using polygons. My research has been into using shaders and raymarching to at least pretend the blockiness is still present. I've had quite a bit of success on that front with natural heightmaps. Thanks to the way bilinear interpolation works, I can algebraically solve for the intersection of the camera ray with the smooth height surface, and from there terrace and then blockify the results to minimize the amount of fishing around the ray marcher has to do. At this point it's cheap enough that it really doesn't impact the fps, but there's still more to add before it's complete.
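For the curious, the algebra works out because a bilinear height patch substituted with a ray gives a quadratic in the ray parameter t. A sketch of that solve over a single unit grid cell (my own derivation of the idea described above, not the actual shader code):

```python
import math

# Ray vs. bilinear height patch over one unit cell. The surface is
# y = h(x, z) with h bilinear in x and z; substituting x(t), y(t), z(t)
# from the ray yields a quadratic in t, solvable in closed form.

def ray_vs_bilinear_patch(o, d, h00, h10, h01, h11):
    """Smallest t >= 0 where ray o + t*d meets the patch, or None."""
    A = h10 - h00
    B = h01 - h00
    C = h00 - h10 - h01 + h11   # the xz cross term
    ox, oy, oz = o
    dx, dy, dz = d
    qa = -C * dx * dz
    qb = dy - A * dx - B * dz - C * (ox * dz + oz * dx)
    qc = oy - h00 - A * ox - B * oz - C * ox * oz
    if abs(qa) < 1e-12:                      # degenerate: linear in t
        ts = [-qc / qb] if abs(qb) > 1e-12 else []
    else:
        disc = qb * qb - 4 * qa * qc
        if disc < 0:
            return None
        r = math.sqrt(disc)
        ts = [(-qb - r) / (2 * qa), (-qb + r) / (2 * qa)]
    hits = [t for t in sorted(ts)
            if t >= 0 and 0 <= ox + t * dx <= 1 and 0 <= oz + t * dz <= 1]
    return hits[0] if hits else None

# Flat patch at height 0.5, ray dropping straight down from y = 1:
print(ray_vs_bilinear_patch((0.5, 1.0, 0.5), (0, -1, 0), 0.5, 0.5, 0.5, 0.5))
# -> 0.5
```

In a real heightfield marcher you'd run this per-cell along the ray's footprint, then terrace/blockify the hit as described above.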
Trees, buildings, and other volumetric structures can't be drawn with this system, and I don't have a great solution for that. My hope is that by saving enough geometry with the above systems, we'll have enough left in the budget to do some regular polygon rendering way out in the distance. There are also a number of techniques for generalizing a voxel model into a simpler voxel model, but in this case the texture really needs to be preserved, and I don't have a great way of efficiently mapping a procedural texture onto an arbitrary mesh. Ways do exist, but they are complicated and expensive. Another thing we can try is forced depth of field, like what they did in Far Cry 3 to obscure their low-LOD environment. Once the blockiness becomes undetectable, tons of options for reducing LOD with smooth meshes become possible.
=================== Meta Layers ======================
One of the earlier discussions was about fluid dynamics and how that might work. Back in the /indev days of Minecraft, water was finite and could be moved around or removed from an area by letting it dry out. The surrounding sea worked as infinite source blocks, and any water block connecting to the sea became an infinite source as well. It's hard to convey just how great that was if you haven't tried it. I wholeheartedly recommend dialing your launcher back to /indev and giving it a try. Instead of annoying streams running down tunnels, you could completely flood a cave if you broke through the wrong wall, and a lot of the caves were already flooded, so to explore you'd have to dive in and pop up in other chambers. We'd use sand held up with torches as floodgates, since doors and signs didn't exist yet. It was seriously the best thing ever.
Once maps became unbounded, this was removed, since there was no convenient way to work out how much water you were interacting with at any given moment. But what if there was? Just as we could lower the detail of light propagation and geometry, what if we could do the same with lakes and oceans? Imagine if, instead of trying to count all the individual water blocks in a lake, you had a single object which says: "this is a lake. It contains X gallons of water and is fully contained in region A, excluding regions B and C. Any water block there is part of the lake." Now you know how much water is in the lake, and what channeling part of it into a pond will do to the water level. This is also data you can keep in memory and interact with even when the chunks are unloaded. If you wanted to get fancy with the fluid dynamics, now that you know the region, you can further subdivide the area to calculate flow through bottlenecks, and calculate what will happen even if the water source is kilometers away, without loading all the chunks in between (assuming we aren't worrying about the water interacting with anything along the way, like redstone, but then you could model redstone circuits the same way).
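A minimal sketch of that "lake as one object" idea, with all names invented here for illustration: track the basin's column floors plus a total volume, and derive the surface level on demand instead of simulating individual water blocks.

```python
# One meta-object per body of water: volume plus basin shape, with the
# surface level computed by filling the deepest columns first.

class Lake:
    def __init__(self, column_floors, volume):
        self.column_floors = column_floors  # basin floor height per column
        self.volume = float(volume)         # total water, in block volumes

    def channel_out(self, amount):
        """Divert water into, say, a pond; the level drops accordingly."""
        self.volume = max(0.0, self.volume - amount)

    def surface_level(self):
        floors = sorted(self.column_floors)
        remaining = self.volume
        for i in range(len(floors) - 1):
            # Capacity of the band between this floor and the next one up,
            # across the i+1 columns already underwater.
            band = (i + 1) * (floors[i + 1] - floors[i])
            if remaining <= band:
                return floors[i] + remaining / (i + 1)
            remaining -= band
        return floors[-1] + remaining / len(floors)

lake = Lake([0, 0, 2, 2], volume=8)
print(lake.surface_level())  # -> 3.0 (deep columns hold 3 each, shallow 1 each)
lake.channel_out(4)
print(lake.surface_level())  # -> 2.0
```

The point is that none of this needs the lake's chunks loaded: the object is small, lives in memory, and the per-block water can be reconstituted from the level whenever a chunk streams in.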
For Futurecraft's engine, I have layers for anything that can't reasonably be accommodated on a per-chunk basis or doesn't fit the 1:1 block scale (like the lighting). These layers can be loaded and streamed from the server independently of the chunk layer, so long as there is no direct interaction while no players are present. Tying everything to the blocks and chunks was the biggest mistake Minecraft made. You have a physical meta layer, but you have to first find out what each block is before you can interpret its meta. Ideally, any blocks that interact as a system, like redstone or BuildCraft pipes or whatnot, should at least partially live in a data model where every component knows precisely how it relates to the rest of the system, along with any additional system info (like the current running through a stretch of wire). Otherwise you're just spamming the chunk cache to work out basic relationships, or wasting more expensive tile entities simply to store a couple of extra pieces of data.
That's a basic overview of about three years' worth of research and forum discussions on the subject of cubic chunks and scaling Minecraft. There have been a lot of interesting and clever proposals which I wasn't able to get into, but the above ones are my favorites, and are what I'm actively attempting to implement for my own project.
If anyone's interested in any of these ideas, I can go into greater detail about how it would work and the challenges involved. Minecrak, you were involved in Robinton's CC thread a lot more than I was, if you can recall some of the other proposals that were made over there, let me know, my memory's a little fuzzy.
TL;DR NASA's impossible propulsion drive may be explained with a few small adjustments to the standard model.
Where have you been while this project was going on, man? Those arguments would have been massively helpful at the beginning of things, when people were still doubting that such a project could even exist. I think it would be excellent if you could begin working on Tall Worlds with the rest of the crew and attempt to implement those ideas into the mod. This could bring Tall Worlds onto an even higher level of optimization than it already was. It would put the latest snapshots of Vanilla to shame!
NASA's impossible propulsion drive may be explained with a few small adjustments to the standard model.
Precisely! We just need about 1 Kg of positrons. That should probably fix things.
Anyway, it's awesome to finally have a graphics researcher in here. My background is in algorithms and computational geometry, but actual rendering is quite a bit more specialized. I've been pretending to know what I'm doing in the render code, but it'll be nice to have a rendering expert hanging around.
Let's do this one idea at a time. First, lighting. Minecraft's infinitely long shadows are pretty ridiculous. Ambient scattering should make the shadows fall off over distance, but that's hard to do without something expensive like global illumination. I've been trying to think about some kind of algorithmic hack that could simulate scattering somehow, but I don't have any good ideas yet. Trying to implement calculations using spherical harmonics is probably overkill for Minecraft. =) We don't actually need to compress the representation of multiple light sources into a few coefficients since for sunlight, there's only one direction we care about. Unless you want to represent scattered light using functions over S^2, but then we're back to expensive iterative things like global illumination. Maybe if we choose the right network of light "sources" (ie a set of coefficients at a point) by computing some kind of network/graph over the terrain, we could keep lighting efficient, but that sounds like a non-trivial problem too.
On the other hand, representing all lighting as a graph of coefficients at points could mean we could do away with saving explicit light values at each block. That could improve cube read/write/transfer times significantly. Then lighting for each block could be calculated on demand. Btw, my "cube" term means a 16x16x16 array of blocks.
It might work, but it sounds like a HUGE departure from the existing lighting system. There's gotta be an easier way.
Next, onto LOD rendering and occlusion culling. I think some of the latest 1.8 snapshots added support for vertex buffers. That (I think) should replace the call lists that were originally used for rendering and make things a lot more efficient. I don't want to spend too much time trying to work on rendering efficiency now if there are already fixes coming down the pipe.
If we want to increase the render distance though, we definitely need some kind of LOD. Even if we try to brute force the rendering, we're going to end up with checkerboard objects in the distance because the blocks just get too small. LOD also goes hand-in-hand with the cube loading. Cubes have to be loaded somewhere to be rendered. If we only load far cubes on the server, then we can send some kind of less-detailed representation to the client, but what's the right representation to choose? It seems tempting to take advantage of the directional nature of far cubes (ie, a slow-moving player will only see a cube from essentially one viewpoint), but then we can't do that efficiently on a multi-player server without having the server do the rendering for the client. There needs to be some kind of direction-independent simplification for a cube. I don't really care about preserving blockiness so much, as long as we can just put up some color in the background of a viewport and have it look reasonably good.
As for occlusion culling, I was eventually planning to come up with some kind of data structure that could answer visibility queries efficiently. Then the server could only send cubes that were visible to a player (or close enough to be heard regardless of visibility). I haven't really started thinking about this much though.
Oh wait, I just came up with an awesome idea for a LOD cube simplification. It's SOOO Minecrafty, it might just be perfect.
How about we just simplify a whole 16x16x16 cube into a single block?
We can pick the modal block in the cube to represent the cube. Mountains in the distance are going to look ridiculously blocky, and I kinda think that's going to be completely awesome. We could also store the cube-as-block things extremely efficiently in the terrain database so the server doesn't have to load full cubes all the time to do the simplification. We can just maintain the cube-as-block things when we update the full-resolution block terrain. Sending them to the client is going to be super easy. We can't get around the law of cubes, but sending n/16/16/16 blocks instead of n will go a long way.
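The modal-block pick is a one-liner with a counter. A minimal sketch (treating "air" specially so a mostly-hollow cube doesn't render as solid air, which is my own tweak, not part of the proposal above):

```python
from collections import Counter

# Cube-as-block simplification: collapse a 16x16x16 cube to its most
# common non-air block, falling back to air for empty cubes.

def cube_as_block(blocks):
    """blocks: iterable of the 4096 block ids in one 16x16x16 cube."""
    counts = Counter(b for b in blocks if b != "air")
    if not counts:
        return "air"
    return counts.most_common(1)[0][0]

cube = ["stone"] * 3000 + ["dirt"] * 1000 + ["air"] * 96  # 4096 total
print(cube_as_block(cube))  # -> stone
```

Maintaining these incrementally on block updates, as suggested above, means the server never has to load the full cube just to answer "what color is this from a mile away."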
I'm sitting here giggling over the fact that this idea just now occurred to you. It's been tossed around on the Cubic Chunks thread so much I've started ignoring when people mention it. I've always thought this was the best approach, but there were always bigots in that thread that refused to allow their 30k tall mountains to look any less detailed than Minecraft's blocks allowed for....
There might have been some good ideas in that thread, but I just can't sift through all 200+ pages to find them. If we end up re-inventing some ideas here, so be it.
If those bigots want to write a rendering engine, they're welcome to do whatever the hell they want. Since I'm running this show though, I'm going to start with something simple.
Because I'm working on this.
Let's do some math.
1/3 = 0.333...
1/3 * 3 = 1
0.333... * 3 = 0.999...
1 = 0.999...
1 - 0.999... = 0.999... - 0.999...
0.0...1 = 0
0.0...1 * 10... = 0 * 10...
1 = 0
Oh well, better do it all at once, then.
=========== Introduction =============
Obviously, the problem with supporting cubic chunks isn't one of storage or encoding, but the implication that two things can and need to interact on a scale that you can't possibly hope to simulate. No matter how efficient and tightly optimized your per-tick code is, the Law of Cubes is going to make you her *****.
So what can we do about it? Well, if you look at other voxel engines (of the volumetric modelling variety, not the Minecraft block worlds), you'll notice that to the man, every implementation features very small voxels, to the point that individual voxels become indistinguishable from one another (except for Ken Silverman's Voxlap engine, but that one's a freak of the voxel world). Most of the time, the developer will go to great effort to hide the discreteness of the voxel structure. Because that detail is obscured, it opens opportunities for all manner of corner-cutting. Oct-tree storage (subdiving the grid only where there is a meaningful improvement in fidelity), google-maps style streaming detail, acceleration structures (mechanisms mixed into the data model to tell the render where it can take shortcuts, like large empty spaces. The entire discipline is about making voxel data smaller, faster, and more intelligent.
...Except for block worlds. Around here, we care that everything is sharply defined and cubic, and we also care about 1:1 detail, which is why we can't leverage all those state of the art techniques that make voxels actually a viable, scalable product. But what if we could? In an effort to play by the rules of the vanilla Minecraft physics engine, we are artificially making things harder on ourselves than they need to be. If you managed to get a 2km draw distance, you would find that individual blocks create a awful looking Moire pattern, since only one polygon can occupy a single screen-space pixel at a time (without a supersampling AA which is expensive doesn't really agree with Minecraft). So why should we have to draw blocks that way? If you make a 31-wide platform a mile up in the air, Minecraft says the shadow it casts would be completely dark in the center. Even if it were feasible, it would look ridiculous, so why are we trying to do that? Same goes for water, why should we consider water on a block-by-block basis if it becomes impossible to handle fluid dynamics on a large scale?
========== Lighting ===========
Minecraft lighting simulates a poor man's global illumination: the idea that rather than hard light and shadow, light bouncing off surfaces lights up everything nearby, making a smoothish gradient. The other principal tenet of global illumination is that the strength of light isn't a function of distance, but of the arc-length of the source. If you have a nearby light source and a giant wall of light in the distance, and both take up the same amount of your field of view, they are contributing the same amount of light to your position. The practical thing this means for us is that in terms of light contribution (excluding direct sunlight), the farther away a light source is, the larger it has to be in order to have any influence on a particular surface, meaning you can be working with lower and lower resolution models and getting the same effect. It would be difficult to tell whether you are inside or outside in a cave the size of a baseball stadium, but scaled down to a hut, it's not so difficult.
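As a toy illustration of that arc-length principle (all numbers hypothetical): a disc-shaped light seen face-on subtends a solid angle of 2π(1 - d/√(d²+r²)), so a source ten times larger at ten times the distance fills the same slice of your field of view and contributes the same light.

```python
import math

def solid_angle(radius, distance):
    """Solid angle (steradians) subtended by a disc of the given
    radius, seen face-on from the given distance."""
    return 2 * math.pi * (1 - distance / math.hypot(distance, radius))

near = solid_angle(1.0, 2.0)    # small light close by
far  = solid_angle(10.0, 20.0)  # 10x larger light, 10x farther away
# Same apparent size, same contribution -- so distant light can be
# modelled at much lower resolution.
assert abs(near - far) < 1e-12
```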
I mentioned sunlight didn't fit this model, and that's because the sun is so large and far away that its arc length doesn't change appreciably no matter where you are. So, assuming you are not trying to simulate the sun moving through the sky, all you need is the heightmap: if nothing directly above you is occluding the sun, you're lit; otherwise you're in shadow. No big deal. But I also said before that it would be ridiculous for a small platform a mile up to be casting any appreciable shadow. The reason for this is that direct sunlight is not the only source of light. You also have ambient lighting. To keep things simple, ambient light is light coming from no place in particular. In our case, it's coming mostly from the scattered light in the sky and reflected light on the ground. If you are out in the open, it's the same no matter where you go. If you are under a tree, the leaves are blocking the sky, so your only ambient light is coming from the horizon and reflected off the ground. In other words, when not directly lit by the sun, the amount of light hitting a block is the arc-area facing the sky, the lit environment, or nearby point lights.
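The heightmap sun test above is as simple as it sounds. A minimal sketch (the dictionary-keyed heightmap is illustrative, not Minecraft's actual structure):

```python
def in_sunlight(heightmap, x, y, z):
    """A block is directly sunlit iff nothing in its column sits above
    it, i.e. it is at or above the column's highest opaque block."""
    return y >= heightmap[(x, z)]

# Highest opaque block per (x, z) column.
heights = {(0, 0): 64, (1, 0): 70}
assert in_sunlight(heights, 0, 64, 0)      # on the surface: lit
assert not in_sunlight(heights, 0, 40, 0)  # in a cave below: shadowed
```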
If you fully simulate the contribution of light from all these sources, and then recalculate again with those results, over and over until the new numbers stop changing appreciably, then you've just done global illumination, and the results will look hella good. On the other hand, it's also hella expensive, to the point that only recently has anyone even attempted doing it in realtime with the beefiest of GPUs. Additionally, at close range with block lights, and with direct sunlight, the current system isn't a terribly bad imitation. It's only at those intermediate distances, where you're in the air, not next to any block light, but also not directly lit by sunlight, that the imitation starts breaking down. The lighting algorithm also spends a disproportionate amount of time lighting up these places that don't actually contribute anything. So why not scrap the idea that sunlit spaces need to radiate?
This next part is hard to explain, so I'm just going to give a rough outline for now. There is a complex system of equations in the world of mathematics called spherical harmonics, which are handy for representing directional lighting through a point in space. It's a set of basis functions which can describe any arbitrary combination of directional magnitudes in 3D as a set of coefficients. It's very similar to how you can recreate any sound by combining enough sine waves at the right frequency and amplitude. To apply lossy compression to a sound, you try to remove all the sine waves that don't significantly affect the result. JPEG compression works more or less the same way, first converting color gradients to waves. The same applies to spherical harmonics. By adding or removing coefficients, you can increase or decrease the accuracy of the representation. What this means in practice is that we can represent light rays passing through a point from an infinite number of directions with a relative handful of parameters. Just don't go trying to simulate a laser pointer.
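To make that concrete, here's a sketch using only the first two SH bands (four coefficients, the standard real-valued constants): encode light arriving from one direction, then ask how much of it reaches any other direction. With so few coefficients the reconstruction is deliberately blurry, which is exactly why a laser pointer is out of scope.

```python
def sh_basis(x, y, z):
    """First four real spherical-harmonic basis functions (bands 0-1),
    evaluated at a unit direction (x, y, z)."""
    return (0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x)

def project(direction, intensity):
    """Encode a directional light as 4 SH coefficients."""
    return [intensity * b for b in sh_basis(*direction)]

def evaluate(coeffs, direction):
    """Reconstruct the (low-frequency) intensity from a direction."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(*direction)))

sun = project((0.0, 1.0, 0.0), 1.0)  # light coming straight down
# The reconstruction is smooth, but it still knows which way is bright:
assert evaluate(sun, (0.0, 1.0, 0.0)) > evaluate(sun, (0.0, -1.0, 0.0))
```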
So now you can imagine a world filled with these things, one in the middle of each air block, each telling their neighbors how much light that vector is contributing. That takes care of getting light to travel in all directions, but it's still really expensive per point, and we're processing even more points than the old way. That's where the scaling rule comes into play. Light sources farther away need to be larger to contribute any meaningful light, so you can handle them at a lower resolution. As far as physical placement, this means larger open areas can be represented with fewer spheres. Out under the sky, they would be huge and paint with massive strokes, which is what you would expect from ambient light. We'd still use direct sunlight and point lighting, as well as scattering in the places where sunlight actually strikes (not simply passing through), but the big empty spaces are now vacuums that light simply passes through.
There are a number of considerations with this, on top of the math being fairly gnarly, like what arrangement and density of spheres gives us the best bang for our buck, and whether the simulation will work if there is a large difference in brightness between the sun and block lighting. Additionally, though it would be less noticeable, we'd need to simulate light propagation independently for block lights. Individual points of light die off pretty quickly, but in a big room full of point lights, the numerous little contributions add up and even out the light levels (this is floating point math, so tiny contributions are recorded just fine so long as all the contributions are in that same range, which is what we want). With the right tweaking, and especially using the assumption that the sun rays are stationary, there's no reason this shouldn't be feasible, at least server-side. Distant changes are also less noticeable, so there is no reason long-distance light propagation has to be done all at once, either.
=========== Block Rendering ============
I won't go into great detail here, because this is part of my active research and there's too much to cover. Block geometry takes up a lot of space on a GPU. If memory serves, a normal vertex stores world coordinates, texture coordinates, an additional texture coordinate for lighting, a color for biome-sensitive blocks like grass, and a couple bytes of padding; multiplied by 4 vertices per face (and block faces never share vertices), that adds up to 128 bytes per block face. That's a lot, and you run out fairly quickly, which is why the draw distance is so low. The worst part is, thanks to that law of cubes, every bit of range you add to the draw distance increases the memory footprint geometrically, which no amount of culling or optimization is going to offset for long. What we need here, and what all those other voxel engines do, is implement level-of-detail.
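A back-of-the-envelope check of those figures (the field sizes are this post's assumptions, not a dump of Minecraft's actual vertex format):

```python
# Per-vertex layout, as estimated above:
position = 3 * 4   # x, y, z as 32-bit floats
texture  = 2 * 4   # u, v
lightmap = 2 * 4   # second texture coordinate for light
color    = 4       # biome tint (padding rounds the stride up)
bytes_per_vertex = position + texture + lightmap + color  # 32
bytes_per_face = 4 * bytes_per_vertex  # block faces never share vertices
assert bytes_per_face == 128

# The "law of cubes": doubling the draw distance multiplies the loaded
# volume (and hence worst-case geometry) by roughly eight.
def loaded_volume(render_distance_chunks):
    side = 2 * render_distance_chunks + 1
    return side ** 3

assert loaded_volume(16) > 7 * loaded_volume(8)
```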
The obvious problem behind LOD is that the block world environment wears its discrete details like a flag, and you can't simply reduce the geometry in the blocky parts without it being spectacularly obvious. If the texture doesn't change, you can join up adjacent block faces in the same plane to use fewer polygons. Nocte does this for Hexahedra, and it's been used by a couple other block world engines as well. Since lighting is stored at each vertex, this can pose a problem when not every tile has its own vertices. One engine solved that by using a dense 3D texture of light values and marrying the light and geometry via shader (though a dense 3D lightmap is pretty heavy in its own right). You can also solve it by only joining tiles where you can maintain the original gradient (significantly less efficient, but by definition still more efficient than having a vertex at each tile corner). I don't recall how Nocte solved it; I'll have to look later.
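The face-joining idea (often called greedy meshing) can be sketched in 2D: sweep over a plane of faces and grow each unclaimed cell into the largest same-texture rectangle, so one quad replaces many. This is just the general technique, not Hexahedra's actual implementation.

```python
def greedy_rects(grid):
    """Merge same-texture cells of a 2D face grid into larger
    rectangles (x, y, width, height, texture). None means no face."""
    h, w = len(grid), len(grid[0])
    used = [[False] * w for _ in range(h)]
    rects = []
    for y in range(h):
        for x in range(w):
            if used[y][x] or grid[y][x] is None:
                continue
            tex = grid[y][x]
            x2 = x  # grow the run to the right
            while x2 + 1 < w and not used[y][x2 + 1] and grid[y][x2 + 1] == tex:
                x2 += 1
            y2 = y  # grow downward while the whole row matches
            while y2 + 1 < h and all(
                not used[y2 + 1][i] and grid[y2 + 1][i] == tex
                for i in range(x, x2 + 1)
            ):
                y2 += 1
            for yy in range(y, y2 + 1):  # claim the rectangle
                for xx in range(x, x2 + 1):
                    used[yy][xx] = True
            rects.append((x, y, x2 - x + 1, y2 - y + 1, tex))
    return rects

# A 4x4 wall of stone collapses from 16 faces to a single quad.
wall = [["stone"] * 4 for _ in range(4)]
assert greedy_rects(wall) == [(0, 0, 4, 4, "stone")]
```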
In any case, this is a nice way to shave off some geometry that doesn't really cost anything after the initial tessellation. Another complementary approach is occlusion culling. If you can guarantee some geometry will not be visible to the player, you don't need to draw that chunk, which saves GPU time. If you wanted, you could even avoid loading the geometry onto the GPU until it became visible. Since most of the below-ground geometry will never be seen by the player, this is an attractive option. Furthermore, the latest snapshot actually has an implementation of software occlusion culling, so this wouldn't be all that difficult to add.
Still, none of this addresses the fact that we can't reduce the detail of visible geometry without it being obvious if we're using polygons. My research has been into finding a way to use shaders and raymarching to at least pretend that the blockiness is still present. I've had quite a bit of success on that front with natural heightmaps. Thanks to the way bilinear interpolation works, I can algebraically solve for the intersection of the camera ray with the smooth height surface, and from there terrace and then blockify the results to minimize the amount of fishing around the ray marcher has to do. At this point it's cheap enough that it really doesn't impact the fps, but there's still more to add before it's complete.
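To show the terracing idea in miniature (a 2D toy, not the analytic bilinear solve described above): step the ray one column at a time and stop at the first column whose terrain height reaches the ray, which is all "blockifying" a smooth hit amounts to.

```python
def march_heightmap(heights, x0, y0, dx, dy, max_steps=100):
    """Toy 2D heightmap raymarch: advance the ray and return the index
    of the first column whose terrain height reaches the ray, or None
    if the ray escapes."""
    x, y = x0, y0
    for _ in range(max_steps):
        col = int(x)
        if 0 <= col < len(heights) and y <= heights[col]:
            return col  # hit: the ray has entered this column's blocks
        x += dx
        y += dy
    return None  # flew off into the sky

terrain = [3, 3, 4, 6, 5]  # block heights per column
# A ray falling from (0, 8) at 45 degrees first dips into column 3.
assert march_heightmap(terrain, 0.0, 8.0, 1.0, -1.0) == 3
```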
Trees, buildings, and other volumetric structures can't be drawn with this system, and I don't have a great solution for that. My hope is that by saving enough geometry with the above systems, we'll have enough left in the budget to do some regular polygon rendering way out in the distance. There are also a number of techniques for generalizing a voxel model into a simpler voxel model, but in this case, the texture really needs to be preserved, and I don't have a great way of efficiently mapping a procedural texture to an arbitrary mesh. Ways do exist, but they are complicated and expensive. A different thing we can try is a forced depth of field, like what they did in Far Cry 3 to obscure their low-LOD environment. Once the blockiness becomes undetectable, tons of options for reducing LOD with smooth meshes become possible.
=================== Meta Layers ======================
One of the earlier discussions was about fluid dynamics and how that might work. Back in the /indev days of Minecraft, water was finite and could be moved around or removed from an area by letting it dry out. The surrounding sea worked as infinite source blocks, and any water block connecting to the sea became an infinite source as well. It's hard to convey just how great that was if you haven't tried it. I wholeheartedly recommend dialing back your launcher to /indev and giving it a try. Instead of annoying streams running down tunnels, you could completely flood a cave if you broke through the wrong wall, and a lot of the caves were already flooded, so to explore you'd have to dive in and pop up in other chambers. We'd use sand held up with torches to act as floodgates, since doors and signs didn't exist yet. It was seriously the best thing ever.
Once maps became unbounded, this was removed, since there was no convenient way to work out how much water you were interacting with at any given moment. But what if there was? Just like how we could lower the detail with light propagation and geometry, what if we could do the same thing with lakes and oceans? Imagine if instead of trying to count all the individual water blocks in a lake, you have a single object which says "this is a lake. It contains X gallons of water and is fully contained in region A, excluding regions B and C. Any water block there is part of the lake". Now you know how much water is in the lake, and what that will do to the water level if you channel part of it into a pond. This is also data you can keep in memory and interact with even if the chunks are unloaded. If you wanted to get fancy with the fluid dynamics, now that you know the region, you can further subdivide the area to calculate flow through bottlenecks, and also calculate what will happen even if the water source is kilometers away without loading all the chunks in between (assuming we aren't worrying about water interacting with anything along the way, like redstone, but then you could model redstone circuits the same way).
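A minimal sketch of such a water meta-object (the class name, the flat-footprint assumption, and the block-unit volume are all illustrative): track totals instead of blocks, and the surface level falls out of simple arithmetic even while the chunks are unloaded.

```python
class Lake:
    """Water meta-object: instead of counting individual water blocks,
    track the lake's total volume and footprint, so the surface level
    can be recomputed after water is added or drained."""

    def __init__(self, footprint_blocks, volume_blocks):
        self.footprint = footprint_blocks  # columns covered by the lake
        self.volume = volume_blocks        # total water, in block units

    @property
    def depth(self):
        """Average water depth, assuming a flat lake bed."""
        return self.volume / self.footprint

    def drain(self, blocks):
        """Remove up to `blocks` of water; return how much came out."""
        removed = min(blocks, self.volume)
        self.volume -= removed
        return removed

lake = Lake(footprint_blocks=1000, volume_blocks=5000)  # 5 blocks deep
lake.drain(1000)          # channel some water off into a pond
assert lake.depth == 4.0  # the whole surface drops by one block
```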
For Futurecraft's engine, I have layers for any sort of thing that can't reasonably be accommodated on a per-chunk basis or doesn't fit the 1:1 block scale (like the lighting). These layers can be loaded and streamed from the server independently from the chunk layer, so long as there is no direct interaction while no players are present. Tying everything to the blocks and chunks is the biggest mistake Minecraft makes. You have a physical meta layer, but you have to first find out what each block is before you can interpret its meta. Ideally, any blocks that interact as a system, like redstone or Buildcraft pipes or whatnot, should at least partially live in a data model where every component knows precisely how it relates to the rest of the system, as well as any additional system info (like the current running through a stretch of wire). Otherwise you're just spamming the chunk cache to work out basic relationships, or wasting more expensive tile entities simply to store a couple extra pieces of data.
================= Summary =============================
That's a basic overview of about three years worth of research and forum discussions on the subject of cubic chunks and scaling Minecraft. There have been a lot of interesting and clever proposals which I wasn't able to get into, but the above ones are my favorite, and are what I'm actively attempting to implement for my own project.
If anyone's interested in any of these ideas, I can go into greater detail about how it would work and the challenges involved. Minecrak, you were involved in Robinton's CC thread a lot more than I was, if you can recall some of the other proposals that were made over there, let me know, my memory's a little fuzzy.
TL;DR NASA's impossible propulsion drive may be explained with a few small adjustments to the standard model.
I would be more interested in your details regarding the meta-layers and LOD block rendering, fr0stbyte.
Where have you been while this project was going on man? Those arguments would have been massively helpful at the beginning of things, when people were still doubting that such a project could even exist. I think it would be excellent if you could begin working on Tall Worlds with the rest of the crew and attempt to implement those ideas into the mod. This could bring Tall Worlds onto an even higher level of optimization than it already was. It would put the latest snapshots of Vanilla to shame!
Precisely! We just need about 1 Kg of positrons. That should probably fix things.
Anyway, it's awesome to finally have a graphics researcher in here. My background is in algorithms and computational geometry, but actual rendering is quite a bit more specialized. I've been pretending to know what I'm doing in the render code, but it'll be nice to have a rendering expert hanging around.
Let's do this one idea at a time. First, lighting. Minecraft's infinitely long shadows are pretty ridiculous. Ambient scattering should make the shadows fall off over distance, but that's hard to do without something expensive like global illumination. I've been trying to think about some kind of algorithmic hack that could simulate scattering somehow, but I don't have any good ideas yet. Trying to implement calculations using spherical harmonics is probably overkill for Minecraft. =) We don't actually need to compress the representation of multiple light sources into a few coefficients, since for sunlight there's only one direction we care about. Unless you want to represent scattered light using functions over S^2, but then we're back to expensive iterative things like global illumination. Maybe if we choose the right network of light "sources" (i.e. a set of coefficients at a point) by computing some kind of network/graph over the terrain, we could keep lighting efficient, but that sounds like a non-trivial problem too.
On the other hand, representing all lighting as a graph of coefficients at points could mean we could do away with saving explicit light values at each block. That could improve cube read/write/transfer times significantly. Then lighting for each block could be calculated on demand. Btw, my "cube" term means a 16x16x16 array of blocks.
It might work, but it sounds like a HUGE departure from the existing lighting system. There's gotta be an easier way.
Next, onto LOD rendering and occlusion culling. I think some of the latest 1.8 snapshots added support for vertex buffers. That (I think) should replace the call lists that were originally used for rendering and make things a lot more efficient. I don't want to spend too much time trying to work on rendering efficiency now if there are already fixes coming down the pipe.
If we want to increase the render distance though, we definitely need some kind of LOD. Even if we try to brute force the rendering, we're going to end up with checkerboard objects in the distance because the blocks just get too small. LOD also goes hand-in-hand with the cube loading. Cubes have to be loaded somewhere to be rendered. If we only load far cubes on the server, then we can send some kind of less-detailed representation to the client, but what's the right representation to choose? It seems tempting to take advantage of the directional nature of far cubes (ie, a slow-moving player will only see a cube from essentially one viewpoint), but then we can't do that efficiently on a multi-player server without having the server do the rendering for the client. There needs to be some kind of direction-independent simplification for a cube. I don't really care about preserving blockiness so much, as long as we can just put up some color in the background of a viewport and have it look reasonably good.
As for occlusion culling, I was eventually planning to come up with some kind of data structure that could answer visibility queries efficiently. Then the server could only send cubes that were visible to a player (or close enough to be heard regardless of visibility). I haven't really started thinking about this much though.
Yup, you should work on the mod with us. =)
How about we just simplify a whole 16x16x16 cube into a single block?
We can pick the modal block in the cube to represent the cube. Mountains in the distance are going to look ridiculously blocky, and I kinda think that's going to be completely awesome. We could also store the cube-as-block things extremely efficiently in the terrain database so the server doesn't have to load full cubes all the time to do the simplification. We can just maintain the cube-as-block things when we update the full-resolution block terrain. Sending them to the client is going to be super easy. We can't get around the law of cubes, but sending n/16/16/16 blocks instead of n will go a long way.
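A sketch of that modal-block simplification (block names and the flat list representation are illustrative): count the non-air blocks in a cube and keep the most common one.

```python
from collections import Counter

def cube_as_block(cube_blocks):
    """Collapse a 16x16x16 cube (4096 block ids) to its most common
    non-air block -- the 'modal block'. An all-air cube stays air."""
    counts = Counter(b for b in cube_blocks if b != "air")
    if not counts:
        return "air"
    return counts.most_common(1)[0][0]

cube = ["stone"] * 3000 + ["dirt"] * 500 + ["air"] * 596
assert cube_as_block(cube) == "stone"
assert cube_as_block(["air"] * 4096) == "air"
```

Maintaining these alongside the full-resolution terrain means a client render distance of n blocks only ever costs n/16³ of the data.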
I'm sitting here giggling over the fact that this idea just now occurred to you. It's been tossed around on the Cubic Chunks thread so much I've started ignoring when people mention it. I've always thought this was the best approach, but there were always bigots in that thread that refused to allow their 30k tall mountains to look any less detailed than Minecraft's blocks allowed for....
There might have been some good ideas in that thread, but I just can't sift through all 200+ pages to find them. If we end up re-inventing some ideas here, so be it.
If those bigots want to write a rendering engine, they're welcome to do whatever the hell they want. Since I'm running this show though, I'm going to start with something simple.