Minecraft for the XBox 360 does not require XBox Live.
With very few exceptions, development targets the lowest-spec configuration the system shipped with, not what you can buy for $20. (By the way, flash drives are not hard drives, in ways too numerous to explain here.) That's why games on the XBox 360 do not require the player to have a keyboard, even though you can get a suitable keyboard for $5. The development (and approval) target is the console as shipped.
It's not about what the RAM can handle. And it's not about "today's standards" either. You can implement virtual memory on a machine using magnetic-core memory, if you really want to, and it's probably been done. The DEC VAX, introduced in the late 1970s, supported virtual memory at the hardware level, making for some interesting addressing for someone used to much simpler assembly languages. It's about what the CPU is set up for, and whether there is a place to put the data that is swapped out of real memory.
XBLA downloads are kind of an odd thing, because they can have more specific system requirements than shrink-wrapped games. That doesn't change the fact that if you buy a game in the store for the XBox 360, you can run it on your XBox 360, whatever flavor of 360 you have.
Consoles are meant to be "appliance" type game devices. You want to play a game, you buy the game, you play it. They're accessible to the kind of people who wouldn't know what a GPU was if they found one in their corn flakes, and if you asked them how much RAM they had, they'd say they weren't sheep farmers. As a PC gamer, I've spent years studying the system requirements on boxes to ensure that my computer of the time would be able to run what was in that box. As a console gamer, I'm quite happy to know that the little green box I buy in GameStop (or at the flea market) will run on my 360, because it says XBox 360 on the box and that's all that matters.
But, again, that imposes limitations on the developers. That game in that box has to run on my 360 whether I bought it 8 years ago or just last week. And one of those limitations is that of virtual memory. I'm not going to research whether virtual memory management is practical on the 360; given that I've seen it done on a Sinclair ZX-81, it's certainly possible. But given that the minimum 360 specs do not require a storage medium that could be used for memory swapping, it's not a part of the target platform. It can't be required.
I see what you're saying; memory is definitely an issue, and not having reliable storage for virtual memory is definitely an issue. I'm just saying RAM is also a determining factor, since how fast the system has to unload RAM into virtual memory depends on how much RAM you have. When data can only unload and reload so fast, the amount of RAM can be a huge limit.
The Xbox, to my understanding, uses unified memory, so the only storage you'd actually have access to would be the hard drive; hardware-based virtual memory is out the window.
Really, it's that virtual memory was just thrown out the window at some point, likely to save cost in production.
The bottom line is that if it was practical to use larger worlds on the XBox 360, 4J would have done it. The programmers at 4J would not be in the line of work they're in if they were less competent than a random 13-year-old boy on the Internet. The reason world size is not just finite but small on the 360 is that the platform limitations prevent it from being otherwise, whatever those exact limitations may be.
It's not small because 4J wants it to be that way.
It's not small because 4J can't do obvious things to make it large/infinite.
It's small because it can't be large, given the hardware, software, and platform-owner-imposed limitations of the 360.
facepalm,
In computing, memory is RAM.
Hard disk or USB is storage.
Virtual Memory is the technique of moving less-needed pages of data (from RAM) to a swap file on storage (the hard drive) in order to free up space for more actively running data. This is handled at the operating system level (the CPU and RAM don't know what a disk is; that's what the operating system handles). The purpose is to present a uniform view of memory to an application, such that the application doesn't know whether it was stuffed into Virtual Memory or whether the PC actually ran out of RAM.
Also note that pages in the swap file are inactive. If an active program needs to address something in a page that was put in the swap file, the OS has to move it back to active memory (swapping something else out).
That round trip takes longer, and it's one of the reasons your hard drive light flashes and why Windows has been known as 'slow'.
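To make the swap-out/swap-in cycle concrete, here's a toy sketch in Python. The `PagedMemory` class, its sizes, and its names are all invented for illustration; no real OS works in a couple dozen lines, but the shape of the mechanism (evict the least recently used page to "disk", fault it back in on access) is the same:

```python
from collections import OrderedDict

class PagedMemory:
    """Toy model of OS paging: a tiny 'RAM' holds a few pages;
    everything else lives in a 'swap file' (here, just a dict)."""

    def __init__(self, ram_pages=3):
        self.ram = OrderedDict()   # page_id -> data, ordered by recency
        self.swap = {}             # page_id -> data, the 'disk'
        self.ram_pages = ram_pages
        self.faults = 0            # each fault = a slow disk round trip

    def store(self, page_id, data):
        self.swap[page_id] = data  # new pages start out on 'disk'

    def access(self, page_id):
        if page_id in self.ram:                # fast path: already resident
            self.ram.move_to_end(page_id)
            return self.ram[page_id]
        self.faults += 1                       # page fault: go to 'disk'
        if len(self.ram) >= self.ram_pages:
            victim, data = self.ram.popitem(last=False)  # evict LRU page
            self.swap[victim] = data
        self.ram[page_id] = self.swap.pop(page_id)
        return self.ram[page_id]

mem = PagedMemory(ram_pages=2)
for p in ("A", "B", "C"):
    mem.store(p, f"data-{p}")
mem.access("A"); mem.access("B")   # two faults: RAM started empty
mem.access("A")                    # hit: no disk access, A becomes most recent
mem.access("C")                    # fault: evicts B, the least recently used
print(mem.faults)                  # → 3
```

The `faults` counter is the whole story of why the hard drive light flashes: every one of those is a disk round trip the application never asked for.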
Virtual Memory was a useful technique in situations where RAM was expensive (not true anymore), apps were bloated, and usage patterns meant only one app was actively used at a time, so nobody noticed the memory swaps as the user switched between activities.
Virtual Memory has long been the bane of video gaming. In the Windows era, the practice of buying a metric crapton of RAM for your PC and turning off the swap file was one of the keys to improving game performance.
The next trick was the RAM disk: instead of using the hard drive as more memory, memory was used as a small virtual hard drive, because RAM access is always faster than disk access.
The modern solid-state drives of today are somewhat of an amalgamation of the RAM disk concept, in that an SSD is faster than an HDD but pricier (you don't see many 1TB SSDs). So using a 128GB SSD for your OS and some apps to boot off of is a good compromise, with user data stored on the larger, cheaper traditional HDD.
According to lore, the operating system of the DEC VAX was one of the earliest and most influential implementations of Virtual Memory (strictly speaking, the Manchester Atlas got there first back in the 1960s). Microsoft hired its principal architect, and suddenly NT had virtual memory. So magical was this transformation that allegedly there was code in the NT code-base that still had the DEC copyright notices on it. Apparently DEC and MS settled.
In MC, the concept of loading chunks that are near the player (and thus will be seen soon) and unloading chunks that the player has moved away from is akin to Virtual Memory, but under traditional terms it isn't actually Virtual Memory; it's just memory management by the game. Games always need to tightly manage memory and dispose of objects they aren't using at non-impactful times (garbage collection is one of the big performance hits).
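In spirit, that game-side chunk management looks something like this toy Python sketch. The `ChunkManager` class and its methods are invented names for illustration, not 4J's or Mojang's actual code; the point is that the *game*, not the OS, decides what gets loaded and unloaded:

```python
def chunks_in_range(player_chunk, radius):
    """Set of chunk coordinates within a square radius of the player."""
    px, pz = player_chunk
    return {(x, z)
            for x in range(px - radius, px + radius + 1)
            for z in range(pz - radius, pz + radius + 1)}

class ChunkManager:
    """Keeps only nearby chunks in memory. This is game-managed memory,
    not OS virtual memory: the game knows which data it will need soon
    (chunks near the player) and evicts the rest on its own schedule."""

    def __init__(self, radius=2):
        self.radius = radius
        self.loaded = {}            # (x, z) -> chunk data

    def update(self, player_chunk):
        wanted = chunks_in_range(player_chunk, self.radius)
        for coord in set(self.loaded) - wanted:   # player moved away: drop
            self.unload(coord)
        for coord in wanted - set(self.loaded):   # newly in range: bring in
            self.loaded[coord] = self.load(coord)

    def load(self, coord):
        return f"chunk{coord}"      # stand-in for generate-or-read-from-save

    def unload(self, coord):
        del self.loaded[coord]      # stand-in for save-to-disk and free

mgr = ChunkManager(radius=1)
mgr.update((0, 0))
print(len(mgr.loaded))    # → 9 (a 3x3 square of chunks around the player)
mgr.update((5, 5))        # far jump: all 9 old chunks drop, 9 new ones load
print(len(mgr.loaded))    # → 9
```

Note the key difference from real Virtual Memory: `update` runs when the game chooses (say, between frames), not whenever the OS decides a page fault must be serviced.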
LOL, I just had a CS 101 (Intro to Computer Science) flashback from my early college days almost 25 years ago...
Yeah, we've got a mix of people in here, and none of us knows everything; some of us don't know enough to be talking correctly about memory management even at a basic level.
"Design a memory management system" was an interview question I got at Microsoft.
Yep. That's why your previous post (2:20 PM yesterday) is essential. I don't have the patience anymore to explain it all that well. So thanks for that.
Oh look, it's Sheldon. I know RAM is memory; it's in the name, genius: "random access memory." We are talking specifically about having a reliable source of secondary memory/storage to be used for virtual memory, which is a form of memory management. A dur.
"Moving pages around when not needed": you mean the system that I explained "on a basic level" a few posts back? With no condescending tone or high-handed all-mightiness. Interesting. *facepalm*
You just took a far longer time to explain things I already have. Sure, I don't know all of the particulars, but I believe I explained the basic concept of virtual memory very well for those who aren't strictly educated in computer science, and without stroking my own ego at that.
In all the research I've done, the MC system has been referred to as "virtual memory," so excuse me if I don't take your word for it. Whether the game is handling it or not, the system sounds very similar, and by similar I mean near identical.
And correct me if I'm wrong (I know you will, you can't resist), but wouldn't having minimal RAM be an issue in a game where virtual memory is almost a necessity to bring it up to PC standards? That's what the OP actually asked about, not a lesson in CS. Since, as you state, RAM is the fastest form of memory, the virtual system would slow down the process while trying to reload and unload files, and having only 512 MB of RAM would essentially mean you'd constantly be swapping files. In terms of the game, say a 32x32 chunk area is loaded; as you moved, chunks would have to swap out at an almost constant rate to keep the area at 32x32. Having a slightly larger amount of RAM and "storage" would keep the system running more smoothly, since there would be a greater buffer between unloading and loading files; the way it stands, the virtual memory system would not be able to keep up. This is the point I was trying to make, while drugged up and sick.
Question: if RAM is the only mediating factor and virtual memory is pointless, then why is it that the PC's system requirements are 2 GB of RAM, yet the Xbox One, which has 8 GB, can't handle the "infinite" worlds the PC has?
Answer: because the virtual memory system is not present.
On paper the Xbox One could easily handle it, but it likely will not have it.
Sorry if my sentence structure is failing; I'm barely conscious and my head is killing me.
The problem with Virtual Memory in the proper sense and video games is that the operating system, not the video game, is in control of when page swapping happens. It's much the same reason automatic garbage collection in .NET (and sometimes Java) is also the bane of game programming.
Games are very timing-centric. You can only perform a certain number of tasks before you NEED to refresh the screen in order to maintain a desired frame rate. This is basically a loop. If on any iteration of the loop one or more tasks take longer (or extra work is inflicted on the task by the operating system), then the time between frame draws is delayed and frame-rate stutter occurs.
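That frame-budget loop can be sketched as a toy in Python (the `run_frames` helper and its numbers are made up for illustration; real engines do update-plus-render work instead of sleeping, but the budget arithmetic is the same):

```python
import time

FRAME_BUDGET = 1.0 / 60.0   # ~16.7 ms of work allowed per frame at 60 fps

def run_frames(workloads):
    """Toy frame loop: do each frame's work, then sleep off the rest of
    the budget. A frame whose work overruns the budget is a 'long frame';
    that's the stutter the player sees on screen."""
    long_frames = 0
    for work_seconds in workloads:
        start = time.perf_counter()
        time.sleep(work_seconds)        # stand-in for update + render work
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:      # budget blown (e.g. OS stole time)
            long_frames += 1
        else:
            time.sleep(FRAME_BUDGET - elapsed)   # idle out the frame
    return long_frames

# Three cheap frames, then one where something (say, an unexpected page
# swap or a garbage collection pass) eats 30 ms:
print(run_frames([0.002, 0.002, 0.002, 0.030]))   # → 1 long frame
```

This is why a single OS-initiated page swap mid-frame is so damaging: the budget is fixed by the refresh rate, and anything that overruns it is visible immediately.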
Though the ideas of Virtual Memory may be incorporated in a game (swapping content in and out is not a new idea, as nobody loads all the data for an entire game into RAM anyway), the specific term needs to be reserved for the operating system's instance of it, so one can intelligibly understand what another person means when they describe a problem with memory management.
As such, Virtual Memory refers specifically to the operating system's practice of hiding the actual location of memory content from the running application's address space. To the app, it has 5 pages of memory in use: some for code, some for data. When the app tries to access page 4, it doesn't know where page 4 is physically located. When the OS sees a request for page 4, it connects the requesting app to it, or fetches it back into RAM from the swap file if it was cached to disk. NT actually throws a page fault for not finding the memory, and a separate page-fault exception handler kicks in to repair the damage, throwing yet another delay into getting data back to the app that it thinks is already in memory.
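The translate-or-fault flow described above can be sketched as follows (toy Python; `PageTable`, the fixed frame number, and the 4096-byte page size are illustrative stand-ins, though 4KB is in fact a common real page size):

```python
PAGE_SIZE = 4096

class PageTable:
    """Toy model of the app's view: virtual pages map to physical RAM
    frames, or are marked swapped out, which triggers a slow fault path."""

    def __init__(self):
        self.entries = {}   # virtual page -> ("ram", frame) or ("disk", slot)

    def translate(self, virtual_addr):
        page, offset = divmod(virtual_addr, PAGE_SIZE)
        kind, where = self.entries[page]
        if kind == "disk":                    # page fault: fetch from swap
            where = self.swap_in(page, where)
        return where * PAGE_SIZE + offset     # physical address

    def swap_in(self, page, slot):
        frame = 7                             # pretend the OS freed frame 7
        self.entries[page] = ("ram", frame)   # page is resident again
        return frame

pt = PageTable()
pt.entries[4] = ("disk", 12)                # page 4 was swapped out earlier
phys = pt.translate(4 * PAGE_SIZE + 100)    # the app just asks for "page 4"
print(phys)            # → 28772 (frame 7 * 4096 + offset 100)
print(pt.entries[4])   # → ('ram', 7)
```

The app's request looks identical whether the page was resident or not; only the latency differs, which is exactly why the app "thinks" its data is already in memory.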
Since you couldn't be bothered to use RAM and storage separately or with a modicum of precision, it was not clear how much of the actual mechanism you understood. Given how many other non-CS people seem to think they can throw their technical weight around, if they didn't understand my basic explanation of memory management, then perhaps now they are better informed.
Otherwise, people who use the word "memory" imprecisely in a technical discussion sound like those people in the YouTube videos asking for the iPhone with the bigger GBs.
This may have been asked before and I'm sorry for asking a stupid question.
Why can't we just have a load screen when we hit the map boundaries? Lose the old map from the RAM, load a new map into the RAM, and link the two maps together in the save file.
OK, there would be some issues:
Continuity of the biomes.
Linking Nethers
Multiple Strongholds etc.
It wouldn't be infinite, as you would be restricted by how big you were prepared to let the save file get, but you could easily and massively increase the size of your world.
M
edit. Forgot to mention, as I always do on these posts: I'm perfectly happy with the world size.
I'm not a programming guru, but I think that would preclude having different people in different sections/areas of the world at the same time. In other Xbox games I've played that use load screens, all the characters in the game are moved to the new map at the same time. In Minecraft, people in the same "world" would probably eventually want to spread out over more than one section of each map.
On split screen that sounds like it makes sense.
Not sure about multiplayer though?
Each map would be loaded on the individual's Xbox, not multiple maps loaded on one machine. One player would be working on Map A while another player would be working on Map B. Both save files would need to be written independently, with a forced save if one player went from one map to the next.
Sounds complicated....
However, keep in mind that Live works a little differently than normal online servers. The host Xbox is not a full server, but is assigned some "server-like" functions, and there are currently no "full-size" rental servers available for use by Minecraft players (although many people have been requesting them, and 4J themselves said some time ago that they were looking into them). I think all this may mean that the world map is not fully downloaded from the host Xbox to the hard drives of the various guest Xboxes, but rather that smaller packets of information are continually sent back and forth from the host Xbox. That way, the changes made by all players are immediately evident to all players from the various angles they may be "viewing" the changes taking place, and it is the fact that the entire map is resident in the host Xbox's memory that enables the various players to be in different parts of the map at the same time. I also think this is why every player on the map is immediately kicked if the host player leaves the map, and why there are so many more lag issues with the multiplayer games being played online. I'm not sure that even the PC downloads worlds right to the various users' computers. I have a friend who plays PC Minecraft who monitors his internet connection and tells me that it continually sends and receives more data than any other game he plays online.
Also, we tend to freely bandy about the 512 MB of RAM figure, but we don't really know how much of that RAM is taken up by Live itself. The amount of RAM available for use by the game may be quite a bit less than 512 MB.
Again, my point is that it's not just one technical issue, but rather the combination of several technical issues involving what programming choices were made by ALL three partners (Mojang, 4J and Microsoft) over the years. For example, long before 4J came on the scene, Mojang made various choices about how the game was to be programmed in Java and Microsoft made programming choices involving how Live would handle online gameplay.
Since 4J are essentially a contract porter of the game rather than the creator/designer of the game and also not able to ignore whatever Microsoft demands relative to how the game functions through Live, I think they just tried to make the best they could of a bad situation. Yes, our worlds are small, but the game currently does function... and from what I can tell from various comments on the PC forums, it generally functions with fewer lag issues than occur on some PCs (all of which should have at least quadruple the RAM available as on the Xbox - i.e. the 2 GB minimum system requirement posts on Mojang's website).
As Geneo and I have both been saying, a rewrite of the game from scratch at this point in order to accommodate larger worlds on the Xbox 360 is just not in the cards; but larger worlds will be available on the Xbox One and PS4. Furthermore, I LIKE the small worlds the Xbox has since I can gear individual worlds to individual themes and have a hope of eventually "finishing" a world based on a singular theme. I also feel freer to start a fresh world each time I get an idea for a fresh theme. IMO, people who really feel that they need seemingly "infinite" worlds should really stop wasting their breath complaining here and just break down and buy the game for the PC and play it there.
As an addition to what UpUp said, I've been talking to the devs of an Indie block game, and the story is even more woeful.
For Indie games at least, once XNA and all the system stuff are in memory, there's only about 256MB to play with. They have "infinite" worlds in their game, with a lot of chunk swapping going on, but they're really challenged by that memory limit.
Arcade games have it a little nicer, not carrying the baggage of XNA and being in native C++, but it's still no picnic. It is a tight fit.
Now with the "other" game with infinite worlds, the host xbox has to transmit chunks to the other players, and as those players do stuff, the changes are not always correctly sent back to the host xbox. So if the host quits, changes get lost sometimes.
Basically, getting infinite worlds and multiplayer into the tight confines of the 256MB of active memory they get is a new and differently complicated problem.
MC360 seems to ease out of that by apparently loading all/most active chunks at once, but by keeping a size limit, it ensures a maximum number of chunks that might need to be loaded.
As a non-video-game programmer, I'd say the initial problem of "keeping the data correct on all the Xboxes" isn't supposed to be that hard. There are methodologies for it, and it's been done in zillions of games. But voxel games have LOTS of data that can change, and in video games timing and performance are critical, so very creative solutions are needed to optimize the speed of dealing with that data, taking shortcuts where possible.
I'm inclined to suspect that MC360 has done all the clever stuff it can. Odds are good, they came to the World Size choice by trial and error, not just arbitrarily picking the smallest number they thought they could get away with.
Rollback Post to RevisionRollBack
To post a comment, please login or register a new account.
i see what our saying, memory is definitely an issue and not having a reliable memory for virtual memory is definitely an issue, im jsut saying ram is also a determining factor since how fast it has to unload the ram into the virtual memory is reliant on how much ram you have. when data can only unload and upload so fast then the amount of ram can be a huge limit.
the xbox to my understanding utilizes unified memory thus the only memory you actually have access to would be the HD, so hardware based virtual memory is out the window.
really its that virtual memory was just thrown out the window at some point. likely to save cost in production.
-
View User Profile
-
View Posts
-
Send Message
Retired StaffIt's not small because 4J wants it to be that way.
It's not small because 4J can't do obvious things to make it large/infinite.
It's small because it can't be large, given the hardware, software, and platform-owner-imposed limitations of the 360.
The golden age: it's not the game, it's you ⋆ Why Minecraft should not be harder ⋆ Spelling hints
facepalm,
In computing memory is RAM.
Hard disk or USB is storage.
Virtual Memory is the technique of moving less needed pages of data(from RAM) to a swap file on Storage (hard drive) in order to free up space for more actively running data. This was handled at the operating system level (because the CPU and RAM don't know what disk is, that's what the Operating System handles). The purpose was to present a uniform view of memory to an application, such that an application did not know if it was being stuffed into Virtual Memory or if the PC was actually out of RAM.
Also note, that pages in the swap file were inactive. If an active program needed to address something in a Page that was put in the swap file, the OS had to move it back to active memory (swapping something else out).
This basically took longer, and is one of the reasons why your hard drive light flashes and why Windows has been known as 'slow'
Virtual Memory was a useful technique in situations where RAM was expensive (not true anymore), apps were bloated, and usage was such that with one app being actively used at a time, nobody would notice the memory swaps as a user changed context of their activities.
Virtual Memory has long been the bane of video gaming. In the Windows era, the practice of buying a metric crapton of RAM for your PC and turning off the swap file was one of the keys to improving game performance.
The next was RAM Disk, where instead of using hard drive as more memory, memory was used as a small, virtual hard drive because RAM access is always faster than disk access.
The modern static disk drives of today are somewhat of a amalgamation of the RAM disk concept, in that SDD is faster than HDD, but pricier (you don't see many 1TB SDD drives). So using an 128GB SDD for your OS to boot off of and some apps is a good compromise, with user data stored on the larger, cheaper traditional HDD.
According to lore, the DEC VAX if I recall was the first to implement Virtual Memory. Microsoft hired the principal architect of it and suddenly NT had virtual memory. So magical was transformation this that allegedly there was code in the NT code-base that still had the DEC copyright notices on it. Apparently DEC and MS settled.
In MC, the concept of loading chunks that are near the player (and thus will be seen soon) and unloading chunks that the player is now farther away from is akin to Virtual Memory, but under traditional terms, isn't actually Virtual Memory, it's just memory management by the game. Games always need to tightly manage memory and dispose of objects they aren't using at non-impactful times (as garbage collecting is one of the big performance hits).
LOL, I just had a CS 101 (Intro to Computer Science) flashback from my early college days almost 25 years ago...
Yeah. we got a mix of people in here and all of us don't know everything, some of us don't know enough to be talking correctly about the subject of memory management on a basic level.
design a memory management system was an interview question I got at Microsoft.
Yep. That's why your previous post (2:20 PM yesterday) is essential. I don't have the patience anymore to explain it all that well. So thanks for that.
oh look it's Sheldon, i know ram is memory, its in the name genius, "random access memory" we are talking specifically about having secondary memory/storage of a reliable source to be used for virtual memory, which is a form of memory management. a dur.
"moving pages around when not needed" you mean the system that i explained "on a basic level" a few posts back? with no condescending tones or high handed all-mightiness. interesting. *face palm*
you just took a far longer time to explain things i already have, sure i don't know all of the particulars, but i believe i explained the basic concept of virtual memory very well for those who aren't strictly educated in computer sciences, and without stroking my own ego at that.
in all the research ive done the MC system has been referred to as "virtual memory", so excuse me if i don't take your word for it. whether the game is handling it or not, sounds like the system is very similar, and by similar i mean near identical.
and correct me if im wrong, i know you will you cant resist, but wouldn't having minimal ram be an issue in a game where actually having virtual memory is almost a necessity, to bring it up to pc standards that is, which is what the op actually asked about not a lesson in CS. since as you state ram is the fastest form of memory. and the virtual system would slow it down the process while trying to re load and unload files, only having 512 mb of ram would essentially mean you would constantly be swapping files. in terms of the game say a 32x32 chunk area is loaded. while the rest would have to swap out at an almost constant rate as you moved, another 32 chunks keeping the area always 32x32, having a slightly larger source of ram and "storage" would keep the system running more smoothly, since it would have a greater buffer between unloading and loading files essential the way it stands the virtual memory system would not be able to keep up. this is the point i was trying to make, while drugged up and sick.
question: if ram is the only mediating factor, and virtual memory is pointless then why is it that the system requirements are 2 gb of ram and the xbone, which has 8 gb, cant handle the "infinite" worlds the pc has.
answer: because the virtual memory system is not present.
on paper the xbone could easily handle it. but it likely will not have it.
sorry if my sentence structure is failing im barely concious and my head is killing me,
The problem with Virtual Memory in the proper sense and video games, is that the operating system, not the video game is in control of when Page Swapping happens. Much like why automatic Garbage Collection in .NET (and sometimes Java) are also the bane of game programming.
Games are very timing centric. You can only perform a certain number of tasks before you NEED to refresh the screen in order to maintain a desired frame rate. This is basically a loop. If on any iteration of the loop, one or more tasks take longer (or are inflicted upon the task by the operating system), then the time between frame draws is delayed and frame rate stutter occurs.
though the ideas of Virtual Memory may be incorporated in a game (swaping content in and out is not a new idea as nobody loads all the data for an entire system into RAM anyway), the specific term needs to be reserved for speaking about the Operating System's instance of it, so one can intelligibly understand what another person means when they describe a problem with memory management.
As such, Virtual Memory refers specifically to the practice of the Operating System of obfuscating the location of memory content to the running application's Application Space. To the app, it has 5 pages of memory used. Some for code, some for data. When the App tries to access page 4, it doesn't know where page 4 is located. When the OS sees a request for Page 4, it connects the requesting App to it, or fetches it back into RAM from the swap file if it was cached to disk. NT actually throws a Page Fault for not finding the memory, and a separate Page Fault Exception handler kicks in to repair the damage. Thus throwing another delay into getting that data back to the App that it thinks is already in memory.
Since you couldn't be bothered to use "RAM" and "storage" with a modicum of precision, it wasn't clear how much of this you actually understood. Given how many non-CS people seem to think they can throw their technical weight around, if they can't follow my basic explanation of memory management, then perhaps now they are better informed.
Otherwise, people who use the word "memory" loosely in a technical discussion sound like those people in the YouTube videos asking for the iPhone with the bigger GBs.
Why can't we just have a load screen when we hit the map boundaries? Drop the old map from RAM, load a new map into RAM, and link the two maps together in the save file.
OK, there would be some issues:
Continuity of the biomes.
Linking Nethers
Multiple Strongholds etc.
It wouldn't be infinite, as you would be restricted by how big you were prepared to let the save file grow, but you could easily and massively increase the size of your world.
M
Edit: forgot to mention, as I always do on these posts, that I'm perfectly happy with the world size.
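For what it's worth, the boundary-swap idea a couple of posts up could be sketched roughly like this. This is purely hypothetical Java (made-up names, strings standing in for map data) and it glosses over every issue listed: biome seams, Nether linkage, multiple strongholds:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of "load screen at the map edge": one map section resident
// at a time; crossing a boundary flushes it to the save file and loads
// (or generates) the neighbouring section.
public class LinkedMaps {
    private final Map<String, String> saveFile = new HashMap<>(); // all sections, "on disk"
    private String residentId;   // the one section currently in RAM
    private String residentData;

    LinkedMaps(String startId, String startData) {
        saveFile.put(startId, startData);
        residentId = startId;
        residentData = startData;
    }

    // Hitting the boundary: save the old section, load or create the new one.
    void crossBoundary(String newId) {
        saveFile.put(residentId, residentData);  // flush old map to the save file
        residentData = saveFile.computeIfAbsent(newId, id -> "fresh terrain");
        residentId = newId;
    }

    String current() { return residentId + ": " + residentData; }

    public static void main(String[] args) {
        LinkedMaps maps = new LinkedMaps("A", "spawn area");
        maps.crossBoundary("B");   // load screen here
        maps.crossBoundary("A");   // edits to A survived the round trip
        System.out.println(maps.current()); // prints: A: spawn area
    }
}
```

Even in this toy form you can see the multiplayer objection raised below: there is exactly one resident section, so two players can't be on sections A and B at the same time.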
I'm not a programming guru, but I think that would preclude having different people in different sections/areas of the world at the same time. In other Xbox games I've played that use load screens, all the characters in the game are moved to the new map at the same time. In Minecraft, people in the same "world" would probably eventually want to spread out over more than one section of each map.
If a bigger world is that important... get an Xbox One.
(Remember: even that version won't have "infinite" worlds. The 360 just won't, period.)
On split screen that sounds like it makes sense.
Not sure about multiplayer though?
Each map would be loaded on the individual's Xbox, rather than multiple maps loaded on one machine. One player would be working on Map A while another player would be working on Map B. Both save files would need to be writable independently, with a forced save whenever a player went from one map to the next.
Sounds complicated....
However, keep in mind that Live works a little differently than normal online servers. The host Xbox is not a full server, but is assigned some "server-like" functions, and there are currently no "full-size" rental servers available for use by Minecraft players (although many people have been requesting them, and 4J themselves said some time ago that they were looking into them). I think all this may mean that the world map is not fully downloaded from the host Xbox to the hard drives of the various guest Xboxes, but rather that smaller packets of information are continually sent back and forth from the host Xbox.

That way, the changes made by all players are immediately evident to all players from whatever angles they may be "viewing" the changes taking place, and it is the entire map being resident in the host Xbox's memory that enables all the various players to be in different parts of the map at the same time. I also think this is why every player on the map is immediately kicked if the host player leaves the map, and why there are so many more lag issues with multiplayer games being played online. I'm not sure that even the PC version downloads worlds right to the various users' computers. I have a friend who plays PC Minecraft and monitors his internet connection; he tells me it continually sends and receives more data than any other game he plays online.
Also, we tend to bandy the 512 MB of RAM figure about freely, but we don't really know how much of that RAM is taken up by Live itself. The amount of RAM actually available to the game may be quite a bit less than 512 MB.
Again, my point is that it's not just one technical issue, but rather the combination of several technical issues involving what programming choices were made by ALL three partners (Mojang, 4J and Microsoft) over the years. For example, long before 4J came on the scene, Mojang made various choices about how the game was to be programmed in Java and Microsoft made programming choices involving how Live would handle online gameplay.
Since 4J are essentially a contract porter of the game rather than its creator/designer, and are also not able to ignore whatever Microsoft demands relative to how the game functions through Live, I think they just tried to make the best of a bad situation. Yes, our worlds are small, but the game currently does function... and from what I can tell from various comments on the PC forums, it generally functions with fewer lag issues than occur on some PCs (all of which should have at least quadruple the RAM available compared to the Xbox, i.e. the 2 GB minimum system requirement posted on Mojang's website).
As Geneo and I have both been saying, a rewrite of the game from scratch at this point in order to accommodate larger worlds on the Xbox 360 is just not in the cards; but larger worlds will be available on the Xbox One and PS4. Furthermore, I LIKE the small worlds the Xbox has since I can gear individual worlds to individual themes and have a hope of eventually "finishing" a world based on a singular theme. I also feel freer to start a fresh world each time I get an idea for a fresh theme. IMO, people who really feel that they need seemingly "infinite" worlds should really stop wasting their breath complaining here and just break down and buy the game for the PC and play it there.
I haven't got anywhere near utilising all of my current world.
For Indie games at least, once XNA and all the system stuff are in memory, there's only about 256 MB to play with. Some of them have "infinite" worlds, with a lot of chunk swapping going on, but they're really challenged by that memory limit.
Arcade games have it a little nicer, not carrying the baggage of XNA and being in native C++, but it's still no picnic. It is a tight fit.
Now with the "other" game with infinite worlds, the host xbox has to transmit chunks to the other players, and as those players do stuff, the changes are not always correctly sent back to the host xbox. So if the host quits, changes get lost sometimes.
Basically, it's a new and different kind of complicated problem to get infinite worlds plus multiplayer into the tight confines of the roughly 256 MB of active memory they get.
MC360 seems to sidestep that by apparently loading all or most active chunks at once; by keeping a size limit, it guarantees a maximum number of chunks that might ever need to be loaded.
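A rough back-of-the-envelope shows how a fixed world size caps the chunk count. The 864-block edge length is a commonly reported figure for 360 worlds, and the per-chunk cost is a pure guess; treat both as assumptions for illustration, not official numbers:

```java
// Back-of-the-envelope: a fixed world size means a hard cap on chunks,
// and therefore a predictable worst-case memory footprint.
public class ChunkBudget {
    public static void main(String[] args) {
        int worldBlocks = 864;              // reported 360 world edge (assumption)
        int chunkBlocks = 16;               // a chunk is 16x16 blocks horizontally
        int chunksPerSide = worldBlocks / chunkBlocks;    // 54
        int totalChunks = chunksPerSide * chunksPerSide;  // 2916
        int bytesPerChunkGuess = 80 * 1024; // purely hypothetical per-chunk cost
        long totalBytes = (long) totalChunks * bytesPerChunkGuess;
        System.out.println(totalChunks + " chunks, ~"
                + totalBytes / (1024 * 1024) + " MB"); // prints: 2916 chunks, ~227 MB
    }
}
```

Even with that guessed per-chunk cost, the whole world lands in the same ballpark as the ~256 MB working space described above, which is consistent with the idea that the size limit is what makes "load everything" feasible at all.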
As a non-video-game programmer, I'd say the initial problem of "keeping the data correct on all the Xboxes" isn't supposed to be that hard. There are methodologies for it, and it's been done in zillions of games. But voxel games have LOTS of data that can change, and in video games timing and performance are critical, so very creative solutions are needed to optimize the speed of dealing with that data and to take shortcuts when possible.
I'm inclined to suspect that MC360 has done all the clever stuff it can. Odds are good, they came to the World Size choice by trial and error, not just arbitrarily picking the smallest number they thought they could get away with.