Let's be honest: multiplayer outside of split screen pretty much sucks. I have good Wi-Fi, and on most servers it looks like everyone is teleporting around like Endermen. The "servers" for the Xbox 360 are so sad... it's such a sloppy multiplayer system. So why even bother adding multiplayer to a game that can only handle single-player worlds?
What's even worse is that the newer consoles are not much better. Same deal: more than 3 or 4 people will lag the game. More than once I've been on a world where I was in the Nether with one other player, two more were at the spawn area, and the game started super teleport-lagging and then froze or crashed. The only thing you can do is turn it off.
It's enjoyable on a small scale...then everything goes terrible when the Fire Na--more than two people are in the world.
I keep my map series to the minimum required number of players (either two or three), with few exceptions, and keep all animations such as water flow and redstone particles to a minimum, which meant a lot of caving and lighting in the second map...
It helps to some degree, but the lag can still become obnoxious, even with only one other person and me in the world.
You know, I don't understand why so many players have such lag issues when they play multiplayer in Minecraft. Can somebody explain why there are so many issues? I haven't played multiplayer for a little while, but I used to constantly host games with up to six players at a time, and we would never lag or have the issues many players describe having.
Because the people whining have crappy internet (or their friends do).
To play this game online, you need first-world internet, not "barely better than dial-up."
Go to testmy.net and find out your speeds. If one of your players has low numbers, he's the weak link.
This./\
But not always. Sometimes people have very good infrastructure but their network settings are back-asswards and don't utilize the speed. I used to play with a buddy of mine - just him and me on a world - and I would get worse lag than when I played with a group of 5 or 6.
Actually, our team has done extensive testing on this, and our findings may just boggle your mind.
Now, we all know that the console version loads and keeps chunks resident, meaning that once a chunk is loaded it remains loaded until either the console reboots or the host restarts the map by whatever means.
Our findings show that if the users remain close together and don't load a lot of chunks, the game behaves beautifully with next to no lag (even with heavy redstone contraptions running in close proximity to the users). As more and more chunks become loaded, the game starts to stutter for the connected users (i.e., not the host).
What we think is happening is that as chunks are loaded, they get sent to all users regardless of their location. What I mean is: say we have 3 users connected, 1 of which is the host. The host stays put, user 1 heads right, and user 2 heads left. As user 1 loads chunks, those chunks are loaded first by the host and then sent to both user 1 and user 2, and the same happens for user 2. This would account for a CRAPload of both load and send time, and hence the lag we are experiencing. If a user 3 connects, ALL loaded chunks are streamed to this user, beginning with the chunks around his spawn location.
We base our findings on the data streams that were sent to clients. In our case, we set it up so that user 2 remained still while user 1 headed left to load chunks; the data streams for users 1 and 2 remained pretty much the same, while the host had almost 2x more 'send' than the other 2 clients. When a new user entered and all users remained still, the new user's data stream was almost equal to the entire amount of data sent to either user 1 or 2.
This was tested quite a while back and may have changed, but I doubt it. It may change in the future. Thus the issue is that clients receive loaded chunks regardless of their location in the world. It may be done this way for a reason. Hopefully in due time 4J addresses the extreme bandwidth that MC uses.
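To make the "almost 2x more send" observation concrete, here's a back-of-envelope model of the broadcast behavior described above. Everything here (the function names, the 40 KB per-chunk figure, the chunk counts) is an illustrative assumption, not actual game code:

```python
# Model: every chunk any player loads is sent by the host to EVERY client
# (the suspected behavior), vs. only to the client that needs it.

def host_send_broadcast(chunks_loaded_per_player, num_clients, chunk_kb=40):
    """Total KB the host uploads if every loaded chunk goes to every client."""
    total_chunks = sum(chunks_loaded_per_player)
    return total_chunks * num_clients * chunk_kb

def host_send_targeted(chunks_loaded_per_player, chunk_kb=40):
    """Total KB if each client only receives the chunks it actually loads."""
    return sum(chunks_loaded_per_player) * chunk_kb

# Host stays put; user 1 and user 2 each wander off and load 50 chunks.
loads = [50, 50]
print(host_send_broadcast(loads, num_clients=2))  # 8000 KB (100 chunks x 2 clients)
print(host_send_targeted(loads))                  # 4000 KB (100 chunks x 1 recipient each)
```

Under this toy model the host's upload is exactly double what targeted delivery would need, which lines up with the host showing ~2x the 'send' of the other clients in the test above.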
It might be that some college kids have >1.5 Mbit upload, since their dorm and campus likely run on proper internet backbones with equal upload and download rates, but regular DSL seems limited to that. Comcast has some good plans with faster upload.
I don't know if Live has a limit per se, but it's my understanding that once the host is selected, the bulk of the traffic goes from host to the other players, not from host to Live to players. I haven't done any traffic monitoring to confirm that...
In most cases, though, look at everyone's speeds. Make the host the guy with the best upload, as the host has to send out the most data.
Cire360's info seems plausible, so sticking nearby is a good idea to avoid unneeded chunks that have to get sent to everybody.
Simple... because it probably would not have sold nearly as many copies without it, even though it doesn't work perfectly. It's not ideal, but as others have stated, there are ways to play that minimize the lag (e.g., keeping the group close together on the map).
Well, there are steps that 4J could take to help the issue as well. IF I am correct about the internal workings, then a few slight changes to the code could cut down considerably on the amount of bandwidth being sent. The biggest would be adding a simple flag (which, I might add, already exists on chunks) indicating whether a chunk is 'dirty', plus a second bit indicating that it's simply 'not loaded' yet. This way, chunks wouldn't need to be sent to users who are nowhere near the area. Instead, the client would first generate chunks locally from the 'seed', which is sent to the user upon connecting. Chunks within render distance would be 'requested' from the host and marked 'unloaded'; as the block offsets for each chunk arrive, it would be marked loaded. As the user moves, say, north, the new chunks that need to be rendered (a much smaller number, something like 6 to 8) are generated from the seed, requested from the host, and then marked loaded.
The host would then only need to send a 'dirty' state change to connected users: if those chunks are already loaded, they can be marked dirty. Should those chunks be within render distance, they would be 'reloaded'; otherwise, they wouldn't be loaded until the user comes within render distance.
To sum it up: clients would only load chunks that they need, and wouldn't receive dirty chunks unless they are needed again, where 'needed' means within render distance.
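The scheme described above can be sketched as a small state machine on the client side. This is purely an illustration of the idea, not 4J's code; the class, the state names, and the render-distance math are all assumptions:

```python
from enum import Enum

class ChunkState(Enum):
    UNLOADED = 0  # generated locally from the seed, no host data yet
    LOADED = 1    # host's block offsets received and applied
    DIRTY = 2     # host reported edits; refetch when back in render distance

class ClientChunkCache:
    """Sketch of the request-on-demand chunk scheme described above."""
    def __init__(self, render_distance=2):
        self.render_distance = render_distance
        self.states = {}  # (cx, cz) -> ChunkState

    def chunks_to_request(self, px, pz):
        """Chunks within render distance of (px, pz) that still need host data."""
        r = self.render_distance
        needed = []
        for cx in range(px - r, px + r + 1):
            for cz in range(pz - r, pz + r + 1):
                if self.states.get((cx, cz), ChunkState.UNLOADED) != ChunkState.LOADED:
                    needed.append((cx, cz))
        return needed

    def on_host_data(self, coord):
        self.states[coord] = ChunkState.LOADED

    def on_dirty_notice(self, coord):
        # The host sends only a tiny dirty flag; full chunk data is
        # refetched lazily, and only if the chunk re-enters render distance.
        if self.states.get(coord) == ChunkState.LOADED:
            self.states[coord] = ChunkState.DIRTY
```

With a render distance of 2 chunks, the first request at the spawn chunk pulls a 5x5 window (25 chunks), but stepping one chunk north only requests the 5 new chunks on the leading edge, which matches the "much smaller number" point above.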
I wasn't implying that there wasn't anything 4J could do if they're so inclined. The players, however, have already complained about it (so 4J does know about the problem), and still no one can "force" 4J to do anything about it... so, might as well work around it as best one can. (shrug)
By the way, is that Bandit Country thing you posted about still going, or are there a bunch of hopeless hopefuls posting there every day, needlessly bumping a dead world?
(If it is the latter then you may want to ask for the topic to be locked.)
If I'm right, what happens when a friend wants to try Minecraft Xbox 360 but doesn't have an Xbox 360, or his parents don't want to buy one? You invite that friend over. Before he arrives, you get your extra controller and HD cable out of the attic, make an extra account for him, plug the HD cable into the slot, and switch to HDMI 1 or 2. Then when he comes over, you can play Minecraft Xbox 360 together without his or her parents spending any money!
I think you are right about how the data is being processed, but to add to this: if the game moved further away from a client/server interface and toward a peer-to-peer architecture, then each player's console could share the burden of running the game when players drift off into different parts of the Overworld or even into other realms. The critical things to track for save purposes would be the periodic block updates/changes made on non-host consoles, plus player stats, coordinates, and inventories, all passed back to the host player's console so the data can be reconstructed if a player disconnects, for save-game and rejoin purposes.
Some data might be lost between updates under this design, but the overall lag would decrease tremendously: each console would only have to keep track of its immediate corner of the world and would largely not have to worry about players outside its zone until they moved within range of each other, or would just apply the tabulated block updates to the areas other players had impacted in their wake.
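The core of that zone-sharing idea is just deciding which console "owns" which chunks. A minimal sketch, assuming a nearest-player rule (the function name, coordinates, and distance rule are all illustrative, not anything from the actual game):

```python
# Each chunk is simulated by the console of whichever player is closest,
# so consoles far apart each handle only their own corner of the world.

def assign_chunk_owners(chunk_coords, player_positions):
    """Map each (cx, cz) chunk to the player whose position is nearest it."""
    owners = {}
    for (cx, cz) in chunk_coords:
        owners[(cx, cz)] = min(
            player_positions,
            key=lambda p: (player_positions[p][0] - cx) ** 2
                        + (player_positions[p][1] - cz) ** 2,
        )
    return owners

# Two players far apart: each console ends up owning the chunks near it.
players = {"host": (0, 0), "guest": (100, 0)}
chunks = [(1, 0), (2, 3), (98, 1), (101, -2)]
print(assign_chunk_owners(chunks, players))
# → {(1, 0): 'host', (2, 3): 'host', (98, 1): 'guest', (101, -2): 'guest'}
```

The host would then only need the owning console's periodic update stream for each zone, rather than simulating and broadcasting everything itself.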