The reasoning seems to be based on the opinion that the game isn't, never was, and seemingly never will be what someone thinks it should be.
You've got to admit, though, that some of the updates we've been getting have been lackluster, and mods on Java Edition do demonstrate that more performance could be squeezed out of the current vanilla game before it would be fair to tell people experiencing lag that they simply don't have a good PC.
As has been mentioned before, since PCs do not have unlimited resources, Mojang needs to look at areas of the game that could be handled more efficiently.
Why are chests, blocks which remain mostly static or inanimate until used, such a huge burden? And why can't some of their rendering be offloaded to the GPU or to other CPU cores? They shouldn't be causing FPS drops anywhere near as severe as they do; the fact that they do cause people problems is reason enough to expect Mojang to look into what could be done.
I'd rather the existing issues with the game be patched before Mojang piles more content on top. It seems the more updates we get, the more bloated the game becomes; 1.18 was a clear example, although to be fair, Caves and Cliffs generates more terrain, which made the increased hardware demands logical.
Quote: "Why are chests, blocks which remain mostly static or inanimate until used, such a huge burden? and why can't some of the rendering of them be offloaded to the GPU or other CPU cores? [...]"
Except, do you really need thousands of chests? This is a cube of 4096 chests. Even my first world, with millions of resources accumulated over half a year of playtime (most of it spent caving), only has 625, which can store more than a million individual items, or nearly 10 million resources stored as blocks of 9 each. Around 1/10 of the chests are used for unstackable items like enchanted books, music discs, diamond horse armor, and potions dropped by witches, and many of the rest are empty, as I create corridors of 16 double chests (32 chests) at a time. If you really need more, you must be using automated farms to collect resources in quantities you could never hope to use. The same even goes for myself, with all manually collected resources, as evidenced by the need for such large-scale storage of millions of items: over 2 million coal, 3/4 of a million iron, and even over half a double chest of diamond blocks:
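The capacity figures above check out; here is a quick sketch of the arithmetic, assuming the standard sizes (27 slots per single chest, stacks of 64, and storage blocks that compress 9 items each):

```python
# Storage capacity of 625 single chests, counted as individual items
# and as "blocks of 9" (e.g. iron/gold/diamond blocks).
CHESTS = 625
SLOTS_PER_CHEST = 27   # single chest; a double chest is 54
STACK_SIZE = 64
ITEMS_PER_BLOCK = 9

individual_items = CHESTS * SLOTS_PER_CHEST * STACK_SIZE
as_blocks = individual_items * ITEMS_PER_BLOCK

print(individual_items)  # 1080000 -> "more than a million individual items"
print(as_blocks)         # 9720000 -> "nearly 10 million stored as blocks of 9"
```

This ignores the ~1/10 of chests holding unstackables, so the real totals are a bit lower, but the orders of magnitude match the post.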
Tile entities are outlined in yellow; the bar at the top spans the distance from one end to the other (it looks longer due to the perspective; the underground storage area is at y=58), which is more than the 64 block render distance of tile entities, so not every one will be visible unless you are looking down from high in the air, mitigating the impact of rendering them (the entire area is a single layer, as opposed to a 3D structure like the cube of chests):
If they are that much worse in newer versions, then that's just another sign of how bad their coding is, especially since 1.17 completely eradicated the use of ancient OpenGL rendering that was already obsolete the day Notch started making the game, and which is responsible for so many graphical issues on older versions, as driver makers don't bother testing it anymore. If you have an Intel GPU, you can't even play 1.6.4 without massive graphical glitching unless you downgrade to a driver from years ago (which will soon, if not already, no longer be feasible, since other software needs newer drivers). All "fixed-function" OpenGL rendering is actually being emulated, since there is no hardware support of any kind, and as you know, emulation is slower. But then why is the newer rendering system slower? Or, why have people made mods for Java, which runs in a virtual machine, that outperform Bedrock, written in native C++? Because the code you write is a major factor that affects performance*:
Minimum Requirements:
GPU : Intel GMA 950 or AMD Equivalent with OpenGL 1.2 Support
Deprecated features include:
All fixed-function vertex and fragment processing
Direct-mode rendering, using glBegin and glEnd
Display lists
Indexed-color rendering targets
OpenGL Shading Language versions 1.10 and 1.20
(It is important to note that "shader-based rendering" has nothing to do with "shaders" mods like SEUS, which are well known to have a major impact on performance because of their enhanced visual effects. In this context a "shader" is a program the GPU executes to perform some task, much like how CPUs work. The old "fixed-function" rendering dates from when everything was hardwired into the GPU, and while dedicated hardware can be faster, it is also much more limited; such functions have been emulated with shaders on any GPU from the past 20 years or more, so in the end it is all the same.)
*Case in point: vanilla 1.6.4 gets only half the FPS because it renders chests less efficiently; even Optifine doesn't make this optimization (and is in fact basically useless outside of reducing the impact of chunk updates, which still had a significant impact on my old computer, though I never saw evidence of its "doubles FPS" claim. Nor does it do anything about memory usage; vanilla/Optifine use 3x more memory on an otherwise empty Superflat world):
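One concrete optimization of the kind being described (the numbers and function names here are illustrative, not Mojang's actual rendering code): if mostly-static chests were baked into each chunk section's vertex buffer instead of each being drawn as its own block entity, the draw-call count would collapse from one per chest to one per section:

```python
# Illustrative draw-call counts: per-block-entity rendering vs. batching
# static chest geometry into 16x16x16 chunk-section meshes.

def draw_calls_per_entity(chests: int) -> int:
    # Roughly how block entities render: one draw per visible chest.
    return chests

def draw_calls_batched(chests: int, chests_per_section: int = 64) -> int:
    # One draw per chunk section containing chests, assuming a
    # hypothetical dense layout of ~64 chests per section.
    return -(-chests // chests_per_section)  # ceiling division

n = 4096  # the 16x16x16 cube of chests from the example above
print(draw_calls_per_entity(n))  # 4096 draw calls
print(draw_calls_batched(n))     # 64 draw calls
```

The animated lid still needs special handling when a chest opens, but a closed chest is static geometry, which is the point being made about code mattering more than hardware.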
Quote: "You've got to admit though some of the updates we've been getting have been lackluster, and mods in Java edition do demonstrate that more performance could be squeezed out of the current vanilla game [...]"
Right, and I agree with that.
But the game having issues isn't exclusive to Mojang, to Minecraft, or to PC games in 2023. I saw this with Minecraft in 2012 when I first started playing, so it's not at all exclusive to today. I've seen this with countless other PC games over the span of decades, so it's not at all exclusive to Minecraft or Mojang. If Minecraft and Mojang were the outlier and the rest of PC gaming throughout the years never had issues like this, I'd agree, but that's not the case.
And I'm sure this has been said before, with the reasons why, but you can't necessarily point to the landscape of mods (both content and technical/performance fixes) as if it's supposed to suggest Mojang could do better, because it doesn't work that way. Of course Mojang COULD do better. But that's not mutually exclusive with the fact that expecting things to match what "could be", based on what the landscape of mods shows, just isn't how things work in our world. Our world and system encourages this; in other words, "good enough" rather than "most efficient" is often the target. I'm not saying that's how it should be in an ideal world; just saying how it is in ours.
Again, I've said it before, and in this very thread I think, but I absolutely do think Mojang could do better on performance. But I also said Minecraft is by its nature a CPU-heavy game, and what's common is people trying to play a CPU-heavy game on devices with less CPU power than typical gaming PCs have, which typically need less CPU power and more GPU power. So seeing this game have a reputation for being taxing is not at all surprising to me. And yes, I'm well aware that devices that aren't necessarily lower powered also have issues; again, neither here nor there. We don't have infinite hardware performance on tap, so it's common thinking that "high end hardware" shouldn't have issues, and it irritates me when I see people complain and claim that they should always get perfect performance and have no issues because they have "high end hardware", when that's an arbitrary standard.
But anyway, the subject of performance is very much a separate subject from what this thread is suggesting, which is more or less along the lines that Minecraft should just stop updating because it's not trying to change into whatever the thread starter's idea is of what it should become. And I highly disagree with that idea. And ironically, even if the thread starter did mention performance issues as one of the issues with the game (which it is!), their big thing was that triple A titles "do things better", yet newer triple A titles are having MAJOR issues with technical and performance matters on the PC side lately. The newest consoles have led to a big jump in hardware requirements. Shader compilation/stutter is and has been an increasing thing on PCs but not consoles (due to them being a fixed set of hardware). And so on.
As for the content, I don't find it lackluster. Relatively speaking, 1.13+ has been much improved compared to the updates before that. Now, maybe Minecraft isn't exactly the game I'd personally want it to be, but that's another subject altogether, and I definitely don't think content updates, which have only improved the game for me, should stop just because my own personal idea of what the game should be might be different.
Quote: "Except, do you really need thousands of chests? This is a cube of 4096 chests [...] if you really need more you must be using automated farms to collect resources in quantities that you could never hope to use [...]"
It depends on your use case; a friend and I have just finished filling 64 large chests with cobblestone, and keep in mind this is only one resource out of the many that exist in the game right now. We intend on filling up roughly as many chests with wood, because some of these builds will be large.
I'm anticipating the metropolis will be 10,000 x 10,000 blocks at the bare minimum from North to South and East to West by the time it is finished, unless something interferes with the project, like ill health or death in real life (no second chances with that one), and the skyscrapers would end up being 128 blocks tall or a little more.
As for your comment about automation: no, they will not be mob grinders. These are chests stored in a warehouse with no hoppers attached, and the majority of the builds will not be using redstone. A couple of friends have built grinders out of resources either they collected or I provided them in the past on the same world; I left it to their discretion, where at most I would assist but not build it for them. I prefer not to build them and have not built any AFK farm on my current world on my own. I have found ways around the problem, and even managed to keep 11 Shulker boxes in my Ender Chest, which I earned by going to End Cities and killing Shulkers. Even without the Shulker farm a friend built, I would've been just as happy with the 11, as it's plenty for storing travel loot IMO.
The city build is going to be a big undertaking. It will involve not only stone, concrete, and bricks for the structures themselves but also wood for interior design. Billboards and stadium scoreboards will have alternating redstone lamps, which will no doubt cause massive FPS drops.
As I said before, if the FPS only dropped to a minimum of 100, from averages of 200 or higher, I would not care.
But when you experience lag spikes down to 50 or less, you do start to care, because even with G-Sync this becomes noticeable and distracting.
Others have probably pointed this out, but Minecraft is not indie anymore; there are hundreds if not thousands of people working on it, and a company worth a few billion behind it.
''If you don't dig straight down in Minecraft then you're doing it wrong.''
Quote: "But the game having issues isn't exclusive to Mojang, or to Minecraft, or to PC games in 2023. [...] I definitely don't think content updates, which have only improved the game for me, should stop just because my own personal idea of what game should be might be different."
I'd just like to chime in on your post after having time to think it through. We may be getting our wish soon; if this source is any indication, we are getting at least some optimization with the next update, though it may only relate to issues caused by the 1.20 update for people on the snapshot. If it has nothing to do with earlier content, then we still have to wait.
I do agree with you that gaming PCs typically do not have very good CPUs in them, as the gamer budget is mostly focused on GPU hardware, which is why we don't often see gaming PCs with more than 6 CPU cores. Even the ones that do aren't really suited for gaming, as they tend to cause bottlenecks a lot of the time, such as AMD's Bulldozer and Piledriver CPUs, which, as people lamented at the time, have very low per-core performance; perhaps not quite as bad as the AMD Jaguar CPUs used in consoles like the PS4 and Xbox One, but still bad for what they were marketed as. In fact, this is the reason I went with Intel for a brief time on my old PC build. Now that the Ryzen architecture exists, AMD has caught up to Intel, and there is no longer the wide performance gap there once was, as benchmarks have shown.
As time goes on we are seeing PCs with better CPUs in them; even mid-range builds, or builds worth less than $800/£700, are getting better. Mojang needs to acknowledge this; they can't just keep expecting people to spend money upgrading their PCs forever, year after year, like people often do with contract smartphones. The way they handle updates just isn't very good, and it isn't simply the content of the updates that people complain about from time to time, even though that has its problems too; for one thing, some people didn't like the Phantoms, which were added as the result of a mob vote.
I just don't want more content added to the game until the fundamental issues with existing content are patched first.
TMC has also complained about older versions of the game crashing for him in Java, if I remember correctly, which shouldn't happen either, high-end PC or not. Clearly there is always some chance of the game crashing regardless of how well it is programmed, because of other factors like hardware faults or interruptions, but when it constantly does so as a result of the game not functioning the way it should, it's a legitimate complaint that needs to be resolved. It's not a free game; people paid for licensed copies, and as paying customers they deserve a quality product.
Just goes to show that no matter how much hardware you throw at something, software efficiency matters.
And with hardware increasingly approaching limits on how much faster it can get (specifically CPUs, and specifically the difficulties of shrinking process nodes), the software side is going to be vital for improving things.
Absolutely; there are only so many transistors that will physically fit on any chip before they can go no further. We are approaching this limit now: eventually, when transistor sizes become too small, quantum tunneling begins to affect them and causes unpredictable and undesirable behaviour. At that point the only way to make PCs faster would be to increase the physical size of the processors or add more CPUs, but PCs with 2 or more CPUs are typically not used for gaming, and they're too expensive for most consumers anyway, so I don't see that being a viable option anytime soon.
Video card hardware will not escape the transistor-size limitation either and will suffer the same fate. Beyond this, it is up to software developers to put more thought into how they design their applications and not create unnecessary strain on hardware if it can be helped.
Many solutions have been proposed to get around this problem, including Nvidia's DLSS, which is basically a clever way of upscaling, using AI to determine what an image should look like while running the game at a lower resolution. AMD's FSR does the same thing. These are not perfect solutions by any means, but they can boost frame rates considerably, and even Minecraft has been shown to benefit from DLSS when switched on.
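Most of the frame-rate benefit of DLSS/FSR-style upscaling comes from shading fewer pixels per frame. A rough sketch of the pixel counts (the internal resolution here is a typical "quality mode" choice for a 4K output, used only for illustration):

```python
# Pixels shaded per frame at native 4K vs. a lower internal resolution
# that is then upscaled to 4K (DLSS/FSR-style).
def pixels(width: int, height: int) -> int:
    return width * height

native_4k = pixels(3840, 2160)   # 8,294,400 pixels at native 4K
internal = pixels(2560, 1440)    # 3,686,400 pixels at the internal resolution

print(native_4k / internal)      # 2.25 -> 2.25x fewer pixels to shade
```

The upscaling pass itself has a cost, so real-world gains are smaller than the raw pixel ratio, but this is why the technique helps GPU-bound games in particular.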
CPUs, just like everything else, can simply get bigger. Not enough space? Then they just make the socket bigger; we've already seen innovation in this area with sockets like X79 back in the day and, in more modern times, Threadripper.
Chiplets are another area with the same reasoning behind them, and also part of why we got Threadripper.
You will see that these were both cases of oversized sockets, and once they worked out the tech, they brought it back down to smaller standards, which is what they have done. Things get bigger and then smaller until they have to go bigger again.
I mean, we've already seen it for a long time: AGP cards were bigger than GT cards, but AGP was smaller than the earlier PCI cards. GT cards were small and kept getting bigger, GTX got bigger and bigger, RTX is still getting bigger, and it's the same for AMD cards. Eventually it has to be the same for CPUs; with CPUs we already saw it in the past, then it went away because they found other innovations, and now it's coming back because we need to utilize it.
They are trying to innovate in the most efficient ways they can, so they don't want to make big computer parts unless they have to.
I mean, we did go from computers the size of entire rooms down to desktops and portable PCs because of microchips, and in the future we may need to go back to room-sized computers if the required computational power demands it; no other way around it.
I don't think this is realistic, but it's just an extreme example.
I think a more realistic example would be the socket having to get so big that everything else on the PCB has to accommodate it; then maybe we will see motherboards getting bigger and a new standard larger than full-size ATX.
IMO we will plateau soon anyway, because we will hit a wall where the need for faster parts diminishes; there is realistically only so much we can do, and I think we are at the pinnacle of that understanding.
Unless some software or hardware comes out of nowhere, or we start rendering 4D, or there is some other warrant for more, there is a ceiling of computational needs versus the warranted need to justify making parts bigger and faster. We also only have finite resources to make the parts with, which are diminishing as well.
We are also on a slower timeline than people think; in most cases the differences are milliseconds when comparing how fast parts are to older ones, and a 5 GHz base clock is really only a recent thing; we have not been here for long.
Do we really need to get faster right now? IMO, not really; a good application can run on computers that are 10-15 years old with no performance issues, thanks to a good codebase and, in a game's context, good rendering practice. Take World of Warcraft as a good example: that game can run on anything because it is coded well; PCs from 20 years ago can still run it.
The case for faster, bigger parts is vertical progression. Computers progress horizontally as well, but there is a need for them to go more horizontal as opposed to vertical, which is the more dominant direction. Money is also obviously the driving factor, because computing is a massive industry, so the push for new tech is heavily biased by money.
Computers are the driving force behind so much industry, and this is a big reason why we're told we need to keep getting faster, better computers; it's not really the case, though.
We will go back and forth between bigger and smaller until the parts need to get bigger again. This has been the case since we started using the Gregorian calendar and the word "computer" was coined, well before the first computer even existed.
Quote: "imo we will plateau soon anyway because we will hit a wall where the need for faster parts is diminishing [...] we will go back and forth between bigger and smaller until the parts need to get bigger again."
IMO 5 GHz CPUs will also need to become the norm for gaming, not just having more than 4 cores. But processors like this, even with current-gen architecture, need high-quality heatsinks. Increased L3 cache sizes do help as well, but it's generally better to have faster cores; more cores only matter if the programs you're using can actually use them. Minecraft can use multiple cores, but I've only seen multicore usage increase during chunk loading; after the chunk updates finish, it places the majority of the burden on a single core for some reason, even though I think things like redstone and entities/mob AI should be offloaded onto other cores where possible.
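The "faster cores beat more cores" point is essentially Amdahl's law: if most of the work in a game tick stays on one thread, extra cores barely help. A sketch of the arithmetic (the 20% parallel fraction is an assumption chosen for illustration, not a measured figure for Minecraft):

```python
# Amdahl's law: overall speedup from n cores when only a fraction p
# of the work can run in parallel; the rest stays on one core.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.2  # assume only 20% of a tick parallelizes (illustrative)
print(round(amdahl_speedup(p, 4), 2))   # 1.18 -> ~1.18x with 4 cores
print(round(amdahl_speedup(p, 16), 2))  # 1.23 -> ~1.23x with 16 cores
# By comparison, a core that is simply 25% faster gives 1.25x on everything.
```

Under that assumption, going from 4 to 16 cores buys almost nothing, which is consistent with the observation that the game leans on a single core outside of chunk loading.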
The problem is keeping CPU temperatures below the junction limit for long periods of time. Thermal stress is a big problem with overclocking, but it can also become a problem over time with stock-clocked CPUs; it happens more slowly, but it's inevitable, because the continuous heating of hardware creates wear and tear. So there is obviously a hard limit on how high clock speeds can go before it becomes too dangerous or impractical.
I agree that hardware will eventually have to increase in physical size again when more power becomes necessary, but in a game like Minecraft, as Bedrock Edition on Nvidia RTX GPUs has shown, playable frame rates are possible on current-gen hardware even with ray tracing enabled. At 4K resolutions I think that's asking too much at this time; a lot of games benchmarked today, including Cyberpunk 2077, struggle to be playable no matter how good the video card is. I'm not saying it can't be done, but it is difficult to get there.
In my opinion the sweet spot for PC gaming as far as resolution goes is 1440p or 1080p, as it is easier to achieve playable frame rates with the settings cranked up and monitors with those native resolutions are generally more affordable.
A 1440p monitor did help even in Minecraft; I noticed the zigzag artifacts happened far less often on distant objects, making things like wheat fields and other crops look more defined, and honestly I am happy with the monitor I have, even though I'm sure a 4K display of the same size would be better. I can't wait to see what it looks like with ray tracing, which is my intention once I get the AMD 6700 XT I managed to pick up for a good deal on Amazon. Even if it's not playable with ray tracing, I can at least get some screenshots and short videos with it on.
Yeah, faster cores are better; many cores mean nothing in many cases on a per-application basis, since most games really only need a few cores/threads, not 20 or more.
The only time you really benefit from lots of cores is with the OS, or if you want to run a gazillion apps at once.
The first computers were analog engines with moving/rotating parts, big in scale; they started getting smaller and then bigger again. Then we moved away from those, and computers became electronic and the size of entire rooms, which kept getting bigger and smaller, so much so that they could eventually fit onto a table; and those table computers have also had a long history of going bigger and smaller and bigger again.
Yeah, like we already run them near the boiling point of water now; well, their max TDP can push temperatures up to that point. It translates differently because it's not water, but thermodynamics is another area where it's just going to get impractical and we will hit a wall.
We would need really big sockets and really big heatsinks to accommodate it, so there is a line drawn and a definite ceiling.
I like my big full-size ATX computers, but I don't really want them getting much bigger, and I don't like small computers either; just a personal preference. Mini-ATX is kind of gross, even though I have all sizes, ITX and nano too, in my collection. I like all computers, but I think ATX is big enough, tbh.
Obfuscation is a really funny thing, you know? They give us their mappings; I can access the de-obfuscated code using the mappings provided by Mojang.
Honestly, though, this game has been receiving updates for way too long, and they need to make a new game.
Incidentally, the game supports up to 255 background threads for things like world generation:
Various background tasks including worldgen are executed on a background thread pool. Its size equals the amount of available CPU threads minus one, but there was an upper limit of 7. Now this upper limit is 255. This should help higher-end machines with world-gen performance.
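The sizing rule the changelog describes can be sketched in a few lines (the class and method names here are my own, not Mojang's - this is just the arithmetic, fed into a standard `java.util.concurrent` pool):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkerPool {
    // Worker count = available CPU threads minus one, clamped to an upper
    // limit that was raised from 7 to 255, and never below one worker.
    static int workerCount(int availableThreads, int cap) {
        return Math.max(1, Math.min(availableThreads - 1, cap));
    }

    public static void main(String[] args) {
        // An 8-thread CPU gets 7 workers under either cap...
        System.out.println(workerCount(8, 255));  // 7
        // ...but a 32-thread CPU was stuck at 7 before, and gets 31 now.
        System.out.println(workerCount(32, 7));   // 7
        System.out.println(workerCount(32, 255)); // 31

        ExecutorService pool =
            Executors.newFixedThreadPool(workerCount(8, 255));
        pool.shutdown();
    }
}
```

This is why the change only helps "higher-end machines": anything with 8 or fewer threads was never hitting the old cap in the first place.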
At the same time, I've seen people complain about how long it takes to generate a new world - I don't use any multithreading at all and can generate one in 2-3 seconds, and in only half the time of vanilla 1.6.4 despite being so much more complex (even more so than 1.19, which does have twice the ground depth to generate but if they implemented it the way I did in my "double/triple height terrain" mods it wouldn't add much to the time* since simply filling in the data with stone is far less expensive than generating 3D noise/heightmaps, which were only being used for the "vanilla" terrain on top of the extended depth).
*Example, this was a Superflat world set to a "Mega Forest" biome and a depth of 127 layers, 64 more than default:
2023-04-24 18:44:05 [SERVER] [INFO] Starting integrated minecraft server version 1.6.4
2023-04-24 18:44:05 [SERVER] [INFO] Generating keypair
2023-04-24 18:44:06 [SERVER] [INFO] Preparing start region for level 0
2023-04-24 18:44:07 [SERVER] [INFO] Preparing spawn area: 20%
2023-04-24 18:44:08 [SERVER] [INFO] Preparing spawn area: 59%
2023-04-24 18:44:09 [SERVER] [INFO] TheMasterCaver[/127.0.0.1:0] logged in with entity id 138 at (-95.5, 127.0, -104.5)
2023-04-24 18:44:09 [SERVER] [INFO] TheMasterCaver joined the game
In TMCW Superflat worlds can have all normal features, including caves, as I use the same code to generate them as default worlds, except for the terrain:
Preset code; note the "cave" and "animal" options, the latter enables normal passive mob spawning during world generation so there will be 100+ mobs, not the 10 that can spawn post-generation ("village" and "biome_1" have no effect since they can't generate in Mega Forest, biome ID 41, this is actually just the "Overworld" preset changed from Plains and 64 added to the stone layer):
Note that caves are not scaled up to the new deeper ground (unlike in my "double/triple height terrain" mods); the highest are around y=95, with a few rarely as high as the surface, and most are below y=64. Scaling up caves would not have a big impact on generation time though - generating all those tunnels is more expensive than a few much larger caves, since the biggest cost is tracing out the path of every tunnel within a 12 chunk radius of every chunk being generated, not the actual carving within that single chunk:
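A back-of-the-envelope sketch of why tunnel tracing dominates, assuming the 12-chunk radius works the way described above (every chunk within range has its cave seeds examined for each chunk generated):

```java
public class CaveCost {
    // Number of chunks whose tunnel paths must be traced when generating a
    // single chunk, given the cave system's chunk "range".
    static int chunksTraced(int range) {
        int side = 2 * range + 1; // square of chunks centered on the one being generated
        return side * side;
    }

    public static void main(String[] args) {
        // With a 12-chunk radius, 625 chunks' worth of tunnel seeds are
        // examined for every single chunk generated - the tracing work far
        // outweighs the carving within the one chunk.
        System.out.println(chunksTraced(12)); // 625
    }
}
```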
Yes, it still only took 4 seconds to generate and load the world - not just with twice the ground, but massive trees up to 64 blocks tall on top of that! Even the latter alone causes major issues with many mods that add giant trees, which often just make their leaves fully transparent to avoid having to calculate lighting - but not in my case, which is actually faster despite having all normal lighting, showing just how much more optimized my code/1.6.4 itself is (granted, it used to take about half a minute to generate a Mega Forest world on my old computer, but that was before any optimizations to lighting, or much else, and that computer had a CPU from 2005, hardly fast by current standards):
In addition, by default, Conifer leaves don't diffuse light. While this is also configurable, enabling light diffusing is not recommended either because it causes mob spawns during the day, and also causes extreme lag. Generating a spawn within a Redwoods biome takes 5 seconds without light diffusing, and 40 seconds with light diffusing. With light diffusing enabled, there is also extreme worldgen lag in general, with the exploration of a single player dropping TPS to 4. So, keep your TPS high, and let leaves pass light through.
All this is also with only two main threads and no "worker" threads other than a file I/O thread to save chunks, a thread for the sound system, and the JVM's own threads for JIT and garbage collection (VisualVM shows around 16 active threads in total). By contrast, modern versions also use multiple threads for lighting and rendering (at least for chunk updates, not sure how many threads but even just one means the main thread doesn't have to do the work).
Also, I generated a second world with the render distance set to 16, which affects the size of the initial spawn area - the width of the render distance plus 1, or 35x35 chunks. Even then it still took only 7 seconds to generate 1225 chunks, nearly twice as many as vanilla's 625:
2023-04-24 19:26:48 [SERVER] [INFO] Starting integrated minecraft server version 1.6.4
2023-04-24 19:26:48 [SERVER] [INFO] Generating keypair
2023-04-24 19:26:49 [SERVER] [INFO] Preparing start region for level 0
2023-04-24 19:26:50 [SERVER] [INFO] Preparing spawn area: 19%
2023-04-24 19:26:51 [SERVER] [INFO] Preparing spawn area: 34%
2023-04-24 19:26:52 [SERVER] [INFO] Preparing spawn area: 49%
2023-04-24 19:26:53 [SERVER] [INFO] Preparing spawn area: 64%
2023-04-24 19:26:54 [SERVER] [INFO] Preparing spawn area: 77%
2023-04-24 19:26:55 [SERVER] [INFO] Preparing spawn area: 92%
2023-04-24 19:26:55 [SERVER] [INFO] TheMasterCaver[/127.0.0.1:0] logged in with entity id 8222 at (-92.5, 127.0, -98.5)
2023-04-24 19:26:55 [SERVER] [INFO] TheMasterCaver joined the game
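The spawn-area math as I read the post above can be sketched as follows - this is my interpretation ("render distance plus 1" chunks outward from the spawn chunk), not code from the mod, and vanilla 1.6.4 instead always prepares a fixed 25x25 region:

```java
public class SpawnArea {
    // Side length of the prepared square, in chunks, assuming it extends
    // (renderDistance + 1) chunks in each direction from the spawn chunk.
    static int sideChunks(int renderDistance) {
        return 2 * (renderDistance + 1) + 1;
    }

    static int areaChunks(int renderDistance) {
        int side = sideChunks(renderDistance);
        return side * side;
    }

    public static void main(String[] args) {
        System.out.println(sideChunks(16)); // 35
        System.out.println(areaChunks(16)); // 1225
        System.out.println(25 * 25);        // 625 - vanilla's fixed spawn region
    }
}
```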
Also, even at these settings the game still runs within a memory limit of only 512 MB, in contrast with the minimum of 2 GB for current versions. I also doubt that the average height of a 1.18 world is 192 blocks (ground plus terrain and features); this ore distribution chart shows that terrain rapidly becomes less common above sea level, with very little above y=128 (equivalent to y=192 in 1.18). Note that coal becomes much more common above y=128 on the last chart, but the first one shows the actual number of blocks, where there is only a very small peak - showing how little actual terrain there is up there:
An analysis of the world, which includes 131 different blocks (including all variants), 672 entities, and 123 tile entities; there were also more than 3.1 million leaves (161:0 is Mega Tree leaves). The average chunk size was 9931 bytes, larger than the average given for a 1.18 world (8910 bytes); chunk size mainly depends on overall complexity - even THT averaged less despite being deeper, since it only had vanilla 1.6.4 blocks and surface terrain/features:
Dimensions: 560W x 560L x 256H (80281600 blocks) / 1225 chunks
That's with default textures though; I am pretty sure that with HD texture packs and ray tracing, the memory requirements would increase well above 512 megabytes, whichever version of the game you load or test. High-quality shaders and dynamic lighting are demanding even in Bedrock edition.
That's not getting into how much more would be required if somebody decided to use a render distance greater than 16 chunks - that is, greater than 256 blocks from the player's position (16 chunks × 16 blocks per chunk = 256).
It's no secret that Minecraft uses a lot of memory to run; if it didn't, 2 GB wouldn't be the default setting in Java edition.
And the more memory the game uses, the worse it'll perform, because, as you said, the CPU has to do garbage collection to remove unnecessary or useless information. It only makes sense for the game to store what is required at the time, whether that be loaded chunks, player inventory, etc., which gets written to a hard drive or SSD so players keep their progress in the game.
I'm no expert, but I have learned that CPU usage goes down when loading from disk instead of generating new terrain -
that is the reason some people like to pregenerate their worlds, which I think is a waste of storage space unless you know for certain you'll explore that much of a world in your lifetime. It only makes sense to generate what's needed; adding extra writes to an SSD is not only wasteful, it can shorten the life expectancy of the drive if that data constantly gets overwritten. Pregenerated worlds aren't even officially allowed on Bedrock edition.
I do wish that Bedrock edition had a world border system, though; I would like to use one to prevent the save file from getting excessively large. I'd like to limit mine to 80,000 x 80,000 blocks, including other dimensions, to stop the world save file from exceeding the capacity of the 500 GB SSD in my server.
It's no secret that Minecraft uses a lot of memory to run, if it wasn't then 2gb wouldn't be the default setting in Java edition.
And this claim is invalid because "default" settings include 16x textures, no shaders (which Java doesn't even support without mods), and a render distance of 12 chunks, which is only about half the area of 16:
Fun fact: the default render distance has always been 12 chunks, except old versions called it "Far" (for a time I thought it was 16 but the source, for both 1.6 and older versions, limits the width of the visible area to 400 blocks, or 25 chunks, the same width a render distance of 12 gives, or 12 * 2 + 1).
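The "Far" cap described above works out like this (a sketch of the arithmetic, not the actual source - the 400-block limit and 16-block chunk width are the figures given in the post):

```java
public class VisibleWidth {
    static final int MAX_WIDTH_BLOCKS = 400; // limit on the visible square's width
    static final int CHUNK_WIDTH = 16;

    // Width of the visible area in chunks: 2 * renderDistance + 1,
    // capped at 400 blocks = 25 chunks.
    static int widthChunks(int renderDistance) {
        int width = 2 * renderDistance + 1;
        return Math.min(width, MAX_WIDTH_BLOCKS / CHUNK_WIDTH);
    }

    public static void main(String[] args) {
        System.out.println(widthChunks(12)); // 25 chunks - exactly at the cap
        System.out.println(widthChunks(16)); // still 25 - anything higher is clamped
        System.out.println(widthChunks(8));  // 17
    }
}
```

So "Far" and a modern render distance of 12 produce the same 25-chunk-wide visible area, which is why the two are equivalent.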
Even if I added the ability to set it higher, it would still have no effect on the normal requirements, which are incredibly low by modern standards; as I said before, I've added hundreds, if not over a thousand, features, yet the game is no more demanding than vanilla 1.6.4 - a very old and lightweight version which would run on 20 year old computers - and this comparison favors vanilla, because it is loading less than half as many chunks, yet somehow uses just as much memory and gets less FPS:
TMCWv5 adds more new content than all previous versions combined, bringing the total number of features added by all versions to 325 blocks, 103 biomes, 45 entities/variants, 34 items, and 7 enchantments, in addition to dozens of new structures, trees, changes to game mechanics, and more.
Also, my computer is hardly new at all - in fact, the CPU came out 11 years ago and I can see at least double, if not more, the performance on the latest hardware, even if not top-end:
This is all you needed to run 1.6, according to Mojang themselves - and literally the only addition since that has impacted resource usage is the deeper underground (this is also why Amplified says/said it needs a "beefy computer", yet my modded "extreme mountains" biomes surely have to be more extreme even if locally, but large enough to fill the entire loaded area):
Minimum Requirements:
CPU : Intel P4/NetBurst Architecture or its AMD Equivalent (AMD K7)
RAM : 2GB
GPU : Intel GMA 950 or AMD Equivalent with OpenGL 1.2 Support
HDD : At least 90MB for Game Core and Sound Files
Java Runtime Environment (JRE) 6 or up is required to be able to run the game.
Recommended Requirements:
CPU : Intel Pentium D or AMD Athlon 64 (K8) 2.6 GHz
RAM : 4GB
GPU : GeForce 6xxx or ATI Radeon 9xxx and up with OpenGL 2 Support (Excluding Integrated Chipsets)
HDD : 150MB
This is a comparison of the recommended CPUs to what I had in my old computer - which outperformed them by a factor of about 2 and even single-thread performance was significantly better despite running at only 2.2 GHz (they probably did just mean "Athlon 64" and not "Athlon 64 X2" given the difference in performance):
I also had no issues with 3 GB of RAM, with about 1 GB still free while the game was running (I did have some memory issues due to having a 32 bit OS, which limited the per-process usage to about 1.5 GB, but that was resolved by allocating less memory, as I've shown before 512 MB is more than enough and back then Optifine actually recommended allocating even less, just 350 MB; I don't know who started it but the "more RAM = more FPS" myth is just that):
Tips and tricks
Launch Minecraft with less memory (yes, really). Usually it does not need more than 350 MB and runs fine on all settings with the default texture pack. By default Java allocates way too much memory (1 GB), which may get swapped to disk, and the overall performance may suffer a lot.
Again, as I've said countless times (I'm getting really tired and frustrated), the issues with newer versions are due to unimaginably bad coding practices - sp614x is absolutely correct when they claim that 90% of the memory use / garbage creation is completely unnecessary (even if newer versions created only 10x as much memory churn, the JVM would still need more memory so the garbage collector doesn't have to run as often):
Why is 1.8 allocating so much memory?
This is the best part - over 90% of the memory allocation is not needed at all. Most of the memory is probably allocated to make the life of the developers easier.
- There are huge amounts of objects which are allocated and discarded milliseconds later.
- All internal methods which used parameters (x, y, z) are now converted to one parameter (BlockPos) which is immutable. So if you need to check another position around the current one you have to allocate a new BlockPos or invent some object cache which will probably be slower. This alone is a huge memory waste.
- The chunk loading is allocating a lot of memory just to pass vertex data around. The excuse is probably "multithreading", however this is not necessary at all (see the last OptiFine for 1.7).
- the list goes on and on ...
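sp614x's BlockPos point can be illustrated with a sketch (these class names are mine, not Mojang's, though modern versions do ship a mutable variant for hot paths): an immutable position forces one allocation per position visited, while a reusable mutable cursor produces essentially no garbage.

```java
public class PosChurn {
    // Immutable: every derived position is a fresh object.
    record ImmutablePos(int x, int y, int z) {
        ImmutablePos up() { return new ImmutablePos(x, y + 1, z); } // new object per call
    }

    // Mutable cursor: one object, reused for every position.
    static class MutablePos {
        int x, y, z;
        MutablePos set(int x, int y, int z) { this.x = x; this.y = y; this.z = z; return this; }
    }

    // Scanning one 16x16x256 chunk column position-by-position:
    static long immutableAllocs() {
        long count = 0;
        for (int x = 0; x < 16; x++)
            for (int z = 0; z < 16; z++)
                for (int y = 0; y < 256; y++) {
                    ImmutablePos p = new ImmutablePos(x, y, z); // garbage, discarded ms later
                    count++;
                }
        return count; // 65536 short-lived objects for a single chunk
    }

    static long mutableAllocs() {
        MutablePos cursor = new MutablePos(); // one allocation, reused throughout
        for (int x = 0; x < 16; x++)
            for (int z = 0; z < 16; z++)
                for (int y = 0; y < 256; y++)
                    cursor.set(x, y, z);
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(immutableAllocs()); // 65536
        System.out.println(mutableAllocs());   // 1
    }
}
```

Multiply that 65536-per-chunk figure by every chunk operation per tick and the "huge amounts of objects which are allocated and discarded milliseconds later" becomes concrete.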
(though I can't imagine how it can be "easier" for the developers when you have literally 10x as much code complexity as well)
Also, imagine a render distance of 1024 chunks - no, not blocks, chunks:
Over 4 million chunks, a billion blocks per layer... such are the incredible feats the game could perform if properly optimized and using modern rendering methods like level of detail, which is the most important factor here. Past some distance even an entire chunk is too small to make out, so why render it at full detail? Multiple chunks could even be rendered as a simple cube with a single blended color, not even an actual texture - this is exactly what most games do, but not Minecraft. It is otherwise impossible to render that much on any hardware, which might require terabytes of VRAM (if you watch the video, you can see the detail increase as they get closer).
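A minimal level-of-detail selection sketch - to be clear, this is a hypothetical illustration of the technique, not anything vanilla does (the absence of LOD is precisely the complaint); the distance thresholds are arbitrary:

```java
public class ChunkLod {
    // Returns the mesh resolution, in cells per 16-block chunk edge, for a
    // chunk at the given distance (in chunks) from the camera. Each step
    // halves the resolution until far chunks collapse to a single colored cube.
    static int lodCells(int chunkDistance) {
        if (chunkDistance <= 32) return 16;  // full block detail near the player
        if (chunkDistance <= 64) return 8;
        if (chunkDistance <= 128) return 4;
        if (chunkDistance <= 256) return 2;
        return 1; // one blended-color cube per chunk (or group of chunks)
    }

    public static void main(String[] args) {
        System.out.println(lodCells(10));   // 16 - fully detailed
        System.out.println(lodCells(100));  // 4  - coarse mesh
        System.out.println(lodCells(1024)); // 1  - single cube at the horizon
    }
}
```

With a scheme like this, the vertex count stops growing quadratically with render distance, which is what makes 1024-chunk distances plausible at all.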
I said the memory requirements increase if texture packs and shaders are used, not with the default 16x textures you mentioned.
If you use the default settings - which implies no mods, just vanilla Minecraft - then the memory requirements should not be very high at all, but unfortunately, because of the poor optimization of more recent versions, they can be.
A 1024 chunk render distance in the vanilla game, which has nothing to do with shaders or texture packs, would be an incredible feat if it could be done with minimal impact on performance - I agree with you - but it hasn't yet been done in vanilla Minecraft; the maximum render distance officially supported in 64-bit Java is 32 chunks without mods. https://minecraft.fandom.com/wiki/Options
As I said before, I would stay with a limit of 64 chunks render distance on my worlds if I could do it without a major performance hit. As Princess_Garnet pointed out before, seeing an excessive number of biome transitions becomes a problem when you crank the render distance up too high. Even if performance weren't the issue, I could be satisfied with 64 chunks, which is still quite a long way from your position - 1024 blocks; with this I have been able to see End cities on the outer islands without even having to bridge the void first.
You've got a good point about not rendering distant chunks at full detail; that would be a great way to get the excessive hardware usage down - minimize the rendering/texture detail of the objects most distant from the player. Reducing the number of entities being simulated or ticked in the regions furthest from a player would help reduce CPU load as well: not just render distance, but the ticking radius, which affects redstone, mob AI, crop growth, and so on. All of these things should be cut back when the player is nowhere near them.
Does a player really need a fully automated redstone contraption to work 128 blocks away? Keep in mind Bedrock edition supports a simulation distance of up to 12 chunks, which is 192 blocks. Unless it's something like a minecart with chest, which may in some circumstances be useful for sending items to other players on multiplayer servers, the amount of processing redstone is allowed to create on any given world should be cut down.
1440p monitors did help even in Minecraft, I noticed the zigzag artifacts happened far less often on distant objects, making items like wheat fields and other crops look more defined
This is because the cause of that is ultimately "not enough resolution to resolve the final result without side effects", for lack of a better phrase. It's what often causes the "moire pattern" effect.
This same reason is a big part of why very far render distances are in diminishing-returns territory for me. It's not just a performance thing, and it's not even just a "too many biome transitions visually close together" thing either - it's simply too much information in too little space. Once you get to higher distances, you're squeezing more and more information into a smaller and smaller portion of (relative) screen space. I feel like this is already the case at "just" a render distance of 32 chunks, and that the 24 to 32 range would be the ideal range for me. I'm not saying higher has no benefit ever - with elytra, and at high altitudes especially, it may become more worthwhile - but on the ground? Underground? In the nether? It's very much in that wasteful territory of diminished returns for me.
And in some cases (like in the end), I find it would even be preferred not to have a high render distance. Like, I wouldn't want to see the outer parts of the end from the central island. It is supposed to feel isolated from the rest. Render distances allowing me to see the outer end from the center island would break the immersion of that part of the game for me (especially prior to defeating the dragon, but even after that). Same for spotting multiple end cities from so far. I feel that takes away from some of the immersion and some point of the game. But I realize that's my own preferences and some people might not mind those things.
In real life, the curvature of the planet and "biomes" not being so small makes far render distances in the game less appealing too. And I know, I know, games shouldn't mimic real life, but I find it's still a reason why I find far render distances unappealing after a point.
You've got to admit, though, some of the updates we've been getting have been lackluster, and mods in Java edition do demonstrate that more performance could be squeezed out of the current vanilla game before it would become fair to tell people getting lag that they just don't have a good PC.
As has been mentioned before, since PCs do not have unlimited resources, Mojang need to look at areas of the game which could be done more efficiently.
Why are chests - blocks which remain mostly static and inanimate until used - such a huge burden? And why can't some of their rendering be offloaded to the GPU or other CPU cores? They shouldn't be causing FPS drops anywhere near as severe as they do; the fact that they cause people problems is reason enough to expect Mojang to look into what could be done.
I'd rather the existing issues with the game be patched before they start piling more content into the game; it seems the more updates we get, the more bloated the game becomes. 1.18 was a clear example - although, to be fair, there is more terrain generating in Caves and Cliffs, which did make the increased hardware demands logical.
We should just go back to having chests be blocks like they were in the beta days tbh
Except, do you really need thousands of chests? This is a cube of 4096 chests - even my first world, with millions of resources accumulated over half a year of playtime (most of it spent caving), only has 625, which are capable of storing more than a million individual items, or nearly 10 million resources stored as blocks of 9 each. Around 1/10 of the chests are used for unstackable items like enchanted books, music discs, diamond horse armor, and potions dropped by witches, and many of the rest are empty, as I create corridors of 16 double chests (32 chests) at a time. If you really need more, you must be using automated farms to collect resources in quantities that you could never hope to use (the same even goes for myself, with all manually collected resources, as evidenced by the need for such large-scale storage of millions of items: over 2 million coal, 3/4 of a million iron, and even over half a double chest of diamond blocks):
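The capacity figures above check out - this assumes single chests (27 slots), 64-item stacks, and block-compressed resources at 9 items per block:

```java
public class ChestMath {
    static final int SLOTS = 27;     // slots in a single chest
    static final int STACK = 64;     // max items per stack
    static final int PER_BLOCK = 9;  // resources compressed into one block

    static long itemCapacity(int chests) {
        return (long) chests * SLOTS * STACK;
    }

    static long resourceCapacity(int chests) {
        return itemCapacity(chests) * PER_BLOCK;
    }

    public static void main(String[] args) {
        System.out.println(itemCapacity(625));     // 1080000 - "more than a million"
        System.out.println(resourceCapacity(625)); // 9720000 - "nearly 10 million"
    }
}
```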
Tile entities are outlined in yellow; the bar at the top is the distance from one end to the other (it looks longer due to the perspective, the underground storage area is at y=58), which is more than the 64 block render distance of tile entities so not every one will be visible unless you are looking down from high in the air, mitigating the impact of rendering them (the entire area is a single layer, as opposed to a 3D structure like the cube of chests):
If they are that much worse in newer versions, that's just another sign of how bad the coding is - especially since 1.17 completely eradicated the ancient OpenGL rendering that was already obsolete the day Notch started making the game, and which is responsible for so many graphical issues on older versions, since driver makers don't bother testing them; if you have an Intel GPU you can't even play 1.6.4 without massive graphical glitching unless you downgrade to a driver from years ago (which will soon, if not already, no longer be feasible, since other software needs newer drivers). All "fixed-function" OpenGL rendering is actually emulated, as there is no hardware support of any kind, and as you know emulation is slower - so why is the newer rendering system still slower? Or, why have people made mods for Java, which runs in a virtual machine, that outperform Bedrock, written in native C++? Because the code you write is a major factor that affects performance*:
*Case in point: vanilla 1.6.4 gets only half the FPS because it renders chests less efficiently - even Optifine doesn't make this optimization (and it is in fact basically useless outside of reducing the impact of chunk updates, which still had a significant impact on my old computer, though I never saw evidence of its "doubles FPS" claim; nor does it do anything about memory usage - vanilla/Optifine use 3x more memory on an otherwise empty Superflat world):
TheMasterCaver's First World - possibly the most caved-out world in Minecraft history - includes world download.
TheMasterCaver's World - my own version of Minecraft largely based on my views of how the game should have evolved since 1.6.4.
Why do I still play in 1.6.4?
Right, and I agree with that.
But the game having issues isn't exclusive to Mojang, or to Minecraft, or to PC games in 2023. I saw this with Minecraft in 2012 when I first started playing, so it's not at all exclusive to today. I've seen this with countless other PC games over the span of decades, so it's not at all exclusive to Minecraft nor Mojang. If Minecraft and Mojang was the outlier and the rest of PC gaming throughout the years never had issues like this, I'd agree, but that's not the case.
And I'm sure this has been said before, with the reasons why, but you can't necessarily point to a landscape of mods (both content and technical/performance fixes) as if it suggests Mojang could do better, because it doesn't work that way. Of course Mojang COULD do better. But that's not mutually exclusive with the fact that expecting things to match what "could be", based on what the landscape of mods shows, just isn't how things work. Our world and system encourages this, in other words: "good enough" rather than "most efficient" is often the target. I'm not saying that's how it should be in an ideal world; just saying how it is in ours.
Again, I've said it before, and in this very thread I think, but I absolutely do think Mojang could do better on performance. But I also said Minecraft by its nature is a CPU-heavy game, and what's common is people trying to play a CPU-heavy game on devices with less CPU power than typical gaming PCs have, which typically need less CPU power and more GPU power. So seeing this game have a reputation for being taxing is not at all surprising to me. And yes, I'm well aware that devices that aren't necessarily lower powered also have issues. Again, neither here nor there. We don't have infinite hardware performance on tap, so it's common thinking that "high end hardware" shouldn't have issues, and it irritates me when I see people complain and use the excuse that they should always get perfect performance and have no issues because they have "high end hardware", when that's an arbitrary thing.
But anyway, the subject of performance is very much a separate subject from what this thread is suggesting, which is more or less along the lines that Minecraft should just stop updating because it's not trying to change into whatever the thread starter's idea is of what it should become. And I highly disagree with that idea. And ironically, even if the thread starter did mention performance issues as one of the issues with the game (which it is!), their big thing was that triple A titles "do things better", yet newer triple A titles are having MAJOR issues with technical and performance matters on the PC side lately. The newest consoles have led to a big jump in hardware requirements. Shader compilation/stutter is and has been an increasing thing on PCs but not consoles (due to them being a fixed set of hardware). And so on.
As for the content, I don't find it lackluster. Relatively speaking, 1.13+ has been much improved compared to updates before that. Now maybe Minecraft isn't exactly the game I'd personally want it to be, but that's another subject altogether, and I definitely don't think content updates, which have only improved the game for me, should stop just because my own personal idea of what the game should be might be different.
It depends on your use case; a friend and I have just finished filling 64 large chests with cobblestone, and keep in mind this is only one resource out of the many that exist in the game right now.
We intend on filling roughly as many chests with wood, because some of these builds will be large.
I'm anticipating the metropolis will be 10,000 x 10,000 blocks at the bare minimum in terms of North, East, South and West by the time it is finished, unless something interferes with the project, like ill health or death in real life - no second chances with that one. The skyscrapers would end up being 128 blocks tall or a little more.
As for your comment about automation: no, they will not be mob grinders. These are chests stored in a warehouse with no hoppers attached, and the majority of the builds will not use redstone. A couple of friends have built grinders on the same world, out of resources either they collected or I provided them; I left it to their discretion, where at most I would assist but not build it for them, and I prefer not to build them myself - I have not built any AFK farm on my own on my current world. I have found ways around the problem; I even keep my 11 Shulker boxes in my Ender Chest, which I earned by going to End Cities and killing Shulkers. Even without the Shulker farm a friend built, I would've been just as happy with the 11, as it's plenty for storing travel loot IMO.
The city build is going to be a big undertaking. It will involve not only stone, concrete and bricks for the structures themselves, but also wood for interior design. Billboards and stadium scoreboards will have alternating redstone lamps, which no doubt will cause massive fps drops.
As I said before, if the fps drops only went down to a 100fps minimum, from averages of 200fps or higher, I would not care.
But when you experience lag spikes down to 50fps or less, you do start to care, because even with G-Sync this becomes noticeable and distracting.
Others have probably pointed this out, but Minecraft is not indie anymore; there are hundreds if not thousands of people working in its favor, and a company worth a few billion behind it.
I'd just like to chime in on your post after having time to think it through. We may be getting our wish soon: if this source is any indication, we are getting at least some optimization with the next update. However, it may only address issues being caused by the 1.20 update for people in the snapshot; if it has nothing to do with earlier content, then we still have to wait.
I do agree with you that gaming PCs typically do not have very good CPUs in them, as the gamer budget is mostly focused on GPU hardware, which is why we don't often see gaming PCs with more than 6 CPU cores. Even the ones that do aren't really suited for gaming, as they tend to cause bottlenecks a lot of the time - such as AMD's Bulldozer and Piledriver CPUs, whose per-core performance, as people lamented at the time, is very low. Perhaps not quite as bad as the AMD Jaguar CPUs used in gaming consoles like the PS4 and Xbox One, but they were still bad for what they were touted or marketed as at the time; in fact, this is the reason I briefly went with Intel on my old PC build. Now that the Ryzen architecture exists, AMD has caught up to Intel, and there is no longer the wide performance gap there once was, as benchmarks showed.
As time goes on we are seeing PCs with better CPUs in them; even mid-range builds, or builds worth less than $800/£700, are getting better. Mojang needs to acknowledge this - they can't just keep expecting people to spend money upgrading their PCs forever, year after year, like people often do with contract smartphones. The way they handle updates just isn't very good, and it isn't only the content that people complain about from time to time, though that has its problems too - for one thing, some people didn't like the Phantoms, which were added as the result of a mob vote.
I just don't want more content added to the game until the fundamental issues with existing content are patched first.
TMC has also complained about older versions of the game crashing for him in Java, if I remember correctly, which shouldn't happen either,
high-end PC or not. Clearly there is always a chance of the game crashing regardless of how well it is programmed, because of other factors like hardware faults or interruptions, but when it constantly crashes as a result of the game not functioning the way it should, that's a legitimate complaint that does need to be resolved. It's not a free game; people paid for licensed copies, and as paying customers they deserve a quality product.
Minecraft update fixes lag spikes and archaeological mistake | PCGamesN
Just goes to show that no matter how much hardware you throw at something, software efficiency matters.
And with hardware increasingly approaching its limits on getting faster (specifically CPUs, and specifically the difficulty of shrinking process nodes), the software side is going to be vital for improving things.
ok but AAA is better than indie? Ever heard of Pizza Tower?????
NamePerson was_taken
Absolutely - there are only so many transistors that will physically fit on any chip before they can go no further. We are approaching this limit now; eventually, when transistor sizes become too small, quantum tunneling begins to affect them and causes unpredictable and undesirable behaviour. At that point the only way to make PCs faster would be to increase the physical size of the processors or add more CPUs, but PCs with 2 or more CPUs are typically not used for gaming, and besides, they're too expensive for most consumers to afford, so I don't see this being a viable option anytime soon.
Video card hardware will not escape the transistor size limitation either and will suffer the same fate. Beyond this, it is up to software developers to put more thought into how they design their applications and not create unnecessary strain on hardware if it can be helped.
Many solutions have been proposed to get around this problem, including Nvidia's DLSS, which is basically a clever way of upscaling, using AI to determine what an image should look like while running the game at a lower resolution. AMD's FSR does the same thing. These are not perfect solutions by any means, but they can boost frame rates considerably, and even Minecraft has been shown to benefit from DLSS when switched on.
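As a rough sketch of why upscalers help: rendering internally at a fraction of the output resolution shades far fewer pixels. The 0.67 scale factor below is only an illustrative stand-in for a "quality" upscaling mode, not an exact DLSS/FSR value:

```java
public class UpscaleSketch {
    // Pixels actually shaded when rendering at a fraction of the output resolution.
    public static long internalPixels(int outWidth, int outHeight, double scale) {
        long w = Math.round(outWidth * scale);
        long h = Math.round(outHeight * scale);
        return w * h;
    }

    public static void main(String[] args) {
        long nativePx = internalPixels(3840, 2160, 1.0);  // full 4K: 8,294,400 pixels
        long scaledPx = internalPixels(3840, 2160, 0.67); // roughly 45% as many
        System.out.println(100.0 * scaledPx / nativePx);
    }
}
```

The upscaler then reconstructs a full-resolution image from the smaller one, which is why frame rates can improve so much at 4K.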
The CPU, just like everything else, is subject to getting bigger. Not enough space??? Then they just make the LGA bigger in size - we already saw innovation in this area with sockets like X79 back in the day, and in more modern times Threadripper.
Chiplets are another area with the same reasoning behind them, and also part of why we got Threadripper.
You will see that these were both cases of oversized sockets; once they worked out the tech, they improved on implementing it back to smaller standards, which is what they have done. Things get bigger and then smaller until they have to go bigger again.
I mean, we've already seen it for a long time: AGP cards were bigger than GT cards but smaller than the previous PCI cards. GT cards were small and kept getting bigger, GTX got bigger and bigger, RTX is still getting bigger; it's the same for AMD cards, and eventually it has to be the same for CPUs. With CPUs we already saw it in the past, then it went away because they found other innovations, and now it's coming back because we need to utilize it.
They are trying to innovate in the most efficient ways they can, so they don't want to make big computer parts unless they have to.
I mean, we did go from computers the size of entire rooms down to desktop and portable PCs because of microchips, and in the near future we may potentially need to go back to computers the size of rooms again if the computational power requires it - no other way around it.
I don't think this is realistic, but it's just an extreme example.
I think a more realistic example would be the LGA having to get bigger, to the point of being so big that everything else on the PCB has to accommodate it; then maybe we will see motherboards getting bigger and a new standard larger than full-size ATX.
https://en.wikipedia.org/wiki/Moore's_law
IMO we will plateau soon anyway, because we will hit a wall where the need for faster parts diminishes. There is realistically only so much we can do; I think we are at the pinnacle of that understanding.
Unless some software or hardware comes from far out of nowhere, or we start rendering in 4D, or there is some other warrant for more, there is a ceiling where computational need no longer justifies making parts bigger and faster. We also only have finite resources to make the parts with, which are diminishing as well.
We are also on a slower timeline than people think - in most cases the differences are milliseconds when comparing how fast parts are to older ones, and a 5 GHz base clock is really only a recent thing; we have not been here for long.
Do we really need to get faster at this time? IMO not really - a good application can run on computers that are 10-15 years old with no performance issues, thanks to a good codebase and, in a game's context, good rendering practice. Take World of Warcraft as a good example: that game can run on anything because it is coded well - PCs from 20 years ago can still run it.
The case for faster, bigger parts is vertical progression. Computers progress horizontally as well, but there is a need for them to go more horizontal as opposed to vertical, which is the more dominant direction. Money is also obviously the driving factor, because computing is a massive conglomerate, so the need for new tech is heavily biased by money.
Computers are the driving force behind so much industry, and this is a big reason why they tell us we need to keep getting faster, better computers - it's not really the case though.
We will go back and forth between bigger and smaller until the parts need to get bigger again. This has been the case since we started using the Gregorian calendar and the word "computer" was coined, which was well before the first computer even existed.
IMO 5 GHz CPUs will also need to become the norm for gaming, not just having more than 4 cores. But processors like this, even with current-gen architecture, need high-quality heatsinks installed. Increased level 3 cache sizes help as well, but it's generally better to have faster cores; more cores only matter if the programs you're using are able to use them properly. Minecraft can use multiple cores, but I've only seen multicore usage increase during chunk loading - after the chunk updates finish, it places the majority of the burden on a single core for some reason, even though I think things like redstone and entities/mob AI should be offloaded onto other cores where possible.
The problem is keeping the CPU's heat below the TJunction for long periods of time. Thermal stress is a big problem with overclocking, but it can also become a problem over time with stock-clocked CPUs - it happens more slowly, but it's inevitable, because the continuous heating of hardware creates wear and tear. So there is obviously a hard limit on how high clock speeds can go before it becomes too dangerous or impractical.
I agree that hardware will eventually have to increase in physical size again when more power becomes necessary, but in a game like Minecraft, as Bedrock edition on Nvidia RTX GPUs has shown, playable frame rates are possible on current-gen hardware even with ray tracing enabled - although at 4K resolution I think that's asking too much at this time. A lot of games benchmarked today, including Cyberpunk 2077, struggle to be made playable no matter how good the video card is. I'm not saying it can't be done, but it is difficult to get there.
In my opinion the sweet spot for PC gaming as far as resolution goes is 1440p or 1080p, as it is easier to achieve playable frame rates with the settings cranked up and monitors with those native resolutions are generally more affordable.
A 1440p monitor did help even in Minecraft - I noticed the zigzag artifacts happened far less often on distant objects, making things like wheat fields and other crops look more defined - and honestly I am happy with the monitor I have, even though I am sure a 4K display of the same size would be better. I can't wait to see what it looks like with ray tracing, which is my intention by the time I get the AMD 6700 XT I managed to pick up for a good deal on Amazon. Even if it's not playable with ray tracing, I can at least get some screenshots and short videos with it on.
Yeah, faster cores are better; multicore means nothing in many cases on a per-application basis - most games really only need a few cores/threads, not 20 or more.
The only time you really benefit from big core counts is with the OS, or if you want to run a gazillion apps at once.
The first computers were analog engines with moving/rotating parts, big in scale; they started getting smaller and then bigger again. Then we got away from those, and computers became electronic and the size of entire rooms, which kept getting bigger and smaller - so much so that they could eventually fit onto a table. And those table computers have also had a long history of going bigger and smaller and bigger again.
Yeah, like we already run them near the boiling point of water now - well, their max TDP can get them to that point; it translates differently because it's not water - but thermodynamics is another area where it's just going to get impractical and we will hit a wall.
We would need really big LGAs and really big heatsinks to accommodate that, so there is a line drawn and a definite ceiling.
I like my big full-size ATX computers, but I don't really want them getting much bigger, and I don't like small computers either - just a personal preference; mini ATX is kind of gross, even though I have all sizes, ITX and nano too, in my collection. I like all computers, but I think ATX is big enough tbh.
Obfuscation is a really funny thing, you know? Because they give us their mappings - I can access the de-obfuscated code using the mappings provided by Mojang.
Honestly though this game has been receiving updates for way too long and they need to make a new game.
Incidentally, the game supports up to 255 background threads for things like world generation:
At the same time, I've seen people complain about how long it takes to generate a new world - I don't use any multithreading at all and can generate one in 2-3 seconds, in only half the time of vanilla 1.6.4, despite it being so much more complex (even more so than 1.19, which does have twice the ground depth to generate; but if they implemented it the way I did in my "double/triple height terrain" mods it wouldn't add much to the time*, since simply filling in the data with stone is far less expensive than generating 3D noise/heightmaps, which were only being used for the "vanilla" terrain on top of the extended depth).
*Example, this was a Superflat world set to a "Mega Forest" biome and a depth of 127 layers, 64 more than default:
In TMCW Superflat worlds can have all normal features, including caves, as I use the same code to generate them as default worlds, except for the terrain:
Preset code; note the "cave" and "animal" options - the latter enables normal passive mob spawning during world generation, so there will be 100+ mobs rather than the 10 that can spawn post-generation ("village" and "biome_1" have no effect since they can't generate in Mega Forest, biome ID 41; this is actually just the "Overworld" preset changed from Plains, with 64 added to the stone layer):
Note that caves are not scaled up to the new deeper ground, unlike in my "double/triple height terrain" mods; the highest are around y=95, with a few rarely as high as the surface and most below y=64. Scaling up caves would not have a big impact on generation time, though - generating all those tunnels is more expensive than a few much larger caves, since the biggest cost is tracing out the path of every tunnel within a 12 chunk radius of every chunk being generated, not the actual carving within that single chunk:
Yes, it still only took 4 seconds to generate and load the world - not just with twice the ground depth but massive trees up to 64 blocks tall on top of that! The latter alone causes major issues in many mods that add giant trees, which often just make their leaves fully transparent to avoid having to calculate lighting - but not in my case, which is actually faster despite having all normal lighting, showing just how much more optimized my code/1.6.4 itself is (granted, it used to take about half a minute to generate a Mega Forest world on my old computer, but that was before any optimizations to lighting, or much else, and that computer had a CPU from 2005, hardly fast by current standards):
All this is also with only two main threads and no "worker" threads other than a file I/O thread to save chunks, a thread for the sound system, and the JVM's own threads for JIT and garbage collection (VisualVM shows around 16 active threads in total). By contrast, modern versions also use multiple threads for lighting and rendering (at least for chunk updates, not sure how many threads but even just one means the main thread doesn't have to do the work).
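The point about extra ground depth being cheap can be sketched: appending solid stone below the noise-shaped terrain is a bulk array write, while the terrain above needs a density evaluation per cell. This is a generic illustration with invented names, not the actual generator code:

```java
import java.util.Arrays;

// Illustrative sketch (invented names): why extra ground depth is cheap.
public class DepthSketch {
    static final byte AIR = 0, STONE = 1;

    // A 16x16 column stored layer by layer from the bottom, 'height' layers tall.
    public static byte[] generateColumn(int height, int extraDepth) {
        byte[] blocks = new byte[16 * 16 * height];
        int perLayer = 16 * 16;
        // Cheap part: the extra depth is filled solid in one bulk write.
        Arrays.fill(blocks, 0, perLayer * extraDepth, STONE);
        // Expensive part (stubbed): the shaped terrain above would evaluate
        // 3D noise per cell; this loop just marks where that cost would live.
        for (int i = perLayer * extraDepth; i < blocks.length; i++) {
            blocks[i] = AIR; // imagine a noise-based density test per block here
        }
        return blocks;
    }
}
```

A bulk fill touches memory linearly with no per-block math, which is why deepening the world this way barely changes generation time.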
Also, I generated a second world with the render distance set to 16, which affects the size of the initial spawn area (its width corresponds to the render distance plus 1, or 35x35 chunks) - even then it still took only 7 seconds to generate nearly twice as many chunks as vanilla does (1225 vs 625):
Also, even at these settings the game still runs within a memory limit of only 512 MB, in contrast to the minimum of 2 GB for current versions. And I doubt that the average height of 1.18 is 192 blocks (ground plus terrain and features); this ore distribution chart shows that terrain rapidly becomes less common above sea level, with very little above y=128, equivalent to y=192 (note that coal becomes much more common above y=128 on the last chart, but the first one is the actual number of blocks and there is only a very small peak, showing how little actual terrain there is up there):
An analysis of the world, which includes 131 different blocks (including all variants), 672 entities, and 123 tile entities; there were also more than 3.1 million leaves (161:0 is Mega Tree leaves). The average chunk size was 9931 bytes, which is larger than the average given for a 1.18 world, 8910 bytes (chunk size is mainly dependent on overall complexity; even THT averaged less despite being deeper, since its overall composition was only vanilla 1.6.4 blocks and surface terrain/features):
(0:0),Air,38411562
(1:0),Stone,30340095
(1:1),Stone,1030714
(1:3),Stone,1088978
(1:5),Stone,859173
(2:0),Grass,297920
(3:0),Dirt,2617554
(4:0),Cobblestone,2154
(4:1),Cobblestone,399
(5:0),Wood Planks,9849
(7:0),Bedrock,313600
(8:0),Water (active),49740
(10:0),Lava (active),69499
(12:0),Sand,471
(13:0),Gravel,473346
(13:1),Gravel,483326
(14:0),Gold Ore,10529
(15:0),Iron Ore,109074
(16:0),Coal Ore,391338
(17:0),Wood,248563
(17:4),Wood,378
(17:8),Wood,361
(17:12),Wood,200793
(17:13),Wood,4341
(18:0),Leaves,460654
(18:1),Pine Leaves,48698
(21:0),Lapis Lazuli Ore,4801
(30:0),Web,1992
(31:0),(Unused Shrub),26255
(37:0),Flower,185
(37:1),Flower,32
(37:2),Flower,16
(37:3),Flower,42
(37:4),Flower,11
(37:6),Flower,13
(37:7),Flower,2
(37:8),Flower,8
(37:9),Flower,24
(37:10),Flower,43
(37:11),Flower,47
(37:12),Flower,21
(37:13),Flower,35
(37:14),Flower,40
(37:15),Flower,11
(38:0),Rose,170
(39:0),Brown Mushroom,350
(39:1),Brown Mushroom,79
(39:2),Brown Mushroom,88
(39:3),Brown Mushroom,90
(39:4),Brown Mushroom,117
(48:0),Moss Stone,1800
(49:0),Obsidian,2785
(51:1),Fire,1
(51:5),Fire,1
(52:0),Monster Spawner,60
(54:2),Chest,16
(54:3),Chest,15
(54:4),Chest,16
(54:5),Chest,16
(56:0),Diamond Ore,2512
(66:0),Rail,1076
(66:1),Rail,903
(66:2),Rail,3
(66:5),Rail,2
(66:6),Rail,1
(66:7),Rail,1
(73:0),Redstone Ore,19912
(75:1),Redstone Torch (off),46
(75:2),Redstone Torch (off),52
(75:3),Redstone Torch (off),57
(75:4),Redstone Torch (off),51
(82:0),Clay,23
(83:0),Sugar Cane,2
(85:0),Fence,6934
(91:4),Jack-o'-Lantern,5
(98:0),Stone Bricks,1162
(98:1),Mossy Stone Bricks,342
(98:2),Cracked Stone Bricks,236
(98:3),Circle Stone Bricks,54
(99:1),Huge Brown Mushroom (Northwest),7
(99:2),Huge Brown Mushroom (North),4
(99:3),Huge Brown Mushroom (Northeast),7
(99:4),Huge Brown Mushroom (West),4
(99:5),Huge Brown Mushroom (Top),1
(99:6),Huge Brown Mushroom (East),4
(99:7),Huge Brown Mushroom (Southwest),7
(99:8),Huge Brown Mushroom (South),4
(99:9),Huge Brown Mushroom (Southeast),7
(99:10),Huge Brown Mushroom (Stem),10
(99:11),Huge Brown Mushroom,10
(100:5),Huge Red Mushroom (Top),1
(100:10),Huge Red Mushroom (Stem),6
(100:11),Huge Red Mushroom,9
(106:1),Vines,322
(106:2),Vines,328
(106:4),Vines,305
(106:8),Vines,330
(161:0),Future Block!,2668518
(162:0),Future Block!,1251
(164:1),Future Block!,6
(164:2),Future Block!,7
(164:3),Future Block!,6
(164:4),Future Block!,7
(164:5),Future Block!,38
(164:6),Future Block!,7
(164:7),Future Block!,6
(164:8),Future Block!,7
(164:9),Future Block!,6
(164:10),Future Block!,8
(165:1),Future Block!,3
(165:2),Future Block!,4
(165:3),Future Block!,3
(165:4),Future Block!,4
(165:5),Future Block!,26
(165:6),Future Block!,4
(165:7),Future Block!,3
(165:8),Future Block!,4
(165:9),Future Block!,3
(165:10),Future Block!,9
(175:0),Future Block!,84
(175:1),Future Block!,72
(175:2),Future Block!,9614
(175:4),Future Block!,75
(175:5),Future Block!,38
(185:0),Future Block!,729
(185:8),Future Block!,1640
(186:0),Future Block!,476
(186:8),Future Block!,1252
(187:0),Future Block!,45
(187:8),Future Block!,153
(200:0),Future Block!,461
,,
,<Entities>,672
Bat,Bat,18
Bear,Bear,10
Chicken,Chicken,114
Cow,Cow,104
Creeper,Creeper,20
Enderman,Enderman,2
Endermite,Endermite,1
Item,Brown Mushroom,10
Item,Rail,1
Item,Seeds,1
Item,String,1
Item,Unknown Item 175:0,1
MinecartChest,MinecartChest,25
Pig,Pig,124
Rabbit,Rabbit,53
Sheep,Sheep,144
Silverfish,Silverfish,2
Skeleton,Skeleton,16
Slime,Slime,2
Spider,Spider,6
Squid,Squid,1
Zombie,Zombie,16
,,
,<TileEntities>,123
Chest,Chest,63
MobSpawner,MobSpawner,60
That's with default textures, though; I am pretty sure that with HD texture packs and ray tracing the memory requirements would increase well above 512 megabytes in whichever version of the game you loaded or tested. High-quality shaders and dynamic lighting are demanding even in Bedrock edition.
That's not getting into how much more would be required if, say, somebody decided to use a render distance greater than 16 chunks -
or, alternatively, 256 blocks from the player's position (16 chunks × 16 blocks = 256).
It's no secret that Minecraft uses a lot of memory to run; if it didn't, then 2 GB wouldn't be the default setting in Java edition.
And the more memory the game uses, the worse it'll perform, because as you said the CPU has to do garbage collection to remove unnecessary or useless information. It only makes sense for the game to store what is required at the time, whether that be loaded chunks or player inventory, etc., which gets written to a hard drive or SSD so players keep their progress in the game.
I'm no expert, but I have learned that CPU usage goes down when loading from disk instead of generating new terrain;
that is the reason some people like to pregenerate their worlds, which I think is a waste of storage space unless you know for certain you'll be exploring that much of the world in your lifetime. It only makes sense to generate what's needed; adding additional writes to an SSD is not only wasteful, it can shorten the life expectancy of the drive if that data constantly gets overwritten. Pregenerated worlds aren't even officially allowed on Bedrock edition.
I do wish that bedrock edition had a world border system though.
I would like to use one to prevent the save file from getting excessively large.
I'd like to limit mine to 80,000 x 80,000 blocks, including the other dimensions, to stop the world save file from exceeding the capacity of the 500 GB SSD on my server.
And this claim is invalid because "default" settings include 16x textures, no shaders (which Java doesn't even support without mods), and a render distance of 12 chunks, which is only about half the area of 16:
Fun fact: the default render distance has always been 12 chunks, except that old versions called it "Far" (for a time I thought it was 16, but the source, for both 1.6 and older versions, limits the width of the visible area to 400 blocks, or 25 chunks - the same width a render distance of 12 gives: 12 * 2 + 1).
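The arithmetic here is easy to check; a tiny self-contained snippet (not game code) for the visible-width formula:

```java
public class RenderDistanceMath {
    // Width of the visible square in chunks: the player's chunk plus the
    // render distance on each side.
    public static int widthChunks(int renderDistance) {
        return renderDistance * 2 + 1;
    }

    // The same width in blocks (16 blocks per chunk).
    public static int widthBlocks(int renderDistance) {
        return widthChunks(renderDistance) * 16;
    }

    public static void main(String[] args) {
        System.out.println(widthChunks(12)); // 25 chunks
        System.out.println(widthBlocks(12)); // 400 blocks, the old "Far" cap
    }
}
```

A render distance of 16 would instead give a 33-chunk, 528-block width, which is why it exceeds the old 400-block limit.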
Even if I added the ability to set it higher, it would still have no effect on the normal requirements, which are incredibly low by modern standards. As I said before, I've added hundreds, if not over a thousand, features, yet the game is no more demanding than vanilla 1.6.4, a very old and lightweight version which would run on 20-year-old computers - and this comparison favors vanilla, because it is loading less than half as many chunks, yet somehow uses just as much memory and gets less FPS:
Also, my computer is hardly new at all - in fact, the CPU came out 11 years ago and I can see at least double, if not more, the performance on the latest hardware, even if not top-end:
https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)
Launched April 29, 2012; 10 years ago
This is all you needed to run 1.6, according to Mojang themselves - and literally the only addition since then that has impacted resource usage is the deeper underground (this is also why Amplified says/said it needs a "beefy computer", yet my modded "extreme mountains" biomes surely have to be more extreme, even if only locally - and they are large enough to fill the entire loaded area):
This is a comparison of the recommended CPUs to what I had in my old computer - which outperformed them by a factor of about 2 and even single-thread performance was significantly better despite running at only 2.2 GHz (they probably did just mean "Athlon 64" and not "Athlon 64 X2" given the difference in performance):
https://www.cpubenchmark.net/compare/Intel-Pentium-D-805-vs-AMD-Athlon-64-4000-vs-AMD-Athlon-64-X2-Dual-Core-4200/1125vs73vs79
I also had no issues with 3 GB of RAM, with about 1 GB still free while the game was running. (I did have some memory issues due to having a 32-bit OS, which limited per-process usage to about 1.5 GB, but that was resolved by allocating less memory; as I've shown before, 512 MB is more than enough, and back then Optifine actually recommended allocating even less, just 350 MB. I don't know who started it, but the "more RAM = more FPS" myth is just that.)
Again, as I've said countless times (I'm getting really tired and frustrated), the issues with newer versions are due to unimaginably bad coding practices - sp614x is absolutely correct when they claim that 90% of the memory usage / garbage creation is completely unnecessary (even if the game only created 10x as much memory churn, the JVM would still need more memory so the garbage collector doesn't have to run as often):
(though I can't imagine how it can be "easier" for the developers when you have literally 10x as much code complexity as well)
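The memory-churn point can be shown with a generic Java pattern (not actual Minecraft code): allocating a fresh object per block visited creates huge amounts of short-lived garbage, while reusing one mutable object creates essentially none - the same result with a fraction of the GC pressure:

```java
// Generic illustration of allocation churn, not actual Minecraft code.
public class ChurnSketch {
    // A reusable mutable position: one allocation total.
    public static final class MutablePos {
        public int x, y, z;
        public MutablePos set(int x, int y, int z) {
            this.x = x; this.y = y; this.z = z;
            return this;
        }
    }

    // Visits n positions while reusing a single object; a naive version would
    // instead do "new Pos(i, i + 1, i + 2)" every iteration, creating n
    // short-lived objects for the garbage collector to clean up.
    public static long sumReused(int n) {
        MutablePos p = new MutablePos();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            p.set(i, i + 1, i + 2);
            sum += p.x + p.y + p.z;
        }
        return sum;
    }
}
```

In hot loops that touch millions of blocks per second, this kind of reuse is exactly the difference between constant GC pauses and smooth frame times.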
Also, imagine a render distance of 1024 chunks - no, not blocks, chunks:
https://www.reddit.com/r/Minecraft/comments/vwwbge/minecraft_with_1024_chunk_render_distance/
Over 4 million chunks, over a billion blocks per layer... such are the incredible feats the game could perform if properly optimized and using modern rendering methods like level of detail, which is the most important factor here. Past some distance even an entire chunk is too small to make out, so why render it at full detail? Multiple chunks could be rendered as a simple cube with a single blended color, not even an actual texture - this is exactly what most games do, but not Minecraft. It is otherwise impossible to render that much on any hardware, possibly requiring terabytes of VRAM. If you watch the video you can see the detail increase as they get closer.
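A back-of-the-envelope check of the numbers, plus a sketch of how a level-of-detail pick might look (the distance thresholds and merge factors below are invented for illustration, not from any real renderer):

```java
public class LodSketch {
    // Chunks inside a square render distance: (2r + 1) squared.
    public static long chunkCount(int renderDistance) {
        long width = renderDistance * 2L + 1;
        return width * width;
    }

    // Illustrative LOD pick: merge more blocks per rendered cell with distance.
    // Thresholds are invented for the example.
    public static int blocksPerCell(double distChunks) {
        if (distChunks < 8)   return 1;  // full per-block detail near the player
        if (distChunks < 32)  return 2;  // 2x2x2 blocks merged per cell
        if (distChunks < 128) return 4;  // 4x4x4 blocks merged
        return 16;                       // 16-block spans as single colored cells
    }

    public static void main(String[] args) {
        System.out.println(chunkCount(1024)); // prints 4198401
    }
}
```

Merging distant geometry this way is what makes multi-thousand-chunk views even thinkable: the triangle count stays roughly constant per unit of screen space instead of exploding with distance.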
I said the memory requirements increase if texture packs and shaders are used, not with the default 16x textures, which are as you mentioned.
If you use the default settings - which implies no mods, just vanilla Minecraft - then the memory requirements should not be very high at all, but unfortunately, because of the poor optimization of more recent versions, they can be.
A 1024-chunk render distance, if it could be achieved in the vanilla game - which has nothing to do with shaders or texture packs - with minimal impact on performance, would be an incredible feat, I agree, but it hasn't yet been done in vanilla Minecraft; the maximum officially supported render distance in 64-bit Java is 32 chunks without mods. https://minecraft.fandom.com/wiki/Options
I said before that I would stay with a limit of 64 chunks render distance on my worlds if I could do it without a major performance hit. As Princess_Garnet pointed out before, seeing an excessive number of biome transitions becomes a problem when you crank the render distance up too high. Even if performance weren't the issue, I could be satisfied with 64 chunks, which is still quite a lot of blocks from your position - 1024 blocks; with this I have been able to see End Cities on the outer islands of the End without even having to bridge the void first.
You've got a good point about not rendering distant chunks at full detail; that would be a great way to bring the excessive hardware usage down: reduce the rendering/texture detail of objects furthest from the player. Reducing the number of entities being simulated or ticked in regions far from the player would help cut CPU load as well - not just the render distance, but the ticking radius, which affects redstone, mob AI, crop growth and so on. All of these things should be cut back when the player is nowhere near them.
Does a player really need a fully automated redstone contraption to work 128 blocks away? Keep in mind Bedrock Edition supports a simulation distance of up to 12 chunks, which is 192 blocks. Unless it's something like a minecart with chest, which may in some circumstances be useful for sending items to other players on multiplayer servers, the amount of processing redstone is allowed to create on any given world should be cut down.
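A simulation-distance check like the one described above boils down to comparing chunk coordinates. Here's a hedged sketch - the class and method names are invented, and the square (Chebyshev) radius is an assumption about how the ticking area is shaped - using the 12-chunk Bedrock figure from the post:

```java
// Invented illustration, not actual game code: is a block position inside
// a player's simulation distance, so its redstone/entities get ticked?
public class SimDistance {
    static boolean isTicked(int blockX, int blockZ,
                            int playerX, int playerZ, int simChunks) {
        // Convert block coords to chunk coords (16 blocks per chunk),
        // then take the Chebyshev distance: a square radius around the player.
        int dx = Math.abs((blockX >> 4) - (playerX >> 4));
        int dz = Math.abs((blockZ >> 4) - (playerZ >> 4));
        return Math.max(dx, dz) <= simChunks;
    }

    public static void main(String[] args) {
        // A contraption 128 blocks (8 chunks) away is still ticked at a
        // 12-chunk simulation distance...
        System.out.println(isTicked(128, 0, 0, 0, 12)); // true
        // ...but one 208 blocks (13 chunks) away is not.
        System.out.println(isTicked(208, 0, 0, 0, 12)); // false
    }
}
```

Lowering `simChunks` is exactly the kind of knob that trades away far-off farm uptime for CPU headroom.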
This is because the cause of that is ultimately "not enough resolution to resolve the final result without side effects", for lack of a better phrase. It's what often causes the "moiré pattern" effect.
This same reason is a big part of why very far render distances hit diminishing returns for me. It's not just a performance thing, and it's not even just a "too many biome transitions visually close together" thing either. It's simply too much information in too little space: at higher distances, you're squeezing more and more of the world into a smaller and smaller portion of (relative) screen space. I feel like this is already the case at "just" a render distance of 32 chunks, and that 24 to 32 would be the ideal range for me. That's not to say higher has no benefit ever - with elytra, and at high altitudes especially, it may become more worthwhile. But on the ground? Underground? In the Nether? It's very much into that "wasteful" territory of diminished returns for me.
And in some cases (like in the End), I find it would even be preferable not to have a high render distance. I wouldn't want to see the outer parts of the End from the central island; it is supposed to feel isolated from the rest. A render distance that let me see the outer End from the central island would break the immersion of that part of the game for me (especially prior to defeating the dragon, but even after that). Same for spotting multiple End cities from so far away - I feel that takes away some of the immersion and some of the point of the game. But I realize that's my own preference, and some people might not mind those things.
In real life, the curvature of the planet and "biomes" not being so small make far view distances less appealing too. And I know, I know, games shouldn't mimic real life, but it's still a reason why I find far render distances unappealing after a point.