Wrong. If you didn't have a video card at all, the i7 would still be better because it would have better onboard graphics. Plus you can't possibly run Minecraft without a processor, so your argument is invalid.
Holy crap, what are you actually trying to say right now? When it comes to Minecraft, the video card is the most important part. Sure, the i7 would obviously be better with no video card, but with no video card at all both setups will run like absolute crap. I can't tell if you have reading problems you need to see a doctor about, but you are just saying stuff with no idea what you're talking about.
I have a laptop. Back in Minecraft 1.2.5 it ran Minecraft and a 6-player server; today it can't even run Minecraft.
All components are important for Minecraft, although you can pair a weak CPU with a mid-range GPU and perform about the same as a great CPU with a great GPU. In the end, all that matters is that the game runs at 60 FPS.
Proud member of the MCF AWA war of '13! If someone suggests Alienware or Cyberpower, wait for a custom-built list from someone who knows their stuff. Meh Rig
The CPU really only does the chunk loading and the AI of the mobs, and also updates a single block per tick.
I feel like I just read an 8 year old trying to explain how nuclear fusion works.
Fun fact: that's not even remotely all the CPU does; the CPU does practically everything not related to visuals. It plays music and sound, it does the lighting updates, it does the AI updates, it ticks every live entity, and it interfaces with the hard drive to save and load chunks. It also handles input and networking; pretty much the only thing it doesn't do is rendering.
I don't really see what this has to do with anything anyway; the CPU does all of those things in every game that uses them. That doesn't determine whether a game is CPU or GPU bound; that depends on how heavily each part is utilized.
The RAM (which is another common misconception) only stores each chunk and the mobs in that chunk. What's more, as soon as you move out of render distance of a chunk, the chunk is unloaded automatically to free up RAM. Therefore, vanilla Minecraft on far render distance doesn't really even need 512 MB allocated to it.
It holds all data used during runtime, but yes, that's not very much. It used to require a lot more because there were bugs duplicating loaded data like textures.
The CPU and GPU do completely different tasks and are laid out as such. In terms of raw processing power a CPU completely destroys a GPU, but that's because a GPU is a dedicated processor designed to do one type of task very quickly. Saying it has the highest workload is a joke; it's completely situation dependent.
There are two options for it. You can have Advanced OpenGL off, in which case it has to render each individual block in render distance from the top of the height limit down to the void, which, once you think about it, is a huge number of pixels. You can also have Advanced OpenGL on, in which case it only renders the blocks that you can see and none that are blocked, but then it has to do massive calculations to determine, for every block within render distance, whether it is blocked or not. Think about that for a second.
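To make that trade-off concrete, here's a rough toy sketch (plain Java, nothing to do with Minecraft's actual renderer): it counts how many blocks in a random 64³ volume are fully enclosed by opaque neighbours and could be skipped, and times how long that visibility pass itself takes. Real occlusion culling is far more involved, but it shows that deciding what not to draw costs CPU time of its own.

```java
import java.util.Random;

public class CullingToy {
    static final int SIZE = 64;
    static boolean[][][] opaque = new boolean[SIZE][SIZE][SIZE];

    // A block with opaque neighbours on all six sides can never be seen.
    static boolean isHidden(int x, int y, int z) {
        if (x == 0 || y == 0 || z == 0 || x == SIZE - 1 || y == SIZE - 1 || z == SIZE - 1)
            return false; // edge blocks are always potentially visible
        return opaque[x - 1][y][z] && opaque[x + 1][y][z]
            && opaque[x][y - 1][z] && opaque[x][y + 1][z]
            && opaque[x][y][z - 1] && opaque[x][y][z + 1];
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int x = 0; x < SIZE; x++)
            for (int y = 0; y < SIZE; y++)
                for (int z = 0; z < SIZE; z++)
                    opaque[x][y][z] = rng.nextDouble() < 0.9; // mostly solid "terrain"

        long start = System.nanoTime();
        int hidden = 0;
        for (int x = 0; x < SIZE; x++)
            for (int y = 0; y < SIZE; y++)
                for (int z = 0; z < SIZE; z++)
                    if (isHidden(x, y, z)) hidden++;
        long micros = (System.nanoTime() - start) / 1_000;

        System.out.println(hidden + " of " + (SIZE * SIZE * SIZE)
                + " blocks could be skipped; the visibility pass itself took " + micros + " microseconds");
    }
}
```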
I'm glad you seem to think you know so much about occlusion culling; on my video card it actually makes performance worse.
If that's not enough, then tell me why my friend's build with an i3-2120 and a 7970 GHz Edition maxes out the video card but only uses 62% of the one CPU thread it's running on.
I don't get why people see something like 300 FPS and think that's a normal comparison of how a video card and CPU interact. You seem to be under this silly false impression that a video card is going to render a certain number of frames and just chill out, and that if it's running at full throttle then it must not be able to keep up. Unless you cap it, it'll happily render 30k frames a second even on a Pong game.
It's not that it can't keep up; it's always trying to render more frames, while the game logic ticks only happen as often as real time passes.
Let me reword what I said: the only INTENSIVE tasks the CPU does are AI, chunk loading, and ticks on blocks and mobs. Input, networking, sound, and pretty much everything else use so little CPU power that it really doesn't strengthen your argument to include them.
Except you're still wrong: lighting calculations are one of the bigger slowdowns in Minecraft. I don't think you understand how simplistic the AI in Minecraft is, and on top of that I wouldn't even remotely call chunk loading an intense CPU process; the main reason it takes so long is... like usual, the hard drive.
Fun fact: Loading and playing sounds in Java takes 20 lines of code. Defining each keyboard key event takes 5 lines of code. Networking only takes several hundred lines of code. Now, when you compare that to Minecraft itself, which is composed of probably several hundred thousand lines of code...I think my point explains itself.
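For what it's worth, here's roughly what "playing a sound in Java" looks like at the application level (a minimal sketch; `click.wav` is just a made-up file name). It really is only a handful of lines, but only because javax.sound.sampled does the actual decoding work behind those calls.

```java
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;
import java.io.File;

public class PlayOnce {
    public static void main(String[] args) throws Exception {
        File wav = new File("click.wav");                  // hypothetical sound file
        AudioInputStream in = AudioSystem.getAudioInputStream(wav);
        Clip clip = AudioSystem.getClip();
        clip.open(in);          // the decoding happens inside the sound library
        clip.start();           // playback runs on the library's own thread
        Thread.sleep(clip.getMicrosecondLength() / 1000);  // wait for it to finish
        clip.close();
    }
}
```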
I don't know whether to consider what you just said completely ignorant or just completely stupid.
Let's start at the beginning: first off, lines of code are not indicative of performance. There's an old programmers' saying that 95% of the work is done in 1% of the code (or some variation of those numbers). The fact that you don't know that makes it very clear to me that you don't actually understand what's going on behind the scenes.
Second, if we were to go off lines of code, Minecraft has a microscopic amount of code compared to some of the libraries it calls. I don't think you understand how much code is actually involved in decoding and playing a sound or dealing with a packet. Most of that functionality is hidden away in libraries, and yet the computer still has to execute every single line of that code.
Thirdly, most of Minecraft's code is simple instructions, like most programs. Programs are quite literally directions on how to do things, and unless the code is specified elsewhere you have to write every single bit and bob yourself. A line can be as simple as one that takes a single CPU cycle to set a variable to a certain number, or it can be a loop that draws every object in the game through abstraction.
Honestly I find it rather insulting that you try to act like you know so much when you clearly don't understand the sheer amount of code behind those "3 or 4 lines to do something." Things like OpenGL or DirectX are massive libraries sitting on drivers with a ton of hardware-specific optimization; code is counted by what it does, not how many lines it does it in.
I would have thought it quite obvious. I'm comparing the processing load of Minecraft to the processing load that the CPU can handle.
You do realize that the CPU is running your entire computer, right? It's handling everything from decoding, playing, and mixing sound in multiple windows, to every single input from your mouse as you move it, to networking, to the entire operating system with all its drivers and context switching. Your CPU is doing more work in a second than you probably will in your entire life. For a CPU to be bogged down to its max by a game says something about the massive number of calculations it is doing. On top of that, when you consider that processor power almost doubles every year or two, you're talking about a difference of hundreds of billions of instructions of capability over a short period of time.
Minecraft isn't even built to be large scale and it still has a ton of processing to do simply because it's a game, a sandbox game that has to operate and tick a lot of things in a dynamic environment. It's no different from the video card having too much to render to maintain 60 FPS.
A quick Google search yielded nothing about "data like textures". And if it used to happen, what's the point of saying it now?
To point out why it used to use so much memory? Stop being defensive; why did you even bring it up if you're just going to brush it under the rug when it sounds like I have half a clue what I'm talking about? If it doesn't matter, then either say it doesn't or don't bring it up at all. If you want me to be more specific, it was duplicating textures every time it bound them for OpenGL rendering, which is why using large texture packs ended up taking gigabytes of memory.
I'll give you that what I said was horribly worded. I meant that the GPU does the most work relative to what it's able to do (still worded badly). A GPU is very often using all of its resources to render the graphics, whereas the CPU may only be using half of the core that Minecraft is running on. This does depend on the GPU and whether the CPU is too weak for the GPU, but most consumers don't have a CPU bottleneck.
Most people I've seen who have FPS problems in Minecraft have a CPU bottleneck, not a GPU bottleneck. If they don't, it's usually because both of them are terrible, so it's a mutual problem.
You realize the CPU is always doing something too, right? Your video card spends all its time trying to put things on your screen, and the CPU is always doing something, either related to your program or to other programs and the operating system. It's not like it sits idle and does nothing. To me it seems like you don't understand the relationship between a CPU and a GPU anyway; you will actually always gain more FPS by upgrading your CPU. The faster it finishes all its updating, the earlier the video card gets to begin rendering a frame. A bottleneck is simply a case where the CPU is taking an inordinate amount of time (most of a second) to process all the data, starving the video card of anything to do but wait for its chance to get up and start drawing.
And BTW, in terms of raw processing power, a GPU stomps on a CPU. I don't know where you got the idea that a CPU is stronger than a GPU when a mid-range GPU is capable of 1-2 TFLOPS whereas a high-end CPU is capable of 300-400 GFLOPS.
Yes, except a CPU does instruction decoding and quite a wide variety of operations compared to a GPU. If you knew what you were talking about, you'd know you're quoting floating point operations per second, and GPUs are designed heavily to do quick, simple floating point operations. A GPU also doesn't use the same architecture as a CPU at all: it has a ton of really small, simple processing cores, whereas a CPU uses large monolithic ones. In terms of per-core performance a CPU core isn't even in the same plane of existence as a GPU core; it's like comparing an aircraft carrier to a fleet of biplanes. There's just a hell of a lot of biplanes.
It does on mine too, and I think I already explained why. Having it off is simple and straightforward: just render a ton of blocks. Having it on allows it to render less, but either the CPU or the GPU needs to do many more calculations to determine which blocks are visible and only render those. On a high-end system, such as mine and yours (I assume), the GPU(s) is strong enough to render all of those blocks at a good framerate. The cost of doing those occlusion culling calculations is more than the benefit, so your framerate drops. That's why I've found that Advanced OpenGL is only good in systems where the GPU is a major bottleneck, such as systems using Intel HD graphics, so the CPU can take on much of the load and give less to the graphics card.
The computer I'm on right now is -not- high end; it's five years old, actually. It has to do with a combination of the drivers, the support on the particular video card, and how Minecraft actually utilizes occlusion culling. Without going into much detail, it usually sucks. In fact, for most people I've seen, "Advanced OpenGL" is more like "hurr de durr OpenGL."
By cap, what do you mean? If you mean VSync, that's irrelevant, as we're talking about framerates in the hundreds.
First off, I don't know what VSync has to do with hundreds of FPS. Secondly, VSync is just clamping your FPS to your monitor's refresh rate; you can cap the rendering using any amount of time. Some games do. People just tend to call it VSync for simplicity. In fact, if you change Minecraft's goofy-as-hell render settings, it basically puts various caps on your framerate.
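For anyone wondering what a software framerate cap even means here, a minimal sketch (generic Java, not Minecraft's code): the render loop just sleeps away whatever is left of each frame's time budget instead of immediately starting the next frame.

```java
public class FrameCap {
    public static void main(String[] args) throws InterruptedException {
        final long frameBudget = 1_000_000_000L / 60; // nanoseconds per frame at a 60 FPS cap
        int frames = 0;
        long second = System.nanoTime();

        while (true) {
            long frameStart = System.nanoTime();

            // renderFrame() would go here; with no cap, this loop just spins
            // as fast as the hardware allows.

            long spare = frameBudget - (System.nanoTime() - frameStart);
            if (spare > 0) Thread.sleep(spare / 1_000_000, (int) (spare % 1_000_000));

            frames++;
            if (System.nanoTime() - second >= 1_000_000_000L) {
                System.out.println(frames + " fps");
                frames = 0;
                second = System.nanoTime();
            }
        }
    }
}
```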
If you mean capping the clocks of the card, of course you are right. I know perfectly well that a GT 610 would be capable of running Crysis 3 at a million FPS if heat problems didn't exist and clocks weren't capped.
I wasn't talking about that, though that's broaching a complex topic. Heat is not the only reason a card can't operate at a certain frequency. It has to do linear operations in a lot of places, so it ends up waiting on all its processors to finish dealing with the pixels of each frame, and things like that. It's not a simplistic process. On top of that, clock speeds usually cause problems due to the lag time in signal propagation.
But I won't go into that; those are hardware problems that would take up half a page.
The problem is, they do, and that's what causes the caps on the card. By 100% usage, I simply meant that it was running 100% of what the card with those caps would allow. I didn't mean that the card is being completely used to 100%. I don't see what your point was.
If you don't mean that the card is constantly doing operations and thus running at 100%, compared to how you seem to think the CPU isn't... then what is it you mean? I don't understand.
Addressing the other part, about how the card tries to render more frames within the time of a single tick: let me tell you something else about how Minecraft works. The game ticks are set to loop at a constant rate of 20 ticks per second. On a machine with a slower CPU that can't handle that, the game doesn't run at that rate, because the instructions can't execute a full loop in a 20th of a second. In that case, of course, the weakest part is the CPU, and so upgrading the CPU would be the most important thing. However, a computer equipped with a decent CPU could run much more than 20 ticks per second, but the code locks it to 20. Because of that, a CPU that can perform the 20 ticks per second (which most mid-range CPUs can) is already enough. Which is why I say that the CPU is not important as long as it hits that level.
Now, take the graphics card. The graphics engine sets no limit on how many FPS the graphics card renders. Achieving a high FPS really only needs a fast graphics card, of course, and a CPU that can process the information for the graphics card and send it fast enough. Once again, that's a relatively low load for the CPU. Which is why I say the graphics card is under more load than the CPU. You basically said it yourself: the graphics card is trying to render more frames, while the CPU is not doing much once it hits the point where it can perform 20 ticks per second.
As much as I love you telling me how you think things work, I should tell you that you're using a massive simplification. Most game loops don't block while waiting for time to pass; even if your CPU can make those 20 ticks a second, it simply loops indefinitely, doing one draw call at a time and stopping to process a tick when enough time has passed.
I.e., the video card starts rendering the next frame as soon as the CPU finishes a tick. If that tick takes 5% of a second, that means for 5% of that second the GPU is sitting there waiting patiently (in theory anyway; in reality it's probably still drawing something). If the CPU takes 1% of a second, the GPU can start rendering that much earlier, meaning you'll get more FPS.
A CPU "bottlenecking" a card is a bit of a misnomer, they're always "bottlenecking" each other. You can still fail to render 60 FPS due to the CPU being slow even if it met that time quota of 20 times a second. The relationship is directly proportional to how long each thing takes to do its job.
Alright, I forgot lighting calculations. I'll give you that much. Even so, I don't think that's much of a load on the CPU.
And it turns out I made a mistake in my first post: 3 blocks are updated per tick, not 1.
Well, they are. Do you really think doing AI for something like 30 mobs is more intense than calculating light for thousands of blocks? I'm not sure why I even have to explain this; you can profile the code and see where it spends most of its time.
Decoding a sound file does not take as much CPU power as you think, and neither does detecting keyboard events.
That depends on the type of sound; decoding certain compressed sounds, or things like video files, takes quite a bit of time. Enough to have choked early processors. Of course, these days, when processors can do billions of operations a second, they don't have much trouble keeping up. Performance reduction is additive; it takes a lot of things to cap your CPU.
While saying it's just several lines of code was ignorant, I admit, you implying that decoding sound files and detecting keyboard events is intensive on the CPU is just as ignorant.
Depends on what you mean by intensive. Most of the time spent on game processing is doing a simple task a couple thousand times, like lighting updates; decoding and playing a sound is much more performance hungry than a few of those updates, but certainly not even a fraction of doing all of them together.
I know that the number of lines a task is written in is not fully representative of the CPU power needed to do it. However, since most games run in a single large game loop, if one method in that loop has more lines of code of equal complexity than another, it will be more intensive than the other.
That's really not true; libraries hide so much code that a call that says one thing can mean doing something completely different. Most of the time, what slows games down is things like memory copying or accessing the hard drive; simple arithmetic operations are pretty fast. I remember watching a game dev conference speaker who does professional game profiling say he once improved the overall performance of a game by about 30% simply by changing how it dealt with a container object. Copying and throwing away objects, or copying them in an inefficient way, slowed the entire game down that much.
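A quick toy demonstration of that point (purely illustrative, nothing to do with whatever game that speaker profiled): the loop body "looks" like one line either way, but needlessly copying the container every pass is where the time goes.

```java
import java.util.ArrayList;
import java.util.List;

public class CopyCost {
    public static void main(String[] args) {
        List<Integer> entities = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) entities.add(i);

        long sum = 0;

        long t0 = System.nanoTime();
        for (int frame = 0; frame < 1_000; frame++) {
            List<Integer> copy = new ArrayList<>(entities); // fresh copy every "frame"
            for (int e : copy) sum += e;
        }
        long withCopy = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        for (int frame = 0; frame < 1_000; frame++) {
            for (int e : entities) sum += e;                // same work, iterated in place
        }
        long inPlace = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("with copy: " + withCopy + " ms, in place: " + inPlace
                + " ms (checksum " + sum + ")");
    }
}
```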
OpenGL and DirectX are both ideally GPU-accelerated. Even if the CPU does spend a fair amount of processing power on them, the majority of the load goes to the GPU.
It's still code; a GPU is a microprocessor and uses code to accomplish all its operations, just like anything else in electronics that has a controller.
I completely fail to see your point here. I was comparing the load of Minecraft and just Minecraft, and you brought in the operating system.
In fact, they shouldn't even be directly related to each other. Most of the operating system tasks would be running on 1 core, while Minecraft would be running on another.
It doesn't usually work like that; threads are constantly context switching, and Minecraft likely gets picked up by a few different cores unless the OS decides it should do its work exclusively on one (something you can change).
Cores are also a portion of the CPU, so I don't see where my point is lost. Do you really think one core is doing nothing but running Minecraft? Hahah, no. There are probably tens of thousands of lines of background kernel processing running every second; it has to do scheduling and all those things regardless. My point was that there is more contributing to overloading a CPU core than just Minecraft, which I admit I went off on a tangent about a bit.
A lot of processing simply because it is a game? When I play Pong, does it have a lot to process simply because it is a game? For every game, there is a maximum amount of CPU processing power needed, not counting the amount needed to drive the graphics card. If the CPU has more processing power than is needed, then some of it will not be used.
It will be used on something else; CPU time isn't reserved, the CPU just does as much as it can in the time it has. A game like Pong has little to process and little to draw, so it is going to pump out thousands of frames a second.
"Used to use", not "use". I was talking about the present. You were talking about the past.
Except it doesn't USE that much memory, it RESERVES that much memory, something Java requires you to do. If you actually watch your memory usage while playing, it does not go that high now. I don't really get your point or how this relates to CPU/GPU time myself.
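If anyone wants to check the reserved-versus-used distinction for themselves, the JVM will tell you directly (a minimal sketch; run it with whatever -Xmx value you'd normally give Minecraft):

```java
public class HeapPeek {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (the -Xmx ceiling): " + rt.maxMemory()   / mb + " MB");
        System.out.println("heap currently reserved:     " + rt.totalMemory() / mb + " MB");
        System.out.println("heap actually in use:        "
                + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
    }
}
```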
The CPU won't change your framerate if your GPU can't render any more frames.
That's completely wrong and fundamentally flawed; it will always give you more frames. People test that all the time and even post it in benchmarks. However, in a case where the system is really bottlenecked, it is like I said: they might only get 2-3 more FPS from a CPU upgrade than from a video card upgrade, because the linear relationship means the CPU is only letting the GPU begin work a tiny bit earlier, given how fast the CPU already processes its workload.
Which is usually the case with Minecraft. Download a GPU monitoring utility such as GPU-Z and also open Task Manager's performance tab, and run them on a secondary screen while you play Minecraft on your first. You'll see that the GPU is constantly at 96-99% load, while the CPU is not nearly as high. To me it seems like you are simply either overestimating GPUs or underestimating CPUs, because in a typical system, a CPU can update the information that needs to be displayed on the frame much faster than a GPU can render that frame.
To me you still seem to be ignoring what I am saying and not understanding that the GPU is always going to be running at near 100% load unless you cap your framerate. The GPU has a set number of tasks to do, just like the CPU, but it will do them an -infinite- number of times until the CPU next has to process. It will ALWAYS be running at full capacity.
I thought you were talking about RAW processing power, not what it can do with that processing power. You contradicted yourself.
It's not really a contradiction to compare CPU cores to GPU cores, which is what I said: CPU cores are absurdly more powerful. GPUs get most of their overall FLOPS count from a lot of simple processors working in parallel.
As long as it's a high-end system, no matter when it's from, occlusion culling should suck. I've found that the degree to which occlusion culling raises or lowers your FPS is related to how your CPU compares to your GPU. If your GPU is well balanced with your CPU, occlusion tends to lower your framerate; if your GPU is underpowered for your CPU, such as an i7-3770K using onboard graphics, occlusion tends to raise it.
Found that where? A lot of games have occlusion culling built in because it's supposed to be a performance increase; it's debatable whether it works in Minecraft due to a lot of variables. I've never really seen many issues with it unless it's doing something like Minecraft probably is, where it's likely checking a ton of geometry and only removing a small portion of it, thus wasting processing time.
I meant that the card is operating at 100% of what it's allowed to run at. Since you brought the CPU into this, I'll try to answer to the best of my understanding of your question:
From my observations, the CPU usually hovers around 30-70% on the single core that Minecraft is running on. The usage depends on what CPU it is; a particularly crappy one will go to 100%, but a mid- to high-end one runs around 30-70%.
Fun fact: CPU cores don't sit idle. With Windows at least, they are always doing some background processing that the OS needs, tidying files up and so on; the task scheduler handles that. It will say your CPU core is at 2% if it is using 2% of its available time per second on tasks other than system tasks. Your GPU would also show itself not running at 100% if you capped your framerate, because it wouldn't be issued draw commands. In a game like Minecraft it will always keep issuing a new one, so again, your video card is going to be running at max constantly.
On the other hand, the graphics card seems to always be close to 100% usage regardless of the model, unless the CPU is actually too slow to run the card effectively. The thing that changes based on the card model isn't the usage but the framerate.
Processing and sending the draw calls isn't the most intensive task the CPU does; in fact, relative to some of the other things the CPU does, a draw call is actually unintensive.
The tick is locked at a maximum of 20 per second. By what you said in your second paragraph, shouldn't that mean the FPS is locked to 20? Instead, in between two ticks, both the graphics card and the CPU are still doing something: specifically, the CPU is processing what should happen between those two ticks, mainly minor things such as entity movement, and making draw calls for them. That's a small load compared to what happens in an actual tick. If you meant that in that 5% of a second the CPU finishes processing the information for the next frame, that makes more sense, but I would expect that to be more in the region of 1% of a second, not 5%.
It was just an example. The point is that the CPU can still affect the FPS by its speed even if it is making the 20 ticks a second. In fact, you can gain or lose frames even if it isn't meeting that number, because the CPU, while slower than it needs to be to maintain 20 ticks a second, may still be running faster than it would need to in order to maintain 5. Again, the relationship is linear: if you have a stronger video card it will render faster, and if you have a stronger CPU it will do ticks faster, thus giving the other more time to accomplish its task. Slow or not.
I really don't see how they can always be bottlenecking each other. From what I see, they are either balanced well, or one is bottlenecking the other. As for failing to hit 60 FPS because of the CPU, theoretically that's only seen when the CPU can barely hit 20 ticks per second and doesn't have the extra processing power to tell the graphics card what to draw in those 60 frames. Most modern CPUs are above that.
What I mean by bottlenecking is that they're always waiting on each other. The time they spend waiting isn't totally linear and changes based on how much processing each has to do, but in general you see some kind of performance relationship. In a real "bottleneck" situation the CPU is probably taking 10% of the time to process everything and the GPU is taking 90%; in that case you obviously should upgrade your video card for the biggest gains.
-snip- the main reason it takes so long is... like usual, the hard drive. -snip-
I don't think that's true. I've tried Minecraft on an SSD, a 5400 RPM HDD, three different 7200 RPM HDDs, and a RAM disk, and none of them made any difference to chunk loading times.
I would say the CPU and GPU are both important in Minecraft; it's kind of difficult to say which one makes a bigger difference. I would probably put the GPU slightly ahead of the CPU in terms of importance, though.
I upgraded from a 5670 to a 6870 and got a big gain in FPS in Minecraft. Then I upgraded from a 3.3 GHz Core 2 Duo to a stock 2500K and it made about the same jump in FPS. Kind of hard to tell which one was more important.
This is incredible. I made this post when I was just 12 years old, and have now stumbled across it after a late-night rabbit hole journey.
This was an incredible laugh, reading about my awful at-home school computer that melted because I forgot to plug the CPU fan back in. The CPU was in fact 1.06 GHz and the PSU was indeed a 150 W; it was a small form factor Dell OptiPlex that was archaic when I got it. I did eventually build a proper desktop when I was 15 and had a job, but I never did get my Windows XP back.
I am now happily married, and whenever my wife isn't looking I hop on Minecraft for a bit.
god my spelling was atroshu... atrocius, atro... bad, god my spelling was bad.
~Tundra i.e., Aquaman1125