Some of the discussions of quality are a bit confusing. A MOSFET is simply a transistor, and I find it odd that there is hardly ever discussion of the quality of the transistors in CPUs or GPUs or other integrated circuits, but suddenly with VRMs it's important. Additionally, they are rather basic components in the same sense as diodes, resistors, and capacitors. So focusing on VRMs and their MOSFET components when determining whether something is a good board seems to be looking at only one ingredient of an omelette. Given that this is extremely common, it's possible the manufacturers creating the boards are focusing on those components rather than on the others. They'll proudly tell you about their awesome MOSFETs, but maybe they kind of leave out how they bought all the resistors from a flea market and had them installed by a colour-blind technician.
Another issue is that there is a lot of reliance on the expertise of others, and it isn't always accurate; this can cause a lot of weird myths to spread over time about what components are good, which components are better, etc. So instead of just going "well, these boards have good VRMs because they have such-and-such MOSFETs", it seems ideal to equip the reader with the information they need to come to the determination that "oh, I see, because of such-and-such, the MOSFET performs its job better, and the VRM is a better unit as a result". This prevents attaching, say, brands to a MOSFET as an indicator of quality (I'll come back to that), while also hopefully providing the reader with some useful information.
In this case, we would need to know just what in the heck a Voltage Regulator Module is.
A VRM is a component for stepping voltages down for the CPU and other motherboard components. A VRM consists of a PWM controller, MOSFETs, power phases, chokes, and channels. Together these convert the input voltages from the power supply (12 V, 5 V, and 3.3 V) to (primarily) the lower voltages that the CPU uses (1.2 V, 1.5 V, etc.). Historically, older motherboards might also have had VRMs for stepping 5 V down to 3.3 V (before power supplies had a 3.3 V output), but that is not really important now.
So at this point we are left wondering why a VRM's quality might affect overclocking. Consider, for example, that the VRM is going to be used for stepping voltages down anyway; CPUs don't run on 3.3 V, so it's going to play a role either way. The main concern for overclocking is the amount the VRM is going to heat up: the higher the output voltage and current, the higher the heat output, and the more you overclock a CPU, the more you are going to have to increase the CPU voltage. In fact, I'd go so far as to say that heat is practically the only concern when it comes to VRMs and overclocking.
Now, on to the MOSFETs. What role do they play? They are active transistors that switch the 12 V from the power supply down to the voltage required by the CPU. They are considered the most crucial components for several reasons:
-They are the source of power for the CPU
-They generate the most heat.
-They are typically the most fragile within a VRM.
Following this, a higher-quality MOSFET will produce less heat and output cleaner power, which translates to a longer life for all the components involved.
But again, the MOSFET itself is only one piece of this equation. A VRM also has, as mentioned, power phases. The best analogy for power phases is that they are the "power supply", and they are called that because the power gets phased through them. More power phases essentially means less load on the power phases themselves, in the same sense that more MOSFETs means less load on each one. Less load means less stress on the components, which means less heat, which means less line noise, which means more stable overclocks. (And it's not like a stock CPU won't benefit either. Bear in mind that these are internal components: a poor VRM can easily fry a CPU, because the VRM is on the other side of the power supply's protection. The power supply is designed to shut off when it starts delivering poor-quality power, but the VRM is supposed to always be providing clean power to the CPU and the motherboard components that use lower voltages, as long as the power supply is doing its job. If it screws up, the CPU has no safeword. The VRM just sends it bad power and asks (VRMs always yell; it's partly the result of being fed 12 V power, same with the deviancy. They really ought to be pitied):
"Yeah HOW DO YOU LIKE THAT, YEAH YOU LIKE IT THAT WAY, DON'T YOU"
"Stop you're hurting me"
"YEAH, THAT'S GOOD. TAKE THAT POWER YOU ARE A BAD CPU AREN'T YOU. "
"I'm Going to be if you keep giving me that power"
"I'M NOT A BAD VRM I'M JUST DESIGNED THAT WAY. NO MATTER WHAT I TRY, I CAN'T HELP BUT HURT THOSE CLOSE TO ME"
Naturally the result of this is that the Memory never forgets and is traumatized for life, or something.
So basically, one shouldn't completely disregard the effect of a well-made VRM in their machine even if they aren't overclocking.
Returning to the VRM, we also have the pulse-width modulation (PWM) controller. This generates the switching signal that the VRM uses to stabilize and clean up the power going through it. The PWM controller sits on the motherboard, either as part of the chipset or as a separate chip at arm's length from the VRM (a wise choice). The PWM controller has a critical function as well, since its signal effectively controls what the VRM does.
This does bring us to another point: even though the VRMs are important, it's not like motherboards have redundant capacitors. Capacitors are another thing I think should be well considered. The issue with capacitors is that, with the exception of tantalums, the can types tend to burst over time, especially lower-quality ones. That wouldn't be so bad if the capacitors didn't basically spunk all over the inside of the system when they burst. I think I may be going too far with my earlier analogy. Another issue is that by focusing on one thing, the manufacturers can focus on that one thing to the detriment of other components, too.
And I think that may be a possible problem with those sorts of expensive boards. They make a lot of noise about "OMG, VRM patented design using 30 power phases arranged using Golden Helix technology with a metal diode contractor component!" or whatever, but they might be less forthcoming with "also, our can capacitors use low-quality parts and may burst within 5 years. Patent pending."
BC_Programming posted a message on "Stop hating on Maximus boards (and other expensive motherboards)" (Posted in: Computer Science and Technology)
BC_Programming posted a message on "Free software : How understand" (Posted in: Hardware & Software Support)
What a graceful man...
Some other wonderful quotes from this brilliant Free Software pioneer:
A parrot once had sex with me. I did not recognize the act as sex until it was explained to me afterward, but being stroked by his soft feathers was so pleasurable that I yearn for another chance
As a matter of principle, I refuse to own a tie.
I have several free web browsers on my laptop, but I generally do not look at web sites from my own machine, aside from a few sites operated for or by the GNU Project, FSF or me. I fetch web pages from other sites by sending mail to a program that fetches them, much like wget, and then mails them back to me.
A year later, the same TI group manager approached me about writing some documentation about the internals of the Lisp machine system; he invited me to dinner. I required him to demonstrate nasal sex in public with a plant as a condition of meeting him. Fortunately, the restaurant provided suitable flowers. He tried it, and even liked it! Which goes to show that no one is incapable of personal growth.
(eg, "I know nothing about modern computing and still do it the same way that I did at MIT before I dropped out")
Not to mention his Prima-Donna speaking rules.
I probably know more about it than you do.
The Free Software Movement, launched by Richard Matthew Stallman in the '80s, is:
No. It's called the Free Software Foundation. It was launched by Stallman because he wanted to keep siphoning off the software skills of people like Richard Greenblatt and Peter Samson, and because he wants to keep living in those final days of the MIT Model Railroad Club before the industry surrounding it was born, when he could freely take other people's code and ideas because sharing was encouraged.
-A way to develop software: no more or less effective, no more or less lucrative, but more user-friendly.
This is false. Open Source, and in particular GPL/FSF-compatible development, is burdened by two things. The first is the figurehead doofus, Richard, who doesn't use any browsers and does all his networking with email, reading websites using wget and emacs, because he is just that devoted to living in the past. Most importantly, the biggest problem for FSF-compatible development is that it puts the license on the code above that code's function or usability. No matter how functionally superior or user-friendly a piece of software is, if it doesn't have an "FSF-approved" license, it must be discarded. This is why the FSF's "visitors" to the Windows 8 launch were giving people the Linux distribution Trisquel: only the most niche distributions adhere to the FSF's ridiculous dictations and get the approval of the almighty neckbeard-in-chief.
If software is free, that does not mean it costs zero dollars (or zero pounds, or zero euros, etc.)
Yes and no. You can charge for compiled binaries; however, you must make the source code available to anybody you distribute those binaries to.
0 - Use the software without restrictions.
Except those imposed by the GPL.
Learn useful things by reading the source code.
Which, of course, nobody actually does, which is why Apache had backdoors for nearly 6 years before anybody noticed- not to mention the SSL flaw introduced by being perhaps too trusting of the "Many Eyes" approach.
These 4 freedoms are the basis of free software. A simple-to-implement recipe (especially with free software licenses) with no disadvantages and one big advantage: the benefit of the community.
You'd think so, but the FSF seems to be a political platform to push Richard Stallman's personal agendas to the detriment of the community it claims to represent. The Free Software community would be much better off without his support of pedophilia and his ill-informed ramblings being posted as a representation of the whole.
There is so much free software in our lives (and especially in a geek's life). Some examples:
Don't fool yourself. I've worked with computers for over a decade, and while I do like most of the things that are made available and the Open Source community in general, I find the FSF, the GPL, and the community surrounding the FSF and its politics to be entirely toxic to the process of software development.
The GNU/Linux system, frequently but mistakenly called Linux; Linux is the name of the kernel.
Actually, less than 10% of a typical Linux distribution is comprised of FSF GNU components. It's notable that if something is under the GPL, that does not make it GNU. The only components of a typical distribution that are GNU are the GNU coreutils and GCC; both of these are quite easily replaced and, more importantly, make up only a small minority of the codebase. Insisting that the product name credit that contribution is a very self-important attitude, considering its minority share and relative non-importance.
In fact, there was a percentage count done on the entire codebase of a typical Linux distribution. The result? 8% of the entire distribution was GNU. Dictating that it should be called GNU/Linux is thus based on zero facts; to claim that Linux is only a tiny component of the entire system is also to ignore the fact that the kernel code is actually a far more significant portion of the typical distribution in question. It's Linux. If you want to call a system GNU in any capacity, wait for the precious HURD. Without the kernel there wouldn't be a GNU anyway, so to now dictate that the project (Linux) that basically made GNU viable should include GNU in its name, despite the fact that the large majority of distributions actually contain very little of the GNU project (having found much better alternatives, such as Clang in some cases), is stupid at best.
although quite daunting with its command line interface, it has a larger market share than Windows (Windoze, as we call it in the free software community).
None of what you said makes sense. The following comparisons do not compare to "Windows" anyway.
MySQL, the database system often used
And which is eclipsed in overall usage by Oracle and SQL Server...
Apache Server, a server software frequently used
And it has been losing market share to IIS for the last decade, thanks to the fact that it had backdoors and completely broken SSL certificate verification for a few years, completely shaking the idea that Free Software is somehow more secure. You still have to trust the competence of the developers.
-PHP, Perl, Ruby, Python... Free languages are widely used.
Python isn't GPL; it uses its own license that happens to be GPL-compatible. Ruby is under the BSD license, and PHP uses a BSD-style license. BSD != GPL, and neither is "GNU". The only thing listed that is even partly under the GPL is Perl (which is dual-licensed).
In fact, technically, you could add all Microsoft's .NET compilers to that list. All Roslyn Compilers are now Open Source under the Apache License.
-GLAMP (GNU/Linux, Apache, MySQL, PHP), a combination of software often used for servers. It includes a complete operating system (GNU/Linux), server software (Apache), a server programming language (PHP), and a database system (MySQL).
LAMP's popularity is primarily due to its ease of access. In particular, a webhost is far more likely to use a shared hosting system with a LAMP setup than with IIS or ASP.NET, simply because the latter has licensing considerations which apply to such shared environments. It is not necessarily functionally superior, which is pretty obvious given that PHP is such a crappy programming language (not surprising given it was never designed to be one).
The BSD-based systems.
BSD based systems are NOT GNU. They are Not 'Free Software' by the terms on which you are claiming. BSD adherents choose BSD because they find the GPL to restrict their freedom.
-The entire Mozilla suite: Firefox, Thunderbird, Sunbird, Seamonkey...
Which are all under the GPL-compatible MPL, not the GPL. And again, something being GPL'd does not mean it is a component of "GNU". That's downright stupid and tries to assign credit where it does not belong.
The list is very, very, very long.
Yes, and the list is doctored. GPL!=GNU. If I release something under the GPL, that project does not become a GNU project. GNU projects would be projects managed by the Free Software Foundation.
All the ideas that sharing deprives businesses of their money are false.
Are they? All the most successful Open Source projects have financial backing from a company. Firefox has Mozilla, which is backed almost entirely by agreements with Google. Ubuntu has Canonical, and RHEL has Red Hat.
This belief exists because proprietary software companies believe that all users of copied versions would have bought the official version,
The FSF and GPL have absolutely nothing to do with piracy, and they do not condone it in any way. They support the use of Free alternatives, and no proprietary company 'believes' the things you claim they do.
while most users use copied versions precisely because:
-They are minors and their parents do not want to buy it
-They want to test
-They do not have the money
-etc.
This has nothing to do with Free software. That is piracy.
Free software is not a development methodology; it is a philosophy and a political movement.
Agreed. This is its biggest problem. They don't actually do anything; they just tell others what they should do.
Which license to choose?
BSD/MIT. If you want to open-source something, use BSD/MIT, IMO, because the GPL ironically restricts the freedom with which the code can be used.
Some also speak of open source software. The definition is very similar but differs on some points. In general, the open source movement emphasizes the practical aspect rather than the philosophical aspect.
Open Source is "Free/Libre Software" with the self-important, anti-social political garbage spewing from a skin-eating, pedophilia-supporting douchebag taken out and replaced with people who actually want to make software better. GNU and the FSF don't want to make software better. Hell, Richard doesn't even use modern software; he uses emacs and wget to read websites, for goodness' sake. If anybody is out of the loop as to what users actually need, he's it.
They say for example that the software will be more flexible and more efficient. This is true, but it's not the priority. The priority of the free software movement is to deny proprietary software.
Which is why it's a failure. It's an approach based on a negative. To the FSF, it doesn't matter if a piece of software is easier to use, more friendly, and all-around better for the user; if it's not under a license that they approve, it is considered unethical and implied to be evil. Software should be judged on its own individual merits, not on the license under which its source code is released.
-Do not use proprietary software
No. In fact, I help write it. Companies pay for the software I help write, and my company pays me. and I make a living.
There is no Open Source alternative for the software I write, nor a variety of other business software. Why? because it's not fun to write. Kernels aren't fun to write, which is why it took GNU something like 30 years to get a HURD kernel.
Create posts like this one to encourage people
Right, because regurgitating a paraphrase of the classic Stallman copypasta is totally going to get you support.
-Why not be volunteer to the FSF?
Because I use deodorant. kind of disqualifies me.
Replying rules
You cannot impose rules on who replies to your thread. Besides, doing so is simply unethical! See I can make silly generalizations too.
-At the same time, ask to [email protected] too. Volunteers from the FSF will reply.
Unless of course they are at a Magic:The Gathering tournament.
-Don't be stupid : this is free as in freedom.
-Don't say "Linux" if you mean the GNU/Linux operating system. Linux is just the kernel; an important part of the system, but the other parts are important too.
"This is about freedom! Oh, you're not allowed to call it Linux." In other words, it's about the freedom to be told what is and is not free and what I should and should not use. Free Software would advance much further without the deadweight of the fat, sloppy, nostalgic nerd Stallman at the helm. As it is, I avoid the GPL at every possible opportunity, because I don't want to even remotely associate myself with a person who supports pedophilia, thinks contributing to emacs is the best thing anybody can do in the world, hopes that people screw his body after he's dead, and has the social awareness of a vampire's butthole.
BC_Programming posted a message on "Programming idea" (Posted in: Hardware & Software Support)
This is already done. It's called a lookup table.
E.g., sin and cos are more 'expensive' trigonometric functions, so many games or programs that make heavy use of the two functions but don't need perfect accuracy create a lookup table and fill it with the results of a full circle of angles; e.g., 360 values from 0 to 2*PI. Then the function overload simply converts the input angle to an array index and returns the value.
That only works for functions that can only accept a specific range of values (or whose inputs can be converted into a specific range of values). sqrt doesn't work, because you can take the square root of anything.
A HashMap lookup is more expensive than the sqrt function once you start using larger buckets. And you've still got the problem that you have to somehow store the age of each value so that you can throw out old values when you store a new one; so now you've got a more expensive remove operation whenever a new calculation is made.
You can't just let it accumulate forever. It would be a cache, but it wouldn't have a policy for discarding information, and a cache with a bad policy is just another name for a memory leak.
BC_Programming posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
He already has issues with build times. Separating the declarations into a separate header file improves build times because the header file is what gets compiled against; it is the linker that does the job of actually matching it up to the actual implementation.
In this vein, the point is that even though you technically have a choice, separating them into a .cpp and a .h is going to work out better almost every time. Not to mention that it makes it possible to link against already-compiled .obj files, which drastically improves build times, because the .cpp files that are already compiled can simply be linked against rather than recompiled from source to object files.
How is your file structure in your work project organized?
That's not related to his problem. We can safely assume that he is (smartly) using separate .cpp and .h files, since he has pointed out how dumb it is that this is required for any sort of usability in the compiler. The C++ compiler is slow because C++ is one of the most difficult languages to parse. For example:
a b(c);
Variable definition or function declaration? If "c" is a variable, then it defines a variable named b of type a, directly initialized with c. However, if c is a type, then it declares a function named b that takes a c and returns an a.
Can be fixed by always compiling as a debug build, then building it as release when you need to release the product.
This doesn't fix any of the problems mentioned. Debug builds are slow even if they are incremental with existing debug built .obj files. And performance is still going to be slow. When it comes to C++ you shouldn't be using a debug build unless you are specifically debugging anyway.
Well, considering the 2 main C++ compilers support it...
Microsoft's compiler uses an _asm{} block. gcc uses asm("assembly code");
Also, within the inlined assembly itself, Microsoft's compiler uses Intel syntax, whereas gcc uses AT&T syntax, which means that assembly you write inline for one compiler isn't going to work in the other even if you ifdef to change the block declaration. And that is ignoring gcc's "extended ASM" features entirely.
If you were developing on other architectures, I can see that as a problem.
Even with the two compilers you mentioned it's already a nightmare. "movl _x, %eax" in the gcc asm inliner has to be changed to "mov eax, [_x]" for VC.
Even ignoring that, it becomes a problem because modern systems support two architectures: x64 and x86. C# and Java, unless coded poorly, can be switched to use x64, x86, or the highest available with a compiler switch; it technically just controls the actual output of the JIT compiler.
With C++ you have to recompile everything to switch.
Guess what this does for the ASM? Well, thankfully, it's mostly backwards compatible...
However, ASM is usually used for squeezing out the best performance via hand-tuned, cycle-counted instructions. The fact is that changing the architecture, or even simply moving to a new CPU, can ruin those tunings: the 486-to-Pentium transition caused many assembly optimizations to destroy performance rather than improve it, because they caused the dual-pipelined Pentium CPU to discard entire prefetch queues on certain branch operations.
And again- it's non-standard. Arguing that it's De Facto and therefore OK seems to completely ignore the prior art and evidence about how bad that is- which is a lesson already learned by both C and C++.
BC_Programming posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
The focus on what language is good for writing games is about as useful as focusing on what kind of hammer is good for murdering pedophiles. The fact is that 99.9% of real-world hammer usage is not murdering pedophiles, and therefore focusing on desirable hammer traits for that purpose will not give you a hammer that does any good for the other tasks that use a hammer.
Yourself posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
Quote from Fred2000789
Why?
There are libraries and game engines. You won't have to touch low-level things.
Well C++ can run on more places than Java, and I'd say C# is pretty cross-platform too.
That's not what I was getting at.
C++ is not a good place to start because it's a terribly bloated language that has a lot of confusing features. At similar skill levels between both languages, it is almost always faster to develop something of equivalent functionality in Java than it is in C++.
I think perhaps one of those cumbersome things I encounter in C++ is just how files usually end up organized. At work one of the more annoying things I have to do in C++ is add a member variable because it requires modifying two separate files: the header and the source. If you want to avoid that you end up inlining everything into header files and if the project ends up being any reasonable size, you'll start getting irritated by build times, because just about any change results in a full rebuild.
Even build times on the order of 10s of seconds get frustrating when you're trying to debug problems where you're constantly making changes and then rebuilding and running the code again to see if the problem's been fixed.
Speaking of debugging, the support for C++ debugging is not terrific. I'm not sure how it works in Java, but debugging things line-by-line in C# is an absolute breeze compared to C++. If I want to debug something in C++, I have to do a debug build, which usually means a full rebuild that ends up taking significantly more time than a release build. The resulting code will then have horrendous performance which can be extremely frustrating if you're waiting for it to load resources in completely un-optimized code.
Which, by the way, is what gives C++ its performance: compiler optimizations. This is something that C# and Java don't really have to a significant degree. Those languages leave optimizations to the JIT, which can perform some optimizations that a compiler can't, because the JIT has access to runtime behavior where a compiler can only reason statically.
If you want to develop a game faster, then use C#. I'm telling you that Java isn't the right tool here.
Except there's only so much you can do on a virtual machine.
That makes no sense whatsoever.
I'd say C++. But we still know close to nothing about what you want your game to be.
C# would be my next choice.
For beginners there's not going to be a significant difference in effort between Java or C#. Most of the additional features that C# provides are something that a beginner probably wouldn't need to deal with to a significant degree anyway (and anyone asking what language to use is a beginner).
There is plenty of stuff you can do to optimize even in a virtual machine environment (which is mostly a non-issue). If you want good performance, pick good algorithms, don't fret over what language you're going to use. And don't under any circumstances break the two rules of optimization:
1. Don't do it.
2. Don't do it yet.
Optimization comes last and only when and where it's needed. Premature optimization is the tool of the devil.
Quote from Fred2000789
Every language can create a game. Whether the game will run at a desired rate is another matter entirely.
But,
Things Java can't do that C++ can:
Pointers
Memory Management
inline assembly (Not used by me, but some do)
More platforms
Easy access to binary data
Operator overloading
Inline assembly is not a feature of C++, it's a compiler extension and it ends up being different depending on which compiler you're using.
Technically, C# has something similar, since it provides you with the ability to emit IL at runtime (basically letting you write code that gets compiled at runtime). Similar things exist for Java.
I also don't know what you mean by "easy access to binary data". Reading and writing binary data is pretty similar between all languages since they typically abstract it away as streams. I guess about the only difference is that C++ can reinterpret_cast a pointer to an arbitrary object and then screw with the bytes. I wouldn't really advertise that as a feature because it's really just an example of a lack of type safety in C++.
Also, C++ doesn't technically feature any actual memory management features (perhaps with the exception of smart pointers nowadays). The programmer handles all memory management in C++. Java and C# handle memory management for you, since garbage collection is kind of a core feature of both languages.
CosmicSpore posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
Quote from TheFieldZy
Let me ask this: What do you do in C++ for game development that can't be done in Java?
In high-quality game development, C++ can squeeze out extra optimization where memory-managed languages like Java and C# can't.
For indie developers, the only reason you might want C++ over Java is if you didn't want to rely on the users installing (the correct version of) Java and instead wanted a native program that doesn't require additional software.
For beginners, I'd probably recommend C#, though.
Not only is it quicker to build programs from scratch, and is more 'user-friendly', but it also has the XNA framework which is simply beneficial for new game developers. It is much easier to use XNA in C# than it is to use SDL or SFML in C++.
An indie developer could develop a game in a much more reasonable amount of time using tools you can find in C#, than you could using C++. (Unless you already know C++ very well, but don't know any C#, when the time to create something would be subjective and be biased towards previous knowledge.)
And squeezing out optimization is not really an issue for indies.
There is nothing they could ever do to max out CPU or GPU resources unless they program badly. C++ would not help you there; if anything, it would just give you errors instead.
C++ is still a good language, though. It's good for a programmer to understand memory management even if they always use a memory-managed language. It's just not 'beginner friendly' as they say, for this reason.
Understanding memory management helps you get a grasp on pointers, references, and such... which are things still used in other languages like C#, even if you aren't using them directly anymore. They are still 'there'; the compiler is just doing a lot of it for you.
BKrenz posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
C++ is not where to start.
Go with C#.
Yourself posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
Name one reason why programming this game in Java would benefit the game in any way.
Less time dinking around with low level crap and boilerplate and more time dinking around with actual game design and development.
Quote from Fred2000789
I'm trying to tell you that Java isn't made for this type of stuff. You're only hurting yourself.
C++ isn't made for games, either. Actually, pretty much all general purpose programming languages aren't made for games.
I will say that if your goal is to actually accomplish something, then Java or C# would be a better choice than C++. The less time you have to spend messing around with low-level stuff, the better.
Personally, I prefer C#, just because I find it to be a more expressive language (I also think the whole type erasure thing for Java generics is kind of stupid, but that's going to be mostly a non-issue in this case).
BKrenz posted a message on "How to learn to program games?" (Posted in: Computer Science and Technology)
Quote from Fred2000789
EDIT: Sorry, I was thinking about another thread similar to this. Either way, do not learn Java. It isn't required for what you're doing and it's not the right tool for the job.
Could you explain why it isn't?
ALL the choices? Think for a moment exactly how many choices there might be. More than can ever be listed or have ever been listed.
Wow, wow, wow. You think you can just change ISPs? Slow down there. You'd think if that were an option, all the crappy ISPs would have no customers. Haven't you noticed your ISP is only available in a few areas? Why? Because all the ISPs got together, split up the map, and price-fixed. They all do the same thing at the same price. You might have another option, but it's too difficult to break the ongoing oligopoly for there to be any good option. So good luck with that.
Well, just calling it oxygen could refer to a single atom of oxygen, which is not the same as what's in the air, since atmospheric oxygen is a diatomic molecule. I could say oxygen is bonded into a molecule. Saying "oxygen gas" simply specifies that you mean O2.
Oh, you mean back in the 1920s-30s, after WWI? Yeah, nothing's changed over the past 80 years or so.
The best way to prove it is double blind tests. Simple as that. Again, I'm not saying whether there is a difference or not, but your brain tricks you, a lot. AKA, the McGurk Effect:
[embedded video: McGurk effect demonstration]
Yes, either way, there is expectational bias.
The problem with your test is that you knew what the framerate was. This is what induces the McGurk Effect. In an optimal testing environment, you wouldn't know what the framerate was locked to.
I've only found one double blind test on this matter. So far it seems most people can see a difference, about 86%. And about 88% could tell the difference between each monitor.
My point is that anecdotes don't prove anything.
For example, a group of test subjects were given two wines, one expensive, one cheap. The half of the group that were unaware of the price preferred the cheaper wine. The half that knew the prices preferred the more expensive wine.
Side by side, you are more likely to see a difference. I'm not saying it's impossible to see a difference, but I only trust hard science and double-blind tests for stuff like this.
Or is your brain just telling you that you can see a difference? Expectational bias, as it's called, can have you perceive a difference even if there is none. If I displayed two monitors in front of you and said the one on the left was 60 Hz and the one on the right was 120 Hz, you would see a difference, even if in reality both monitors were 60 Hz.
And that is why personal anecdotes are hardly evidence for "is it worth it".