Wouldn't the effort technically be for little new you?
- Retired Staff
Member for 9 years, 2 months, and 19 days
Last active Sat, Oct 21, 2017 03:43:47
- 1 Follower
- 5,097 Total Posts
- 517 Thanks
Sep 9, 2015 · BKrenz posted a message on Need help ASAP - Friend died and need to access his PC - Windows 10 · Posted in: Computer Science and Technology
Think you meant BC there. I never chimed into this conversation.
Sad situation for you there, Mhyles. Not much is going on here anymore anyway; it's kind of a crappy community these days. Hope things are going well.
Jul 28, 2015 · Posted in: Computer Science and Technology
That would not be an issue with the JDK. That would be an issue with you not understanding the language. Likely, you're attempting to put a void declaration on a class, a constructor, or in a method that has a return statement that actually returns something.
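To illustrate (a minimal Java sketch of those three mistakes; the class and member names are made up):

```java
// Hypothetical examples of the mistakes described above.
public class Example {

    // Pitfall 1: a constructor must NOT declare a return type.
    // "public void Example() { }" would still compile, but as an ordinary
    // method named Example, so "new Example()" would silently use the
    // default constructor instead. This is the correct form:
    public Example() { }

    // Pitfall 2: declaring void while returning a value is a compile
    // error ("incompatible types: unexpected return value").
    // Declare the real return type instead:
    public int answer() {
        return 42;
    }

    // Pitfall 3: "void" can never modify a class declaration at all;
    // "public void class Foo { }" is rejected by the parser outright.
}
```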
May 31, 2015 · BKrenz posted a message on 980ti Launched! Nearly identical to a Titan X for $650. Thoughts? · Posted in: Computer Science and Technology
I'm still not really impressed enough by any of these high-end cards to purchase one. I have my 4K monitor waiting to play 4K games, but the cards just aren't ready yet, it seems.
May 28, 2015 · Posted in: Building, Parts & Peripherals
Hey Swervish! Hope everything's well. Advice in this thread has been solid, and I'll throw my support to it.
Honestly, Minecraft needs to be overhauled performance-wise. I don't care what code it's in, but a game like this should not run so badly and inconsistently on medium to high-end PCs that have such immense amounts of processing power. I see no reason this game, at a 16-chunk render distance, should be getting anything under a constant 60 fps on a system like a GTX 750 and a Haswell Pentium or i3.
I feel like there's a lot going on that needs to be analyzed by professionals, which none of us are. I'm somewhat interested in graphics, so I'm probably in a better position than most others around here. I'm still not even at hobbyist level, but I can recognize the purpose of different algorithms and techniques. So, I'll give my half a cent here, until someone more qualified comes around to figure this out.
I feel like there's a fundamental part of the game that lends itself to being exceedingly taxing on graphics algorithms. It's a difficult problem to solve. Occlusion culling algorithms in this type of game are hard. People will complain if you screw up and don't render something that's supposed to be there. It's better from a development standpoint to over-render, especially in this extremely special case that is Minecraft (the community judgement is so radically different from most other games).
May 4, 2015 · BKrenz posted a message on I'm building an Internet idea that doesn't use expensive servers to host! · Posted in: Computer Science and Technology
*yawns* What I see here is an argument using giant walls of text.
What I believe it is is some way to open software securely without infecting anything else. That's why they invented Virtual Machines.
Disclaimer: I have no idea what you guys are talking about with this SSH and GPL stuff. But what I can say is that this system is slow, cumbersome, and useless.
SSH is a protocol for accessing a remote system (generally via command line) from a local terminal. Google it.
GPL is an open source copyright license. Google it.
Apr 26, 2015 · Posted in: Off topic, testing and misc. chat
While it's great that some people are trying to garner more of a community feeling around here, I can't see myself playing in a generic modded map much anymore.
My style has always been a bit more grandiose, with my challenge being integrating form and function. If I want to play on my own, I tend to head over to the Vanilla server, where work is going rather slowly on my Kingdom, and another community project.
I haven't played modded Minecraft in quite some time. Probably haven't enjoyed it since Agrarian Skies. Hazali and I played the hell out of that map, until it got to the grindy bits (seriously, at current production rates, it would've taken 3 weeks or so to make the 42+ million cobble we needed!). There are quite a few issues that plague the modding community, and the experience derived from their work. I feel the major one is that so many people accomplish the same things in different ways, and these new mods get included in major packs. The stock packs pushed out on the FTB launcher are just generic compilations of mods, with no specific purpose. Bloat is awful, and performance suffers. There are quite a few great mods out there, and a few that I've never played with.
The problem with a multiplayer Minecraft experience is those who don't start from the beginning often either lag very far behind, or get to skip a lot of the grindy bits. This isn't much of a problem in Vanilla, where within 6 hours you can be pretty much complete, probably even have a full beacon if you're lucky. However, on modded servers, the problem is a lot more severe.
I also miss the community aspect of it, but that even gets problematic on modded servers. With such a wide range of system specs among staff, those of us at the high end get carried away at times. Those on the low end have extreme lag, and end up quitting because they can hardly play. Why would I play on a server if 95% of the time I'm on my own, and 15 people each have their own auto farms?
Is there a solution? Probably not. The best idea would be megateams, where modders come together to create a specific pack (think something similar to Tech World or Magic World), or a pack suited to a map (think the original FTB maps or Agrarian Skies), and actually spend time refining the code base and eliminating bugs and bloat.
TL;DR: I would like a focused goal, a community-centric build process, and not 15 mods accomplishing the same thing. That would bring me back to modded Minecraft. Of course, I don't expect my thoughts to be echoed by anyone else; I simply thought I would share.
Also, just thought I'd ask whether you were planning on regular map resets, or something?
EDIT: I always love playing the old FTB insanity maps, and it'd be cool to have someone to play with. I can host.
Oct 1, 2015 · BC_Programming posted a message on The elephant in the room: Piracy, and educating the user who might not know the risks. · Posted in: Computer Science and Technology
How much money you have is not irrelevant at all, at least not to me. It's the same reason I would tell a friend it was bad if they stole from a mom-and-pop store but wouldn't care about them stealing from Wal-Mart (not to say Minecraft is the Wal-Mart of games or something, but solely in terms of profit, it's up there). We're talking morality here, not legality, so you have to accept it's subjective and not everyone shares your opinion/viewpoint. Or mine.
I didn't know that, but I can guarantee you that all the Microsoft stockholders are vastly more wealthy than I am.
The morality of a situation should not be discriminatory in this fashion. This approach is merely attempting to dehumanize the victim(s), e.g., "it's just a faceless megacorp!". Remember when "It's just black people!" was basically the moral argument used to justify the ownership of human beings as property? Discriminatory exceptions applied to concrete moral situations are made by people who wish to discriminate in order to prevent cognitive dissonance. These wildly atrophied moral perspectives are effectively born of entitlement. It's the same reasoning that would apply if a person stole a car and then justified it because the person they stole it from owned 5 cars and didn't need it, while the thief wanted a car and couldn't afford one, so they deserved it more. You cannot justify a moral choice based on the impact you think it will have weighed against your own entitlement. This sort of logic would be capable of justifying murder because you think the person you killed was crazy and was sure to kill more people than you.
If your moral framework is inconsistent with society's, you are a sociopath.
Jun 29, 2015 · Posted in: Computer Science and Technology
Windows is POSIX compatible. It has a complete POSIX subsystem, in fact. It isn't POSIX certified, and certain operations, such as fork(), simply have no equivalent in the Windows world, so the fork() function gets mapped to an exec() function.
The problem is less about limited POSIX compliance and more that most software is not POSIX compatible to begin with; furthermore, not being POSIX compatible is practically a requirement for making software that isn't absolute garbage. POSIX was more a set of standards to try to get some consistency among the UNIX systems of the time. It was created post facto by the United States Government to describe a set of loosely competing UNIX systems and ease procurement requirements. It assumes a UNIX system, and therefore UNIX methodologies, which do not port to other operating systems, making the introduction of full POSIX compliance to other operating systems pretty much impossible.
[Windows] is an absolute nightmare for doing half of what I do. So much software isn't even built for it, or can't be built for it.
Well, of course. If what you do is steeped in *nixisms, such as the configure/make/install cycle, that is going to be annoying or a nightmare. It is not impossible, though; you could use tools like Cygwin, but that is merely providing you with the *nixisms you are familiar with, like a familiar teddy bear in a place you are completely unfamiliar with.
Any piece of software could be ported to Windows or vice versa (excepting, of course, those that are specific to features of each OS). The problem is more that software typically doesn't jump the divide.
None of the software I work on works on Linux. It would be 100% possible to get it working on Linux; however, it hasn't been determined to be worth investing the time to make it compatible. It would also mean losing a lot of features or making a lot of features Windows-centric, particularly since a lot of capabilities are created using advanced P/Invoke. As a specific example, I added the capability to have a Windows Forms ListView sorted generically. Some of it would work via Mono, which provides a WinForms-compatible interface to the desktop environment. However, I also use various Windows API calls in order to add advanced features, and there is no Linux equivalent that could be used from Mono, so such features would need to be conditionally compiled, or detect that the system is a Linux system. This would also require eliminating or changing features that interact with the Windows Service Manager or the Task Scheduler to instead use Unix services and crontab. Possible, but not at all trivial, and a development investment that is hard to justify given that no customers have ever expressed interest in running our software on Linux. (We barely convinced some of them to move to Windows from a VAX/VMS-derived system.)
It's crazy I'd need to install a bunch of third-party binaries or download something as massive as Visual Studio to build things (that require tons of porting to even work, and often aren't tested or are buggy) to do what I think of as basic.
Given that Windows IS POSIX compliant, it seems the core of your complaint here is "it works different". Well, I mean, fair enough. That's basically what any argument I would have against development on Linux would boil down to anyway.
Take your first sentence. You are saying it is rather crazy that you need to do extra work to compile source code into binaries.
Your average user doesn't build software regardless of their system; Windows programs are distributed as binaries and installed. On *nix, you might be able to find binaries, but generally the approach is that you download a source tarball and run the makefile. (And in my experience, you spend about 30 minutes trying to figure out dependencies, though I recently got lucky with the HP LaserJet printer support for CUPS (CUPS itself being a whole other ball of wax...) and only needed to add a few.)
On Windows you can still use GCC. If you are referring to being able to build Windows-specific software on Windows systems, you could also use the .NET Framework SDK, which includes all the build tools.
One of the paradigm differences between Windows development and *nix development is that on *nix, everything is CLI-based, and it is expected that your software has a makefile, which is really just a glorified shell script more or less (or a shell script that invokes a makefile, or something). Software often doesn't have installers; instead it is assumed you know what to do with a source tarball and what to do when errors occur. (This is arguably unfortunate for your average user who just wants to try the beta version of something.)
Software requiring porting to work is hardly Windows-specific. Most Linux software needs makefile changes to support OSX, and occasionally even changes to the dependencies. That a project is not tested or verified or designed for Windows is not a fault of Windows; it is a fault of the project. But don't try to complain to the project, because they will typically just tell you to do it yourself and file a pull request. The developer mindset is much different between the two systems.
On Windows, developers work with source code and compile programs; users don't. They run and use software. They don't need compilers or Software Development Kits.
On *nix systems, the entire system is more or less designed around the customization that is possible by writing C programs, and writing your own programs. Most systems come with a lot of pre-installed developer tools. These tools are themselves useless to your typical user; it is the fact that the installation process tends to compile software from source when installing packages that lends itself to these tools being omnipresent.
Most development of OSX software occurs in XCode, because realistically XCode is the only way to create native OSX applications. In fact, from what I can tell, you cannot simply run software built for Linux on OSX. If you are lucky you might be able to compile and run it, but that seems to be the exception rather than the norm. Given the shared core designs and tools, though, it is probably easier to get a software product designed only for Linux running on OSX.
On Linux, I find development of any semi-serious software product to be a nightmare. Oftentimes even using a product can be a time-wasting endeavour.
Some time ago, I upgraded a system I had from Linux Mint 10 to Linux Mint 12. On Linux Mint 10, I was using a piece of software called "Desktop Drapes" in order to have a wallpaper slideshow feature on the system. It worked... but it didn't work on Linux Mint 12. Searching about it, I found this was because the desktop environment had changed. I searched the issue further and found that there was a plugin for Drapes to add support for Gnome 3. So I installed it. Drapes started and claimed to be running, but the desktop wallpaper never changed. There were no logs to read. Searching further, I found there was a patch available for the plugin that fixed a problem with the plugin. Unfortunately, after trying it, I then discovered that the patch itself didn't work, and I needed another patch to apply to the first patch before I applied the patched patch to the plugin.
At that point, I decided not to use Desktop Drapes, and ended up spending a weekend constructing a Python script to do the job manually by calling the gsettings program.
Does that same script work on later versions of Linux Mint? I don't think so. I'd be surprised if either Cinnamon or Mate had gsettings and it worked similarly.
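For the curious, that kind of script boils down to shelling out to gsettings. A rough sketch of the idea (in Java rather than Python, assuming GNOME 3's schema and key, which other desktop environments do not share; the image path is hypothetical):

```java
import java.io.IOException;

public class WallpaperSetter {
    // Sets the desktop wallpaper by shelling out to the gsettings program,
    // the same approach the Python script took. The schema and key below
    // are GNOME 3's; other desktop environments use different ones.
    static void setWallpaper(String absolutePath) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "gsettings", "set",
                "org.gnome.desktop.background", "picture-uri",
                "file://" + absolutePath)
            .inheritIO()
            .start();
        if (p.waitFor() != 0) {
            System.err.println("gsettings failed; probably not a GNOME session.");
        }
    }

    public static void main(String[] args) throws Exception {
        setWallpaper("/usr/share/backgrounds/example.jpg"); // hypothetical path
    }
}
```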
Realistically, the greatest obstacle to writing software on Linux is simply how much the other software running on the system, providing capabilities you need, can differ. With Windows, you know a lot of what is running, so you don't need to consider "well, what if they have a different registry handler" or "what if their desktop environment doesn't support per-pixel alpha-blending of device-independent bitmaps?" That sort of thing. For a Linux application to be robust and run on more distributions, you either need to restrict what your program does (keep it in the CLI, primarily) or you need to support the different cores that each distribution may have for the added features you want.
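To make that "restrict or adapt" choice concrete, here's a hedged sketch of runtime detection. XDG_CURRENT_DESKTOP is the conventional variable most modern desktop environments set; treating its absence as "stay CLI-only" is my assumption, not a rule:

```java
public class CapabilityCheck {
    public static void main(String[] args) {
        // Most desktop environments advertise themselves via this variable.
        String de = System.getenv("XDG_CURRENT_DESKTOP");
        if (de == null || de.isEmpty()) {
            System.out.println("No desktop environment detected; keeping the feature CLI-only.");
        } else if (de.toUpperCase().contains("GNOME") || de.toUpperCase().contains("CINNAMON")) {
            System.out.println("GNOME-family desktop: gsettings-based features can be enabled.");
        } else {
            System.out.println("Unrecognized desktop '" + de + "'; disabling the extra features.");
        }
    }
}
```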
Really, they just work differently. Linux is more for the type of person who wants to customize their system a lot, and doesn't mind doing a lot of research, writing a lot of scripts, submitting pull requests, or running patches to get a feature. And I won't deny it's pretty cool to have your own specially configured system after doing so (which is why I was writing a few scripts at the time). But it's not better or worse. For me, I prefer the Windows way, because when I'm not working I prefer not to screw around with other people's source code too much, unless that is part of my goal; nor do I find that Open Source has any specific value to me as a user, since even though I'm a programmer, I don't have the time to go and audit every single piece of software I run. Different strengths for both, resulting in people with different preferences.
As to the topic? I think it has a LOT more to do with the development of the software you run, and far less to do with the OS. It's sort of the same deal as porting a console game to PC and having it run terribly; nobody says "ah, that is because the PC is a terrible platform"; they rightly know that it is the fault of those who constructed the port of the game. The same applies between different operating systems. Trying to use *nixisms in a Windows environment during development can cause performance problems. Assuming certain things work a certain way because they worked that way on the OS you are familiar with can lead to difficult-to-trace performance issues or bugs. That sort of thing. If a piece of software runs worse on one operating system than another, it is generally not the fault of the operating system, but the fault of those responsible for the software. There are some exceptions, I suppose, but such disparate performance between the same software on two systems is likely a result of disparate effort put into the software on those two systems, not of one of the operating systems being more or less 'efficient'.
Jun 25, 2015 · Posted in: Computer Science and Technology
For an analogy, think of a processor as a room with 4 ovens in it (quad core/oven heh).
You can bake a cake in one oven, or bake 4 different cakes in different ovens in the same amount of time, but cannot bake just one cake in all 4 at the same time to finish it in 1/4 the time. Having more ovens lets you do more things at once, but not one thing 4 times as fast.
Probably not 100% accurate, but works for the purpose of explanation.
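The same analogy in code, as a toy Java sketch (the one-second "bake" is made up): four independent tasks on a four-thread pool finish in roughly the time of one, but a single task can't be split across the pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Ovens {
    // One cake is a single sequential task; it cannot use more than one "oven".
    static void bakeCake(int id) {
        try {
            Thread.sleep(1000); // pretend baking takes 1 second
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Cake " + id + " done");
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService ovens = Executors.newFixedThreadPool(4); // four "ovens"
        long start = System.nanoTime();
        for (int i = 1; i <= 4; i++) {
            final int id = i;
            ovens.submit(() -> bakeCake(id)); // four cakes bake concurrently
        }
        ovens.shutdown();
        ovens.awaitTermination(1, TimeUnit.MINUTES);
        // Prints roughly 1000 ms, not 4000: more ovens, not a faster oven.
        System.out.println("All cakes in ~" + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```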
Feb 11, 2015 · Posted in: Computer Science and Technology
Yes, it's much more noticeable at 4K (due to the higher VRAM requirement, obviously). After a bit of reading it appears that the issue doesn't affect the 980 though.
As for why the issue occurs: the reference Maxwell architecture divides its 4 GB of VRAM into 8 512 MB blocks, each controlled by one L2/MC unit. In order to maximize speed, every piece of data written to the VRAM is split into 8 blocks that are sent to the 8 L2/MC units to be written simultaneously. In the GTX 970, the 7th L2/MC unit is actually responsible for 2 of the VRAM blocks, which means that it takes twice as long as the other units to write, leaving them idle 50% of the time.
To fix this problem, NVIDIA divided the GeForce GTX 970's memory into two pools: a 3.5 GB pool and a 0.5 GB one. The first pool is controlled by 7 L2/MC units working in parallel, and hence operates at full speed, while the second pool is controlled by a single L2/MC unit, which means its access speed is one seventh that of the larger pool. Performance therefore drops considerably when a program/game needs more VRAM than the first 3.5 GB pool can provide.
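Back-of-the-envelope numbers, using the commonly cited 7 Gbps GDDR5 rate and a 32-bit bus per L2/MC unit (my figures, not from the post):

```latex
\begin{align*}
\text{per L2/MC unit:} &\quad 32\,\text{bit} \times 7\,\text{Gbps} \div 8 = 28\ \text{GB/s}\\
\text{3.5 GB pool (7 units):} &\quad 7 \times 28\ \text{GB/s} = 196\ \text{GB/s}\\
\text{0.5 GB pool (1 unit):} &\quad 1 \times 28\ \text{GB/s} = 28\ \text{GB/s} = \tfrac{1}{7} \times 196\ \text{GB/s}
\end{align*}
```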
Edit: Also, wait for Pirate Islands (Radeon R300 series). Rumoured for a Q1 2015 release.
Oct 5, 2014 · 13thMurder posted a message on Buy High Quality Jewelry,necklaces,gemstone,engagement ring,make woman friend very bright · Posted in: Off topic, testing and misc. chat
This will give me a woman friend? Wow, one jewelery, please!
Feb 20, 2014 · Posted in: Computer Science and Technology
The focus on what language is good for writing games is about as useful as focusing on what kind of hammer is good for murdering pedophiles. The fact is that 99.9% of real-world hammer usage is not for murdering pedophiles, and therefore focusing on desirable hammer traits for that purpose will not give you a hammer that does any good for the other tasks that use a hammer.
Feb 19, 2014 · Posted in: Computer Science and Technology
Quote from Fred2000789
There are libraries and game engines. You won't have to touch low-level things.
Well C++ can run on more places than Java, and I'd say C# is pretty cross-platform too.
That's not what I was getting at.
C++ is not a good place to start because it's a terribly bloated language that has a lot of confusing features. At similar skill levels between both languages, it is almost always faster to develop something of equivalent functionality in Java than it is in C++.
I think perhaps one of the more cumbersome things I encounter in C++ is just how files usually end up organized. At work, one of the more annoying things I have to do in C++ is add a member variable, because it requires modifying two separate files: the header and the source. If you want to avoid that, you end up inlining everything into header files, and if the project is any reasonable size, you'll start getting irritated by build times, because just about any change results in a full rebuild.
Even build times on the order of 10s of seconds get frustrating when you're trying to debug problems where you're constantly making changes and then rebuilding and running the code again to see if the problem's been fixed.
Speaking of debugging, the support for C++ debugging is not terrific. I'm not sure how it works in Java, but debugging things line-by-line in C# is an absolute breeze compared to C++. If I want to debug something in C++, I have to do a debug build, which usually means a full rebuild that ends up taking significantly more time than a release build. The resulting code will then have horrendous performance which can be extremely frustrating if you're waiting for it to load resources in completely un-optimized code.
Which, by the way, is what gives C++ its performance: compiler optimizations. This is something that C# and Java don't really have to a significant degree. Those languages leave optimizations to the JIT, which can perform some optimizations that a compiler can't, because the JIT has access to runtime behavior where a compiler can only reason statically.
If you want to develop a game faster, then use C#. I'm telling you that Java isn't the right tool here.
Except there's only so much you can do on a virtual machine.
That makes no sense whatsoever.
I'd say C++. But we still know close to nothing about what you want your game to be.
C# would be my next choice.
For beginners there's not going to be a significant difference in effort between Java or C#. Most of the additional features that C# provides are something that a beginner probably wouldn't need to deal with to a significant degree anyway (and anyone asking what language to use is a beginner).
There is plenty of stuff you can do to optimize even in a virtual machine environment (which is mostly a non-issue). If you want good performance, pick good algorithms, don't fret over what language you're going to use. And don't under any circumstances break the two rules of optimization:
1. Don't do it.
2. Don't do it yet.
Optimization comes last and only when and where it's needed. Premature optimization is the tool of the devil.
Quote from Fred2000789
Every language can create a game. Whether the game will run at a desired rate is another matter entirely.
Things Java can't do that C++ can:
inline assembly (Not used by me, but some do)
Easy access to binary data
Inline assembly is not a feature of C++; it's a compiler extension, and it ends up being different depending on which compiler you're using.
Technically, C# has something similar, since it provides you with the ability to emit IL at runtime (basically letting you write code that is compiled at runtime). Similar things exist for Java.
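For instance, the JDK ships a compiler you can drive at runtime through javax.tools; a rough sketch (far coarser-grained than C#'s IL emit, and the Hello class here is made up):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.nio.file.Files;
import java.nio.file.Path;

public class RuntimeCompile {
    public static void main(String[] args) throws Exception {
        // Write a tiny class to disk, then compile it while the program runs.
        Path dir = Files.createTempDirectory("rtc");
        Path src = dir.resolve("Hello.java");
        Files.writeString(src, "public class Hello { public static String greet() { return \"hi\"; } }");

        // Null on a JRE-only install; present on any JDK.
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        int status = javac.run(null, null, null, src.toString());
        System.out.println(status == 0 ? "Compiled " + src : "Compilation failed");

        // Actually loading Hello and calling greet() would take a
        // URLClassLoader pointed at dir, plus reflection, from here.
    }
}
```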
I also don't know what you mean by "easy access to binary data". Reading and writing binary data is pretty similar between all languages since they typically abstract it away as streams. I guess about the only difference is that C++ can reinterpret_cast a pointer to an arbitrary object and then screw with the bytes. I wouldn't really advertise that as a feature because it's really just an example of a lack of type safety in C++.
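To the streams point, a minimal Java example of reading and writing binary data (the file name is arbitrary):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class BinaryIO {
    public static void main(String[] args) throws IOException {
        // Write a few primitives as raw bytes through a stream abstraction.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("blob.bin"))) {
            out.writeInt(0xCAFEBABE);   // 4 bytes, big-endian
            out.writeDouble(3.14159);   // 8 bytes, IEEE 754
            out.writeUTF("header");     // length-prefixed modified UTF-8
        }

        // Read the values back in the same order they were written.
        try (DataInputStream in = new DataInputStream(new FileInputStream("blob.bin"))) {
            System.out.printf("%08X %f %s%n", in.readInt(), in.readDouble(), in.readUTF());
        }
    }
}
```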
Also, C++ doesn't technically have any automatic memory management (with the possible exception of smart pointers nowadays); the programmer handles all memory management in C++. Java and C# handle memory management for you, since garbage collection is kind of a core feature of both languages.