Some thoughts

I was wandering around Game Development Stack Exchange and stumbled upon this question: Is Java viable for serious game development?



There is a lot of old-school talk, like slowness, lack of game-dev-focused libraries, doable but with bad graphics, no consoles, yada yada yada… but one thing was different that I haven’t seen a lot, even on the jME forum. I am quoting the answer here (notice the highlighted part):



http://i39.photobucket.com/albums/e179/iamcreasy/226358.png



So Java has the following issues when it comes to game dev:

#1 : Accessing GPU buffers

#2 : Non-deterministic GC

#3 : What non-indie games don’t do



What’s the take on these perspectives? Do they still persist, have they been fixed, or do we have to find a workaround?

That was probably written by a DirectX programmer… where you have to manage every little byte yourself.



Most of those threads are a waste of time because they come immediately from the assumption that the only games are “cutting edge triple-A games”… which is essentially not the case. In reality, the most profitable games are Flash games written for Facebook.



If you want to boil those arguments down it always comes back to one thing: “We’ve always used C++ so we’ll continue using C++.” Everything else is just window dressing.


The indie game argument is hilarious, though. Translation: “The only people who make successful games in Java are the ones who care about their time and sanity, and have to keep costs realistic.” Wait, what? Sounds like a win all around to me. The corollary would be “We like C++ because it makes our job harder and we have to hire huge teams of specialists just to get anything done.”



I don’t see any other way of interpreting that.

a) Most AAA games use virtual machines too (UnrealScript, hello?)

b) GC can be a drag, but optimizing it ultimately costs much less time than hunting C++ memory bugs; also see a)

c) Getting data from the GPU and back is always an issue, and doing it with Java isn’t much slower. Compare Ogre3D and jME: virtually no difference.



Basically UDK does exactly what we do: provide the raw hardware power via a unified and easy-to-program environment that binds it all together. Unity has even worse performance issues due to various intermediate stages, even though it’s “native”.

Slow down guys :slight_smile: I was just asking if he is correct from the technical standpoint.


@normen said:
a) Most AAA games use virtual machines too (UnrealScript, hello?)
b) GC can be a drag, but optimizing it ultimately costs much less time than hunting C++ memory bugs; also see a)
c) Getting data from the GPU and back is always an issue, and doing it with Java isn't much slower. Compare Ogre3D and jME: virtually no difference.

Basically UDK does exactly what we do: provide the raw hardware power via a unified and easy-to-program environment that binds it all together. Unity has even worse performance issues due to various intermediate stages, even though it's "native".


a) If I am not mistaken, UnrealScript doesn't get compiled; it's interpreted. Correct me if I am wrong.
b) How do you optimize a non-deterministic GC? Sorry, I don't know anything about it, just asking if you can provide some insight.
c) He wasn't saying 'slow'. What he was saying about GPU buffers is: "Java does not provide a language feature which will aid you in ensuring they are correctly locked and unlocked or disposed of, which C++ does." So, what's the correct technical standpoint? Can Java do this or not?

The premise is broken.



You can count on the GC: don’t allocate more memory than you need.



But it’s beside the point, non-deterministic GC does not mean you can’t write games. It only means that you can’t write the next triple-A first person action Doom remake using the latest and greatest tech.
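To make that concrete, here is a minimal sketch of what “don’t allocate more than you need” means inside a game loop. It assumes jME3’s com.jme3.math.Vector3f and its *Local mutator methods; the ChaseCamera class itself is just an illustration, not engine code:

```java
import com.jme3.math.Vector3f;

public class ChaseCamera {

    // Reused every frame instead of allocating a fresh Vector3f per update,
    // so the update loop produces no garbage for the collector to chase.
    private final Vector3f tmpOffset = new Vector3f();

    public void update(Vector3f cameraPos, Vector3f targetPos, float tpf) {
        // set()/subtractLocal()/multLocal()/addLocal() mutate existing
        // instances rather than creating new ones.
        tmpOffset.set(targetPos).subtractLocal(cameraPos).multLocal(tpf * 2f);
        cameraPos.addLocal(tmpOffset);
    }
}
```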



Regarding: "c) He wasn’t saying ‘slow’. What he was saying about GPU buffers are, “Java does not provide a language feature which will aid you in ensuring they are correctly locked and unlocked or disposed of, which C++ does.” So, what’s the correct technical standpoint? Does java can or can’t do this? "



OpenGL is handling that for us. When you are used to programming in the DirectX model you have to worry about every nut and bolt and how far they are turned. When you see some other language that doesn’t have a wrench, you say “Impossible!” never realizing that nut and bolt turning are simply not required.
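For context, this is roughly what filling a vertex buffer looks like on the jME side, following the standard custom-mesh pattern. There is no lock/unlock or dispose call anywhere in user code; the renderer and driver take care of the transfer:

```java
import com.jme3.math.Vector3f;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.util.BufferUtils;

public class TriangleMeshExample {

    public static Mesh createTriangle() {
        Vector3f[] vertices = {
            new Vector3f(0, 0, 0), new Vector3f(1, 0, 0), new Vector3f(0, 1, 0)
        };

        Mesh mesh = new Mesh();
        // Hand the data to jME; uploading to the GPU happens behind the scenes.
        mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(vertices));
        mesh.updateBound();
        return mesh;
    }
}
```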



And when nut-and-bolt turning is required, we are back to the broken premise that games = “cutting edge triple-A game”, which is entirely and provably a broken premise.

And, by the way, C++ provides none of that locking stuff either. It’s part of the API being used. Unless he’s referring to co-opting stack-based allocation and destruction to clean up locks. That’s the only language feature I can think of that helps with locking. And it’s at best only an idiomatic difference.
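For what it’s worth, the stack-based cleanup idiom he may be thinking of has a rough Java 7 counterpart in try-with-resources. The GpuBuffer class below is purely hypothetical (not a real jME or LWJGL type); it only shows that deterministic unlock-on-scope-exit is expressible in the language:

```java
// Hypothetical wrapper used for illustration only.
class GpuBuffer implements AutoCloseable {

    static GpuBuffer lock() {
        // map/lock the underlying buffer here
        return new GpuBuffer();
    }

    @Override
    public void close() {
        // unlock/unmap; runs even if an exception is thrown in the try block
    }
}

class UploadExample {

    void upload() {
        // The buffer is released when the try block exits, success or failure,
        // much like a C++ scope guard calling the destructor.
        try (GpuBuffer buffer = GpuBuffer.lock()) {
            // write vertex data while the buffer is locked
        }
    }
}
```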

@pspeed said:
You can count on the GC: don't allocate more memory than you need.

But it's beside the point, non-deterministic GC does not mean you can't write games. It only means that you can't write the next triple-A first person action Doom remake using the latest and greatest tech.


What is the issue when working with a non-deterministic GC?

@pspeed said:
OpenGL is handling that for us. When you are used to programming in the DirectX model you have to worry about every nut and bolt and how far they are turned. When you see some other language that doesn’t have a wrench, you say “Impossible!” never realizing that nut and bolt turning are simply not required.


So, his whole argument is based on a lack of understanding of what's happening on the other side. I get it now.

@iamcreasy said:
What is the issue while working with non-deterministic GC?


Because the kinds of people who play those games are the kind who will take it back to the store if it pauses for even 1 second while they are running around and fragging people. The real time requirements for a multiplayer twitch game are pretty severe... though really I think GC is the least of their worries. To be competitive they have to constantly one-up the next guy. So they pour $100 million budgets into remaking the same games over and over with different/better/faster graphics and slightly different gameplay.

Then there is every other game on the planet where these things don't matter that much and "time to market" and "limited budget" are much more critical. "But we've always done it this way..."

I haven't done the research but I would not be surprised to learn that most successful games are not written with C++. This list would have to include all web/Flash games, phone apps, etc.. I suspect C++ is not such a big percentage in that light.

Anyway, over the next 4-5 years, what we will see is more and more Java indie games getting closer and closer to the quality of triple A games... at least as far as the user is concerned. Then the only difference will be the huge budgets for voice/motion actors, artists, etc.. Meanwhile, indie devs will produce more games in less time and do the really interesting game mechanics and ideas stuff.... that the triple A studios then use for their next big budget thing.

What he said, plus the VM is also exchangeable, remember. If you’re really after it you probably still pay less for a realtime VM (yes, there are such things: no pauses at all at the expense of some more memory, used mainly for high-frequency machinery control) than for a game SDK.

@pspeed said:
Because the kinds of people who play those games are the kind who will take it back to the store if it pauses for even 1 second while they are running around and fragging people. The real time requirements for a multiplayer twitch game are pretty severe... though really I think GC is the least of their worries.


I was looking for some technical insight into why a non-deterministic GC would be bad for a game. I get the point that the real-time requirements for a multiplayer twitch game are pretty severe, but what does this have to do with non-determinism?

@iamcreasy said:
I was looking for some technical insight into why a non-deterministic GC would be bad for a game. I get the point that the real-time requirements for a multiplayer twitch game are pretty severe, but what does this have to do with non-determinism?


Really? You can't predict when the GC will run so you can't predict when it will destroy frame rate. What part of that isn't clear?

Simple example: if you are doing an FPS game with matches at most 15 minutes long, then ideally GC would not run at all during those 15 minutes, except when you died and were waiting to respawn or the game had finished and you were back at the menu.



In those two cases a GC pause isn’t noticed by the player. On the other hand though a GC pause in the middle of an intense firefight that caused the player to die would very definitely be noticed.



But we have no real control over when the GC executes.
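One common mitigation, sketched below rather than taken from any particular engine, is to request a collection at moments the player can’t notice, like the respawn timer or the end-of-match screen. System.gc() is only a hint to the JVM, not a guarantee (which is exactly the non-determinism being discussed), but on a heap that isn’t being flooded with garbage it usually keeps the collector out of the firefights:

```java
public class MatchStateHandler {

    public void onPlayerDied() {
        showRespawnScreen();
        // The player is staring at a respawn countdown anyway, so a pause
        // here goes unnoticed. This is only a suggestion to the JVM.
        System.gc();
    }

    public void onMatchFinished() {
        showScoreboard();
        System.gc();   // same idea: collect while nothing time-critical runs
    }

    private void showRespawnScreen() { /* hypothetical UI call */ }

    private void showScoreboard() { /* hypothetical UI call */ }
}
```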



This is mitigated a lot by modern JVMs and well written programs though, I know I’ve not seen GC pauses causing serious problems either in games or non-games programming I’ve done. (Including financial market analysis software which is absolutely required to never miss an update and went through a lot of testing and verification to make sure it didn’t).



The real reason most triple a games are written in C is because most triple a games are written in C. Which means the tools and developers are all experienced in C…which means that the next project also gets written in C…and the loop repeats.



Indie games though break that cycle and hence tend to be written using whatever technology the developers are most familiar with/prefer. I used C & C++ for years, but I know I wouldn’t want to leave Java to go back to them unless I had a really good reason!

@zarch said:
This is mitigated a lot by modern JVMs and well written programs though, I know I've not seen GC pauses causing serious problems either in games or non-games programming I've done. (Including financial market analysis software which is absolutely required to never miss an update and went through a lot of testing and verification to make sure it didn't).


Agreed. After I fixed the JME update loop not to try to free all dereferenced GL objects at once, I no longer notice GC pauses at all in Mythruna. The generational/progressive garbage collector is doing a fine job.
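The fix described there amounts to spreading the cleanup over many frames instead of doing it in one burst. A rough sketch of the pattern (the class and method names here are illustrative, not the actual jME internals):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class GLObjectReaper {

    private static final int MAX_DELETES_PER_FRAME = 16;   // tunable budget
    private final Queue<Integer> pendingDeletes = new ArrayDeque<Integer>();

    /** Called when a GL object (texture, VBO, ...) is no longer referenced. */
    public void scheduleDelete(int glObjectId) {
        pendingDeletes.add(glObjectId);
    }

    /** Called once per frame from the update loop. */
    public void update() {
        // Free at most a handful of objects per frame so a burst of
        // dereferenced objects doesn't turn into one long stall.
        for (int i = 0; i < MAX_DELETES_PER_FRAME && !pendingDeletes.isEmpty(); i++) {
            deleteNativeObject(pendingDeletes.poll());
        }
    }

    private void deleteNativeObject(int id) {
        // glDeleteTextures / glDeleteBuffers / ... for the given id
    }
}
```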

@zarch said:
The real reason most triple a games are written in C is because most triple a games are written in C. Which means the tools and developers are all experienced in C...which means that the next project also gets written in C...and the loop repeats.


Exactly.

@zarch said:
Indie games though break that cycle and hence tend to be written using whatever technology the developers are most familiar with/prefer. I used C & C++ for years, but I know I wouldn't want to leave Java to go back to them unless I had a really good reason!


I developed C professionally for many years and did contracting work in C++. I've been doing professional Java development since 1998. No serious C/C++ application's development schedule was shorter than a year and no serious Java application's schedule has been longer than 6 months.

In one case as a contractor we went through an effort to fix memory errors in one application. This involved running Purify against it and tracking down all of the issues. It took us (a team of 3) 2-3 months or so and that was just tracking down and fixing the issues that could be identified by a tool like Purify.

I'm not really joking when I say that you can develop a Java application in the same amount of time that it takes you to track down and fix all of the memory issues in a similarly scoped C/C++ application. That has been precisely my experience in 22 years as a professional software developer. I've relegated C and C++ to the same place I used to put assembly language... a legacy relic useful for when you need hand-rolled performance on some specific piece of hardware.

The guy who wrote that comment needs to be garbage collected.

@pspeed said:

I developed C professionally for many years and did contracting work in C++. I've been doing professional Java development since 1998. No serious C/C++ application's development schedule was shorter than a year and no serious Java application's schedule has been longer than 6 months.

In one case as a contractor we went through an effort to fix memory errors in one application. This involved running Purify against it and tracking down all of the issues. It took us (a team of 3) 2-3 months or so and that was just tracking down and fixing the issues that could be identified by a tool like Purify.

I'm not really joking when I say that you can develop a Java application in the same amount of time that it takes you to track down and fix all of the memory issues in a similarly scoped C/C++ application. That has been precisely my experience in 22 years as a professional software developer. I've relegated C and C++ to the same place I used to put assembly language... a legacy relic useful for when you need hand-rolled performance on some specific piece of hardware.


Sorry pspeed, still getting used to this android tablet and it decided to double post your quote when I hadn't even started typing my reply yet!

Your experience is very similar to mine, as I also came from years of C & C++: I can write software in Java in a third of the time it takes to write an equivalent in C, and the Java application has equivalent performance and is more reliable.

My most recent Java contract did take 9 months (3 months, extended twice...then back again part time for 5 weeks)...but that was mostly because the end users had very specific requirements but not quite so specific specifications, so we spent a lot of time going around the tweaking cycle at each stage of the project. I'm just glad everyone uses agile these days, as waterfall would have doubled the timescales again!
@androlo said:
The guy who wrote that comment needs to be garbage collected.


HAhahahaahahah LOL so hard over here! I thought the same thing!

I think the only thing Java lacks is respect.
@shirkit said:
I think the only thing Java lacks is respect.


I like that. I have now added it to my personal phrase lexicon.
@zarch said:
My most recent java contract did take 9 months (3 months, extended twice...then back again part time for 5 weeks)...but that was mostly because the end users had very specific requirements but not quite so specific specifications so we spent a lot of time going around the tweaking cycle at each stage of the project. I'm just glad everyone uses agile these days as waterfall would have doubled the timescales again!


Yeah, I have Java projects that span years, too... but the initial version always rolls out in less than six months. In fact, usually much shorter when we do scrum-style development. Something about the thought of 4-week sprints on a C++ project makes me hyperventilate a little.

Well, there are a few things that should be kept in mind:


  1. With GCJ you can compile Java ahead of time to a native executable.
  2. There are some JVMs that use a special garbage collector (http://www.azulsystems.com/products/zing/whatisit, non-pausing).
  3. It is also possible to build a JVM (you just need to modify the standard garbage collector and recompile OpenJDK, for example) that only collects on request.
  4. With JDK 7 (HotSpot runtime) you can set a maximum allowed pause time that it tries to achieve on a best-effort basis (see the flag sketch below). So usually, unless you are collecting something like 400 MB of 60-byte objects at once, you are fine.
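For point 4, the pause-time target mentioned is a standard HotSpot option; something along these lines, with the numbers as placeholders only:

```
# G1 with a soft pause-time goal; the JVM treats it as best effort, not a guarantee
java -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -jar mygame.jar
```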