Code verification process for jME3

I have some questions regarding the code verification process for jME3.



I am interested in the testing process and how it ties into your nightly builds (and also stable releases). I see that you provide manual tests for various features (here). I also found a scarce set of JUnit tests (only two, found here). I might have missed something; if that is the case, please point me in the right direction.



From this I would deduce that you only perform manual testing of the code, I assume by the committers, or perhaps through a peer review process? How does this affect the nightly builds? Would an error lead you to roll back to the last working build for the nightly, or do you release “non-functioning” nightlies?



I am aware of some of the difficulties with testing code that is, in many cases, of a graphical nature; however, that should not be a justification for such a scarce set.
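To illustrate the point: even in a graphics engine, the pure math layer needs no renderer to test. The sketch below is hypothetical, not actual jME3 test code; `Vec3` is a minimal stand-in for an engine vector class such as jME3's `Vector3f`, checked with plain assertions rather than JUnit so it runs standalone.

```java
// Hypothetical sketch: unit-testing pure math code without any rendering
// context. Vec3 stands in for an engine class like com.jme3.math.Vector3f.
final class Vec3 {
    final float x, y, z;
    Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    Vec3 add(Vec3 o) { return new Vec3(x + o.x, y + o.y, z + o.z); }
    float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    Vec3 cross(Vec3 o) {
        return new Vec3(y * o.z - z * o.y,
                        z * o.x - x * o.z,
                        x * o.y - y * o.x);
    }
}

public class Vec3Test {
    public static void main(String[] args) {
        Vec3 x = new Vec3(1, 0, 0);
        Vec3 y = new Vec3(0, 1, 0);
        Vec3 z = x.cross(y);
        // Right-handed cross product: x × y = z
        assertEq(z.x, 0); assertEq(z.y, 0); assertEq(z.z, 1);
        // The cross product is orthogonal to both operands
        assertEq(z.dot(x), 0); assertEq(z.dot(y), 0);
        System.out.println("all vector tests passed");
    }

    static void assertEq(float actual, float expected) {
        if (Math.abs(actual - expected) > 1e-6f)
            throw new AssertionError(actual + " != " + expected);
    }
}
```

Tests like this are cheap to write and maintain precisely because they avoid the GPU and windowing system entirely; the hard-to-test part of the engine is a separate question.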



I am eager to hear your responses to this.

P.S.: If this has been answered before, I'm sorry; please direct me to the appropriate thread if that is the case.

I see I managed to mess the links up, and I can’t find an edit button :confused:



Anyway, here are the links:

Making a JUnit test case sometimes takes longer than developing the feature itself. And as you mentioned, it wouldn’t guarantee anything, because we mostly have graphical features to test.



I know this makes us look like lazy asses, but we have the best test strategy IMO: users.



Any issue introduced in SVN is reported here within 24 hours of being committed, and usually fixed not long after that.

It’s like cosmic balance…

A lot of users play the game by using the nightly builds.

If you don’t want to be annoyed with bugs, use the stable version.



We do this in our free time, and we’d rather spend it building useful new features than writing JUnit tests for the old ones.



And the fact is… we don’t need automated testing, we have monkeys to test.

It’s more “Apple-like”… We let people commit once we have an idea of how they code, then scrutinize their code after they’ve committed and decide whether we want it or not. Before we move code to stable, most testing indeed happens through manual testing and feedback from larger projects and the general “nightly user base”. There’s no automated testing apart from the actual build process of the SDK, which in itself is a pretty good test of the engine and its compatibility :wink:

Internally (core team), there’s seldom any unclear situation about what happens how and when, and who does it. If somebody commits, everybody is basically all over it instantly anyway.

It’s difficult to unit test a graphics engine, or UI in general. Very difficult and very, very time-consuming. In my past experience with unit or integration testing of UIs, you spend more time maintaining the tests than the code, and often the tests don’t end up working that well.



Stable releases, such as Beta and Alpha 4, are tested by the core members, but also by the community through their use of the nightly builds. The nightly builds are the latest, and hopefully greatest; once we feel the code is stable enough for a project, it gets released. If you are using the nightly build, you are using code that is in development. jME is still in beta, so it is still changing as more people adopt it and suggest feature changes.



As for verifying user contributions, the devs look at the code and integrate it if it is up to snuff, works, and is well designed.

I think users are a really good test strategy. I have to do the unit and integration tests at work using Tessy. It’s boring as hell, and you don’t find the really interesting bugs, the ones that have to do with different interrupt levels and the like. But the TÜV (certification authority) wants us to generate 10 kg of test reports. I guess they take the reports, put them on a weighing machine, and if it says “> 10 kg”, you get your certificate. :smiley:


Thank you for the quick answers! I was interested in your input on this, and I feel you’ve given valid answers. As survivor pointed out, it’s rather boring, but it’s what the suits want. It does have its uses, however, and some parts are easier to test than others. But I’m not going to argue about how and what you do; I was merely seeking answers. It seems you have a good symbiotic relationship between contributors and testers.



So again, thank you for your responses, it was good to get your take on this.