So, after months of research with my university on autism (I also have a parent and a brother with autism), choosing which game engine to use (this took a while; I had previous knowledge in Java and some Java game-dev from hacking RuneScape, hehe), learning how to model in Blender (which I still suck at, so if anyone would like to offer assistance I’d appreciate it), and learning how to use JMonkey (thank-you Norman, your tutorials fired me ahead to new speeds), I finally had my first workable demo to show the university. Many of the people there were very impressed!
So, the project: you play as someone with a neurological condition (at the moment autism, but it will include ADHD and others) and get to experience some of the obstacles they face, through their eyes.
The project goes on for another year :). After that I’m heavily debating making it open source. At the very least, parts of it will be, because I want people to be able to create their own scenarios they find difficult, drop them in, and let other people play them (this part is already mostly written too).
So, some pictures (more pictures and explanations can be seen at this site:
I want to give a big thank-you to all of the JMonkey team, specifically Norman (hope I spelt it right this time), for your help and support. You guys must be really busy and yet you still take time out to help with tutorials and answering questions. Also, thank-you T0neG0d for your sick GUI and your help. I hope over the next few months I can begin giving back to the community :).
@avpeacock Wow, really cool!
My Ms. works with children with autism. I will forward it on to her. And if you would like feedback, she would be able to get some coworkers/parents to try it out.
@Sploreg said:
@avpeacock Wow, really cool!
My Ms. works with children with autism. I will forward it on to her. And if you would like feedback, she would be able to get some coworkers/parents to try it out.
I would love feedback! If you could do that, that’d be awesome and I’d be so grateful! I need to write this up, though, and get the report in for next week =(. After that, though, I’ll be providing an online/downloadable demo with an online survey. I want to make this as good as possible!
Ohh, thank-you!! Yes, there’ll be a playable demo once I’ve got the applet stuff working (for some reason it isn’t) and fixed a couple of bugs. It should be available in a week’s time.
@avpeacock said:
Ohh, thank-you!! Yes, there’ll be a playable demo once I’ve got the applet stuff working (for some reason it isn’t) and fixed a couple of bugs. It should be available in a week’s time.
Hi guys. So, a year on, and the project has undergone a complete re-write and redesign, mostly because I became much better at modelling, and also to better handle point lights, scene object counts, etc.
I’ll be releasing it hopefully in a few days.
However, here is the site so far, with some pictures! :) I’ll be adding more stuff, like a big thank-you to you all.
Hi. This looks awesome! The research for my thesis involves an indoor simulator as a cornerstone. Any chance we could get in touch for details/the possibility of building on top of your work? Cheers!
@szanto.karoly said:
Hi. This looks awesome! The research for my thesis involves an indoor simulator as a cornerstone. Any chance we could get in touch for details/the possibility of building on top of your work? Cheers!
Sure. I don’t think I’ll be open-sourcing it for a month or two though, but I’m more than happy to help and give information and assistance. We can discuss it anyway =). It would be nice for things to be built upon rather than started from scratch. I’ve actually programmed this in a very modular manner so people can drop scenes and scenarios in and out (there’s a sketch of the idea below), so it could work very well as a platform for you. It does need better documentation etc., though. I don’t have certain actions like picking up objects at the moment.
What exactly is your thesis on? Why an indoor simulator?
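For a concrete sense of what that “drop scenes and scenarios in and out” design could look like, here is a minimal hypothetical sketch; the `Scenario` interface and every method on it are illustrative guesses, not the project’s actual API:

```java
import com.jme3.app.Application;
import com.jme3.scene.Node;

// Hypothetical plug-in contract for user-authored scenarios.
// All names here are illustrative, not the project's real API.
public interface Scenario {
    // Human-readable name, e.g. for a scenario picker menu.
    String getName();

    // Build and return the scene graph for this scenario.
    Node buildScene(Application app);

    // Per-frame hook so the scenario can drive its own events.
    void update(float tpf);

    // Tear down resources when the player leaves the scenario.
    void cleanup();
}
```

With a contract like this, a user-made scenario could ship as a single jar that the game discovers and loads at runtime.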
I’m developing a body-centric, context-aware environment simulator. It is meant to categorise objects around the agent into various sets based on proximity, the visual spectrum, and touch (picking things up / putting them down). That is it for now, as my time is drying up (2 months left) :). In essence, it supports a theoretical work my supervisor is carrying out.
So, given the limited time, it would be really useful for me to have an indoor scene (a home, populated with everyday objects) on which I can write my categorisation algorithms. Besides the scene, the whole first-person avatar logic is also needed.
I’m interested in building upon the API offered by jME as it covers some of the stuff out of the box: proximity and the visual spectrum (camera orientation). So, I’m not keen on investing time in modelling software; that can be the target of future work, for a follow-up thesis.
If you think you can help boost the progress of my work, please do let me know.
@szanto.karoly said:
I’m developing a body-centric, context-aware environment simulator. It is meant to categorise objects around the agent into various sets based on proximity, the visual spectrum, and touch (picking things up / putting them down). That is it for now, as my time is drying up (2 months left) :). In essence, it supports a theoretical work my supervisor is carrying out.
So, given the limited time, it would be really useful for me to have an indoor scene (a home, populated with everyday objects) on which I can write my categorisation algorithms. Besides the scene, the whole first-person avatar logic is also needed.
I’m interested in building upon the API offered by jME as it covers some of the stuff out of the box: proximity and the visual spectrum (camera orientation). So, I’m not keen on investing time in modelling software; that can be the target of future work, for a follow-up thesis.
If you think you can help boost the progress of my work, please do let me know.
So effectively you just need a house model? BTW, Google SketchUp was pretty good for finding some stuff. For the prototype (the images you see in the post) the house was all one model, so you could walk freely between rooms. I did still need some minimal modelling skills, though; CG Cookie, which has video courses/tutorials, was awesome. I learnt a lot of modelling in literally 1-2 days, which made the process of using jMonkey and models much, much easier. It was extremely difficult to find stuff already done, at least when I was doing it; there are a few better things about now.
If it helps, most of the work I’ve done was in the last 2 months, including better Blender learning: 2k lines of additional code, and lots tidied up too.
The prototype (see the pictures) entails a house which is all one model but really is shit. The second version, which is on autism-simulator.com, has separate scenes but is much better.
The problem is, I don’t think I’m able to give you the latter model until my dissertation is complete and handed in (April). I may be able to give you the former before hand-in, but I’m not sure right this moment. What is your time plan over the 2-month period? You may be able to skip the visuals to start with by simply getting a bunch of household objects from Blendswap and showing them in an empty scene with some terrain (there’s a rough sketch of this below); walk around the terrain and do all the categorisation until a more suitable alternative is found (if this is all you needed). That would get you started and learning quickly, I think.
Sorry, that’s a lot of waffle. I’ve not slept in a while.
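A rough sketch of that quick-start, assuming the Blendswap models have already been converted to .j3o (the model path, positions, and class name are placeholders):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.light.DirectionalLight;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Spatial;
import com.jme3.scene.shape.Box;

public class QuickStartScene extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        // Stand-in "terrain": a large, thin box instead of a real terrain mesh.
        Geometry floor = new Geometry("Floor", new Box(50f, 0.1f, 50f));
        Material mat = new Material(assetManager,
                "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Gray);
        floor.setMaterial(mat);
        rootNode.attachChild(floor);

        // Placeholder path: a household model converted to .j3o beforehand.
        Spatial chair = assetManager.loadModel("Models/chair.j3o");
        chair.setLocalTranslation(new Vector3f(2f, 0.1f, -3f));
        rootNode.attachChild(chair);

        // A simple light so lit materials on the loaded models show up.
        DirectionalLight sun = new DirectionalLight();
        sun.setDirection(new Vector3f(-1f, -2f, -1f).normalizeLocal());
        rootNode.addLight(sun);
    }

    public static void main(String[] args) {
        new QuickStartScene().start();
    }
}
```

The default flyCam gives free first-person-style navigation out of the box, which is enough to walk around and exercise the categorisation code.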
So basically my main work, and the target of evaluation, will be the categorisation algorithms, as you’ve rightly spotted. Therefore, I’ve already set out in the way you’ve described: I’m creating a scene with a floor, walls, and various boxes around. I’ll be adding collision and first-person support. I’ll make all of this programmatically.
Once done, I’ll write my algorithms on top of it. I hope to achieve this by the end of the week.
If everything works well, I’ll try to import some ready-made Blender scenes and test my algorithms on top of those.
If you consider open-sourcing your scene, it would be helpful for my evaluation. It would prove my algorithms work with other models as well! And, in essence, it would make for a strong evaluation: a model built for a totally different purpose!
@szanto.karoly said:
So basically my main work, and the target of evaluation, will be the categorisation algorithms, as you’ve rightly spotted. Therefore, I’ve already set out in the way you’ve described: I’m creating a scene with a floor, walls, and various boxes around. I’ll be adding collision and first-person support. I’ll make all of this programmatically.
Once done, I’ll write my algorithms on top of it. I hope to achieve this by the end of the week.
If everything works well, I’ll try to import some ready-made Blender scenes and test my algorithms on top of those.
If you consider open-sourcing your scene, it would be helpful for my evaluation. It would prove my algorithms work with other models as well! And, in essence, it would make for a strong evaluation: a model built for a totally different purpose!
So yeah, I’d be excited to cite your work.
I’m happy to open source everything I have. The issue isn’t “if”, it’s “when”. When is your deadline? If I can’t open source what I have, I may be able to help with building you another scene.
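For what it’s worth, that floor/walls/boxes scene with collision and first-person movement can be put together programmatically along these lines. This is a minimal sketch using jME3’s bullet physics (it needs the jme3-bullet/jbullet library on the classpath); the class name, sizes, and colors are all illustrative:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.bullet.BulletAppState;
import com.jme3.bullet.collision.shapes.CapsuleCollisionShape;
import com.jme3.bullet.control.CharacterControl;
import com.jme3.bullet.control.RigidBodyControl;
import com.jme3.input.KeyInput;
import com.jme3.input.controls.ActionListener;
import com.jme3.input.controls.KeyTrigger;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

public class SimScene extends SimpleApplication implements ActionListener {

    private BulletAppState bullet;
    private CharacterControl player;
    private boolean left, right, forward, back;

    @Override
    public void simpleInitApp() {
        bullet = new BulletAppState();
        stateManager.attach(bullet);

        // Floor, one wall, and a loose box -- all static physics objects.
        addStaticBox("Floor", new Vector3f(20f, 0.2f, 20f), new Vector3f(0f, -0.2f, 0f), ColorRGBA.Gray);
        addStaticBox("Wall",  new Vector3f(20f, 2.5f, 0.2f), new Vector3f(0f, 2.5f, -10f), ColorRGBA.Brown);
        addStaticBox("Box",   new Vector3f(0.5f, 0.5f, 0.5f), new Vector3f(3f, 0.5f, -5f), ColorRGBA.Green);

        // First-person "body": a capsule the walls can actually block.
        player = new CharacterControl(new CapsuleCollisionShape(0.5f, 1.8f), 0.3f);
        player.setPhysicsLocation(new Vector3f(0f, 2f, 0f));
        bullet.getPhysicsSpace().add(player);

        inputManager.addMapping("Left",    new KeyTrigger(KeyInput.KEY_A));
        inputManager.addMapping("Right",   new KeyTrigger(KeyInput.KEY_D));
        inputManager.addMapping("Forward", new KeyTrigger(KeyInput.KEY_W));
        inputManager.addMapping("Back",    new KeyTrigger(KeyInput.KEY_S));
        inputManager.addListener(this, "Left", "Right", "Forward", "Back");
    }

    private void addStaticBox(String name, Vector3f halfExtents, Vector3f pos, ColorRGBA color) {
        Geometry geom = new Geometry(name, new Box(halfExtents.x, halfExtents.y, halfExtents.z));
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", color);
        geom.setMaterial(mat);
        geom.setLocalTranslation(pos);
        geom.addControl(new RigidBodyControl(0f)); // mass 0 => static
        rootNode.attachChild(geom);
        bullet.getPhysicsSpace().add(geom);
    }

    @Override
    public void onAction(String name, boolean pressed, float tpf) {
        if (name.equals("Left"))    left = pressed;
        if (name.equals("Right"))   right = pressed;
        if (name.equals("Forward")) forward = pressed;
        if (name.equals("Back"))    back = pressed;
    }

    @Override
    public void simpleUpdate(float tpf) {
        // Walk in the direction the (mouse-look) camera faces.
        Vector3f dir = new Vector3f();
        if (forward) dir.addLocal(cam.getDirection());
        if (back)    dir.addLocal(cam.getDirection().negate());
        if (left)    dir.addLocal(cam.getLeft());
        if (right)   dir.addLocal(cam.getLeft().negate());
        dir.setY(0).normalizeLocal().multLocal(0.2f);
        player.setWalkDirection(dir);

        // Glue the camera to the physics body at roughly eye height.
        cam.setLocation(player.getPhysicsLocation().add(0f, 0.8f, 0f));
    }

    public static void main(String[] args) {
        new SimScene().start();
    }
}
```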
Hi. I’m trying to identify the objects that are currently visible to the camera. I have two objects: a white cube and a green cube. The green cube is further from the camera on the Z axis than the white cube. I am trying to identify the visibility of objects with Spatial.checkCulling(cam).
In the screenshot above you can see both objects are visible. Hence both are detected as visible. Works great!
In this second image my character is very close to the white cube, so it occludes the green cube. In this case the check does not work: again, both objects are detected as visible!
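For reference, the frustum-only check being used there looks roughly like this when called from the update loop, after the scene graph has been updated so world bounds are current:

```java
import java.util.ArrayList;
import java.util.List;

import com.jme3.renderer.Camera;
import com.jme3.scene.Spatial;

public final class FrustumVisibility {

    // Collects every spatial the frustum test reports as visible.
    // checkCulling() only tests the bounding volume against the view
    // frustum; an occluded object that is still inside the frustum
    // is reported as visible -- which is the behavior described above.
    public static List<Spatial> visibleIn(Camera cam, List<Spatial> candidates) {
        List<Spatial> visible = new ArrayList<>();
        for (Spatial s : candidates) {
            if (s.checkCulling(cam)) {
                visible.add(s);
            }
        }
        return visible;
    }
}
```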
jME uses frustum checking: it just determines whether an object might be visible based on the viewport plus the bounding volumes of objects. (Everything else is only usable for a few use cases.)
There is absolutely no occlusion calculation; you need to do this in user code if it is necessary.
Depending on the object count, and necessity, you might be able to do a workaround with a secondary hidden viewport, where you render each object in a separate color and then scan the image for those colors, basically letting the z-buffer determine what is visible.
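A sketch of that workaround, under some assumptions: the caller builds a clone of the scene where every geometry gets a unique flat color (see `assignIdColor`), that clone is updated manually each frame, the read-back happens after the frame has rendered (e.g. from a `SceneProcessor`’s `postFrame`), and the channel order of the read-back is verified first, since it may be BGRA rather than RGBA depending on the renderer:

```java
import java.nio.ByteBuffer;

import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.Renderer;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.util.BufferUtils;

public class ColorIdVisibility {

    // Low resolution is fine for "does this color appear on screen at all?".
    private static final int SIZE = 128;

    private final FrameBuffer fb = new FrameBuffer(SIZE, SIZE, 1);
    private final ByteBuffer pixels = BufferUtils.createByteBuffer(SIZE * SIZE * 4);

    public ColorIdVisibility(RenderManager renderManager, Camera mainCam, Node colorCodedScene) {
        fb.setColorBuffer(Image.Format.RGBA8);
        fb.setDepthBuffer(Image.Format.Depth);

        // Hidden pre-view that renders the flat-colored clone of the scene.
        // Note: a scene attached here (rather than under the root node) must
        // be updated manually each frame (updateLogicalState/GeometricState).
        Camera offCam = mainCam.clone();
        offCam.resize(SIZE, SIZE, true);
        ViewPort vp = renderManager.createPreView("colorIdView", offCam);
        vp.setClearFlags(true, true, true);
        vp.setOutputFrameBuffer(fb);
        vp.attachScene(colorCodedScene);
    }

    // Give each geometry in the clone a unique flat color so the z-buffer
    // decides which object "wins" each pixel.
    public static void assignIdColor(AssetManager assetManager, Geometry geom, ColorRGBA idColor) {
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", idColor);
        geom.setMaterial(mat);
    }

    // Call after the frame has rendered (e.g. a SceneProcessor's postFrame).
    // Caution: reading the framebuffer back stalls the GPU, and the channel
    // order may be BGRA rather than RGBA -- verify before trusting matches.
    public boolean isColorVisible(Renderer renderer, ColorRGBA id) {
        pixels.clear();
        renderer.readFrameBuffer(fb, pixels);
        byte r = (byte) (id.r * 255), g = (byte) (id.g * 255), b = (byte) (id.b * 255);
        for (int i = 0; i < SIZE * SIZE * 4; i += 4) {
            if (pixels.get(i) == r && pixels.get(i + 1) == g && pixels.get(i + 2) == b) {
                return true;
            }
        }
        return false;
    }
}
```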
@szanto.karoly said:
Hi. I’m trying to identify the objects that are currently visible to the camera. I have two objects: a white cube and a green cube. The green cube is further from the camera on the Z axis than the white cube. I am trying to identify the visibility of objects with Spatial.checkCulling(cam).
In the screenshot above you can see both objects are visible. Hence both are detected as visible. Works great!
In this second image my character is very close to the white cube, so it occludes the green cube. In this case the check does not work: again, both objects are detected as visible!
Any thoughts on how to make this work correctly?
Thanks!
I’m out atm, but I know this question has been asked a few times before, because I had the same issue and had to search for it :). I’d generally suggest making new topics about problems you’re having in the appropriate section; you’ll get a faster response! But do tag me in posts if you think it’s something specific I can help with.
The way most games do this is by casting a ray from some points on the outside of the object to the camera and checking whether it collides with any other objects on the way.
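A minimal single-ray version of that idea in jME3, casting from the camera toward the target’s centre (real implementations usually cast several rays, e.g. toward bounding-box corners, so a partially covered object still counts as visible):

```java
import com.jme3.collision.CollisionResults;
import com.jme3.math.Ray;
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;

public final class OcclusionCheck {

    // True if the first thing a ray from the camera toward the target's
    // centre hits is the target itself, i.e. nothing is in the way.
    public static boolean isUnoccluded(Camera cam, Geometry target, Node scene) {
        Vector3f origin = cam.getLocation();
        Vector3f toTarget = target.getWorldTranslation().subtract(origin).normalizeLocal();
        Ray ray = new Ray(origin, toTarget);

        CollisionResults results = new CollisionResults();
        scene.collideWith(ray, results);

        return results.size() > 0
                && results.getClosestCollision().getGeometry() == target;
    }
}
```

Combined with the frustum check from earlier (frustum first, then the ray test only for objects that pass), this gives a cheap two-stage visibility test.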