I am working on an interface library that needs to be independent of the game code, so it can be used in multiple projects. I am trying to figure out how to capture only the input that is meant for the interface.
Some examples:
When I click an element in the interface, the element responds and processing should stop there. If there is something clickable in the scene underneath that element, it should not respond.
When I don't click on any element of the interface, the input code of the scene should take over and determine if and how to respond to whatever the user is doing with the mouse.
When the interface is in text editing mode, every key press should be processed by the interface code. But when it is not in text editing mode, some keys are shortcuts to be processed by the interface (tabbing between elements), while others need to be handled by the game (Escape to return to the main menu).
With JS and the DOM, every element can have its own event handler and you just cancel the propagation of events to the elements beneath the one that decided to respond. Things work differently in JME3, but that is the kind of pattern I had in mind.
Or in pseudocode:
capture input event
IF the event hits the interface
    let the interface respond
    block the event from reaching the game
ELSE
    hand the event over to the original game input handlers
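For instance, a minimal Java sketch of that dispatch; it uses jME's InputEvent for the setConsumed() call, but InterfaceRoot, GameInput and InputDispatcher are made-up names, just to show the flow I'm after:

import com.jme3.input.event.InputEvent;

// Hypothetical contracts, purely to illustrate the pattern.
interface InterfaceRoot { boolean respondTo(InputEvent event); } // true if the interface handled it
interface GameInput { void handle(InputEvent event); }

public class InputDispatcher {
    private final InterfaceRoot interfaceRoot;
    private final GameInput gameInput;

    public InputDispatcher(InterfaceRoot interfaceRoot, GameInput gameInput) {
        this.interfaceRoot = interfaceRoot;
        this.gameInput = gameInput;
    }

    public void dispatch(InputEvent event) {
        if (interfaceRoot.respondTo(event)) {
            // The interface handled it; block the event from reaching the game.
            event.setConsumed();
        } else {
            // Nothing in the interface cared; hand it to the game's input handlers.
            gameInput.handle(event);
        }
    }
}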
In the end, my main objective is that the input handling code of the interface should be self-contained and transparent to the game.
Any ideas for a code architecture would be welcome.
You could take a look at how Lemur does it… because it does all of those things.
Also, the Lemur picking stuff is usable without using any other parts of Lemur because it's designed in a modular way… so in theory, you could just reuse the picking stuff. (Bonus is that you'll get multitouch for free on platforms that support it… for example, it's really cool to be able to drag two different sliders with two different fingers.)
…but if nothing else maybe it can give you ideas.
Edit: note that mouse handling can get really complicated really fast. For example, you’ll also need to come up with some mechanism to make button presses sticky so that if the player drags too quickly they don’t lose the slider button or whatever they are dragging.
Like everything else in Lemur, it looks very sophisticated. Which is a euphemism for ‘too complicated for me’. I don’t think I am looking for the flexibility Lemur achieves.
I am looking at RawInputListeners now. As I understand it, they are dealt with before the mapped input listeners and you can stop an event from propagating by calling setConsumed().
There are two things that are not clear to me: can I add multiple RawInputListeners to the InputManager? And if so, how is the order in which these listeners are executed determined?
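Something along these lines is what I have in mind; isOverInterface() is just a placeholder for whatever hit test the interface library ends up doing:

import com.jme3.input.RawInputListener;
import com.jme3.input.event.JoyAxisEvent;
import com.jme3.input.event.JoyButtonEvent;
import com.jme3.input.event.KeyInputEvent;
import com.jme3.input.event.MouseButtonEvent;
import com.jme3.input.event.MouseMotionEvent;
import com.jme3.input.event.TouchEvent;

public class InterfaceRawListener implements RawInputListener {

    @Override
    public void onMouseButtonEvent(MouseButtonEvent evt) {
        // If the click lands on an interface element, let the interface respond
        // and consume the event so the mapped game listeners never see it.
        if (isOverInterface(evt.getX(), evt.getY())) {
            // ...interface responds here...
            evt.setConsumed();
        }
    }

    private boolean isOverInterface(float x, float y) {
        return false; // placeholder: e.g. pick against the GUI node
    }

    // Remaining callbacks left empty for brevity.
    @Override public void onMouseMotionEvent(MouseMotionEvent evt) {}
    @Override public void onKeyEvent(KeyInputEvent evt) {}
    @Override public void onJoyAxisEvent(JoyAxisEvent evt) {}
    @Override public void onJoyButtonEvent(JoyButtonEvent evt) {}
    @Override public void onTouchEvent(TouchEvent evt) {}
    @Override public void beginInput() {}
    @Override public void endInput() {}
}

// Registered with: inputManager.addRawInputListener(new InterfaceRawListener());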
I do something like this for my game, and I managed to do it without having to go down to the RawInputListener level. This approach should also be fairly self-contained (the game knows about the InterfaceManager, but the InterfaceManager doesn't need to know anything about the game).
I have a class called InterfaceManager, and whenever an interface is being displayed it gets added to a list in that class.
Then I have a method in that class called isInterfaceBlockingInput(), which iterates through the list of currently displayed interfaces to determine whether they should be blocking the input or not.
So if that method returns true, my normal game input handling ignores the input, and the InputManager instead sends the input to the currently displayed interface that the mouse is hovering over or that has focus.
So the InterfaceManager never has to know about the game input, and it could theoretically be used independently as a separate library (the game input code just has to have a reference to the InterfaceManager in order to check whether input should be blocked in the game).
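In rough outline it looks something like this; apart from InterfaceManager and isInterfaceBlockingInput() the names are simplified/hypothetical:

import java.util.ArrayList;
import java.util.List;

// Hypothetical contract each displayed interface implements.
interface DisplayedInterface {
    boolean isBlockingInput(); // e.g. modal, focused, or the cursor is over it
}

public class InterfaceManager {

    private final List<DisplayedInterface> displayed = new ArrayList<>();

    public void show(DisplayedInterface ui) { displayed.add(ui); }
    public void hide(DisplayedInterface ui) { displayed.remove(ui); }

    // The game's input code calls this before acting on input.
    public boolean isInterfaceBlockingInput() {
        for (DisplayedInterface ui : displayed) {
            if (ui.isBlockingInput()) {
                return true;
            }
        }
        return false;
    }
}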
You can add multiples. They are executed in the order that you add them… which can be kind of a pain for decoupled systems all using that feature.
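So if you go the raw listener route, add the interface's listener before the game's so it gets the first look at every event (the listener names here are made up):

// Order of addRawInputListener() calls is the order the listeners run in:
inputManager.addRawInputListener(interfaceRawListener); // runs first, can setConsumed()
inputManager.addRawInputListener(gameRawListener);      // runs second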
If you are coming from trying to decipher the code… then maybe.
But if you just want to use it then it’s pretty straight-forward:
So if you don’t already have Lemur’s globals initialized (because you aren’t using the rest of Lemur) then:
public void simpleInitApp() {
    // MouseAppState drives Lemur's picking; stateManager is SimpleApplication's AppStateManager
    stateManager.attach(new MouseAppState(this));
}
After that the regular MouseEventControl and CursorEventControl stuff works. Already auto-wired to the GUI viewport and the scene.
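For example, reacting to clicks on some spatial then looks roughly like this (someSpatial is whatever you want to be clickable; MouseEventControl and DefaultMouseListener come from com.simsilica.lemur.event):

// e.g. right after attaching MouseAppState:
MouseEventControl.addListenersToSpatial(someSpatial,
        new DefaultMouseListener() {
            @Override
            protected void click(MouseButtonEvent event, Spatial target, Spatial capture) {
                // respond to the click on this spatial
            }
        });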
The thing is, these problems always start out simple but then a hundred little things make it more complicated. Like the aforementioned “target capture” to avoid angering users who dragged too fast. Another gotcha is if you need to switch between “mouse visible” and “mouse invisible”… for simple apps this is straight-forward but for any decoupled parts of code, interaction gets tricky. (Something Lemur solves with enable request counting.)
Lemur is my third generic picking system based on JME and maybe the 12th one I've written in my life. It is not perfect but in my opinion it's the cleanest way to do picking in JME. (Its lingering issues are not with the approach itself but with some details in how it handles the aforementioned target capture in the face of consumed events, etc… It's a complicated problem that Lemur only 90% solves.)
All of that being said, you will learn a ton if you intend to continue carving your own wheel.