Using raw OpenGL?

Is it possible to use raw OpenGL in jMonkeyEngine? For example, if I want to simply draw lines with OpenGL, how would I do that correctly within the engine?

Basically I am interested in being able to make generated graphics…

You can create a post-processing filter or render pass, load the LWJGL classes you want to access directly, write to a texture's byte buffer, and blend the results using a shader… There are probably other ways of doing this, but this works.
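For reference, a minimal skeleton of the filter half of that approach. Blend.j3md (and its OverlayTex parameter) is a hypothetical material definition you would write yourself, with a fragment shader that blends your generated texture over the scene:

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;

// A custom post-processing Filter that blends an extra texture over
// the rendered scene. "MatDefs/Blend.j3md" is a placeholder material
// definition, not something that ships with the engine.
public class BlendFilter extends Filter {
    private Material material;

    @Override
    protected void initFilter(AssetManager assets, RenderManager rm,
                              ViewPort vp, int w, int h) {
        material = new Material(assets, "MatDefs/Blend.j3md");
        // e.g. material.setTexture("OverlayTex", myGeneratedTexture);
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}
```

You would then register it on the viewport with a FilterPostProcessor, e.g. fpp.addFilter(new BlendFilter()).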

Well, where is the sense in using a graphics/game engine if you want raw access? Usually you can do anything through the jME interfaces that you can do manually, and doing things manually carries a high risk of breaking jME, since the engine naturally does not recognize manual state changes.

I just hit a similar issue where I wanted to perspective-transform an image and add it as a sub-texture to the current texture. jME doesn't provide a method for manipulating images this way without attaching them to a material, then to a geometry, and rendering them through a camera for the perspective. Unless I missed it (which I hope I did, tbh)…

Why is it you want to do that?

  1. jME lets you create custom meshes easily, in one line of code.
  2. jME lets you create custom textures in one line of code: just convert a BufferedImage into a texture (a sketch of both is below).
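Roughly what those two points amount to in practice (a sketch; the coordinates are placeholders and img stands for any BufferedImage you already have):

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.texture.Texture2D;
import com.jme3.texture.plugins.AWTLoader;
import com.jme3.util.BufferUtils;
import java.awt.image.BufferedImage;

// 1. A custom mesh: a buffer of line segments, no raw GL needed.
Mesh mesh = new Mesh();
mesh.setMode(Mesh.Mode.Lines);
mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(
        0f, 0f, 0f,   10f, 10f, 10f));  // one segment: origin to (10,10,10)
mesh.updateBound();

// 2. A custom texture from a plain Java image (AWTLoader ships with jME3).
Texture2D toTexture(BufferedImage img) {
    return new Texture2D(new AWTLoader().load(img, true));
}
```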
@t0neg0d said:
I just hit a similar issue where I wanted to perspective transform an image and add it as a sub texture to the current texture. JME doesn't provide a method for manipulating images in this way without attaching them to a material, to a geometry and rendering them through a camera for perspective. Unless I missed it (which I hope I did tbh)...


You can add geometry that is not transformed by JME. You may need to tell us what you are actually trying to achieve instead of the ways you've tried that didn't work. Because I'm not sure I understand what you mean.

Why wouldn't you want to attach the image to a material and a geometry?

I understand that the engine is intended to be used instead of everything lower-level, but it just does not provide the ability to write something like Line(0, 0, 0, 10, 10, 10) and get a plain, simple line… maybe I am mistaken. The solution I am using now for drawing lines is this: I create a special node to which I attach all the lines. Each line is created by first making a Line shape, then a Geometry, then assigning a whole Material to that single line, and finally attaching it to the node for display. Each cycle the node detaches all its children, so the net effect is like drawing lines that are not persistent across cycles but are refreshed each frame (roughly the pattern sketched below).
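For context, what that per-line pattern looks like (drawLine and lineNode are hypothetical names; assetManager is the application's asset manager):

```java
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Line;

// One Line shape, one Geometry, one Unshaded material per segment,
// all attached to a node that is cleared each cycle.
Node lineNode = new Node("lines");

void drawLine(Vector3f from, Vector3f to) {
    Geometry geom = new Geometry("line", new Line(from, to));
    Material mat = new Material(assetManager,
            "Common/MatDefs/Misc/Unshaded.j3md");
    mat.setColor("Color", ColorRGBA.White);
    geom.setMaterial(mat);
    lineNode.attachChild(geom);
}

// Each cycle: lineNode.detachAllChildren(), then re-issue drawLine() calls.
```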



As for working with images, I also find jME3 somewhat limiting there, unless you use plain Java facilities like BufferedImage and so on. For textures, 2D and such, I am looking forward to integrating Processing with jME3: drawing with Processing to an offscreen buffer and then using the result inside jME3 scenes. Those two libraries (or platforms) complement each other perfectly. I am not sure whether the integration is simple, or possible at all, but I will definitely give it a try after a while.

All of that is easily possible; just modify the mesh and texture buffers… I mean, how else? If you want to use Processing (somebody already integrated it, btw; search the forum), then what would be the point of having a similar API in jME already?

One mesh with 100 lines (that you update every frame) is, in general, going to be a LOT faster than 100 Line() calls.
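A minimal sketch of that single-mesh approach, assuming a fixed cap on the segment count (MAX_LINES, updateLines, and the streamed-buffer setup are illustrative, not from this thread):

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.util.BufferUtils;
import java.nio.FloatBuffer;

// One Mesh in Lines mode holds every segment; rewrite its position
// buffer each frame instead of creating new Geometry objects.
static final int MAX_LINES = 100;

Mesh lines = new Mesh();
FloatBuffer pos = BufferUtils.createFloatBuffer(MAX_LINES * 2 * 3);

void initLines() {
    lines.setMode(Mesh.Mode.Lines);
    lines.setBuffer(Type.Position, 3, pos);
    lines.setStreamed();  // hint: the buffer will change every frame
}

// endpoints holds x0,y0,z0, x1,y1,z1, ... for each segment
void updateLines(float[] endpoints) {
    pos.clear();
    pos.put(endpoints).flip();
    lines.getBuffer(Type.Position).updateData(pos);
    lines.updateCounts();
    lines.updateBound();
}
```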



As Normen says, there are OpenGL-friendly ways to do the things you want… and they are also doable through JME since it is, in general, architected to use OpenGL efficiently.

Ok, I will try creating the dynamic mesh as you suggest; it seems very logical… I just did not dive deep into the subject and needed a quick five-minute solution. When I am finished prototyping, I will switch to the suggested dynamic mesh creation, because speed will be critical then…



Also, about using Processing: it allows for some neat image manipulation and drawing in a few lines of code, because it is intended for exactly that kind of activity; it has a very powerful 2D drawing facility. If I, say, wanted to create something with generated or dynamic textures, it would fit perfectly. However, its 3D is not very advanced… jMonkeyEngine is the better fit there. So I think that using them together could open up more possibilities for creative and organic design.

As said, somebody already integrated Processing (yes, I know what it is); search the forum.

@pspeed said:
You can add geometry that is not transformed by JME. You may need to tell us what you are actually trying to achieve instead of the ways you've tried that didn't work. Because I'm not sure I understand what you mean.

Why wouldn't you want to attach the image to a material and a geometry?


I actually haven't tried anything yet... and I don't want to attach the image to anything... that I can think of. I'm hoping to take a small image, perspective-transform it, and then drop it into place in another image. I'm really not sure what the fastest approach to doing this would be; the idea is to avoid rendering a quad with a material and see what the difference in performance is like...

Oh... I almost forgot to mention: I really feel like shaolong11 summed up the way we all feel about this... and many, many, many other topics. If we all just take the time to go buy stuff from China, we should find happy love joy in everything we do. This week... or even next month. Buy and be love. Or something...
@t0neg0d said:
I actually haven't tried anything yet... and I don't want to attach the image to anything... that I can think of. I'm hoping to take a small image and perspective transform it and then drop it into place in another image. I'm really not sure what the fastest approach to doing this would be, the idea is to avoid rendering a quad with a material and see what the difference in performance is like..


There are no low-level OpenGL calls to do this either, really. Can you tell us why you think you want to do this? Because it sounds very strange. (Though we are at risk of hijacking this thread, so it may warrant another one.)

A textured quad/triangle is the single fastest thing your graphics card will ever render. You could render a quad (perspective-transformed) to a texture and then use that texture on something else. If the textures can all stay on the GPU during the process, it will likely be faster than anything you could do on the CPU with manual image manipulation followed by an upload to the GPU.
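That render-to-texture path looks roughly like this in jME3 (compare the TestRenderToTexture example that ships with the engine; sizes and names here are placeholders):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

// Offscreen camera and viewport that render before the main view.
Camera offCam = new Camera(512, 512);
offCam.setFrustumPerspective(45f, 1f, 1f, 1000f);
ViewPort offView = renderManager.createPreView("offscreen", offCam);
offView.setClearFlags(true, true, true);

// The framebuffer's color attachment is an ordinary Texture2D that
// stays on the GPU and can be used on any other material afterwards.
Texture2D offTex = new Texture2D(512, 512, Format.RGBA8);
FrameBuffer fb = new FrameBuffer(512, 512, 1);
fb.setDepthBuffer(Format.Depth);
fb.setColorTexture(offTex);
offView.setOutputFrameBuffer(fb);

// quadNode is the scene holding the quad to perspective-transform;
// remember to update its logical/geometric state in the update loop.
offView.attachScene(quadNode);
```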

No need to start a new thread. You gave me all the info I needed. Thanks!