About "Shader": How can we do multi-pass GL rendering in JME3?

I've tried reading back through JME and JME2 and found that in the past we had "Pass". How can we do the same thing here in JME3, where the result texture of the previous pass is the source for the next pass?

For example, in my situation I want to build a "multi-pass shader editor" or a "shader composer" which can import shaders from AMD's RenderMonkey…

Something that works in JME3 is:


→ Is that the way to implement a Pass, or can we do it another way? Can you guys give me some direction?

If addProcessor really is the correct way, I'd like to ask for further info about it:

1. Can we add many processors, or only one?

2. In what order do the processors run after we add them to a viewport?

One more off-topic question:

3. OpenGL has geometry shaders; what about JME3?

Thanks for reading!

atomix said:
Something that works in JME3 is:

-> Is that the way to implement a Pass, or can we do it another way? Can you guys give me some direction?

Yes, you should look into the different processors (PssmShadowRenderer or BasicShadowRenderer, FilterPostProcessor, SimpleWaterProcessor).
They all have some pre-render or post-render passes.
But you should really look at the FilterPostProcessor and the different Filters that are implemented in the engine, because it seems that's what you want to implement.
But honestly, directly importing shaders from RenderMonkey will be very difficult, for two reasons:
- RenderMonkey supports GLSL so that AMD can say it does… but honestly it's really HLSL-oriented, and a lot of basic things do not work for GLSL (for instance, you are limited to RGBA8 textures, texture dimensions must be a power of two, and so on).
- JME 3.0 has a particular way of using shaders, through its material system, so a direct conversion seems a bit difficult.
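For reference, hooking a post-processing filter chain into a viewport is quite compact in jME3. This is an illustrative sketch (assuming a SimpleApplication context); each Filter is effectively a full-screen pass, and the FilterPostProcessor chains them, feeding each filter's output texture into the next:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.post.FilterPostProcessor;
import com.jme3.post.filters.BloomFilter;

public class FilterChainExample extends SimpleApplication {
    @Override
    public void simpleInitApp() {
        // one processor, holding a chain of filters
        FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
        // BloomFilter itself is multi-pass internally (extract, blur, blend)
        fpp.addFilter(new BloomFilter(BloomFilter.GlowMode.Objects));
        viewPort.addProcessor(fpp);
    }
}
```

Writing your own Filter subclass (overriding its init/material hooks) is the usual way to get a custom full-screen pass into that chain.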

atomix said:

1. Can we add many processors, or only one?
2. In what order do the processors run after we add them to a viewport?

1. Yes
2. Processors have several methods that are called in the render flow:
- first, the preFrame method of each processor is called
- then the scene graph is sorted, culling is computed, and the different render queues are built
- then the postQueue method of each processor is called
- then the viewport itself is rendered
- then the postFrame method of each processor is called
The processors are processed in the order you add them to the viewport.
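To make that ordering concrete, here is a small, self-contained Java sketch. This is not jME3 code; the `Proc` interface and `renderFrame` loop are invented stand-ins that mimic how a viewport drives its processor list once per frame:

```java
import java.util.ArrayList;
import java.util.List;

public class ProcessorOrder {
    // Minimal stand-in for jME3's SceneProcessor callbacks.
    interface Proc {
        void preFrame(List<String> log);
        void postQueue(List<String> log);
        void postFrame(List<String> log);
    }

    // Build a processor that records its name at each hook point.
    static Proc named(String name) {
        return new Proc() {
            public void preFrame(List<String> log)  { log.add(name + ".preFrame"); }
            public void postQueue(List<String> log) { log.add(name + ".postQueue"); }
            public void postFrame(List<String> log) { log.add(name + ".postFrame"); }
        };
    }

    // One simulated frame: at each hook point, processors run in the
    // order they were added to the viewport.
    static List<String> renderFrame(List<Proc> processors) {
        List<String> log = new ArrayList<>();
        for (Proc p : processors) p.preFrame(log);
        log.add("sort scenegraph / cull / build render queues");
        for (Proc p : processors) p.postQueue(log);
        log.add("render viewport");
        for (Proc p : processors) p.postFrame(log);
        return log;
    }

    public static void main(String[] args) {
        List<Proc> procs = new ArrayList<>();
        procs.add(named("water"));   // added first, runs first
        procs.add(named("shadow"));  // added second, runs second
        for (String line : renderFrame(procs)) System.out.println(line);
    }
}
```

Running it prints all `water.*` entries before the matching `shadow.*` entry at every hook point, which is exactly the add-order guarantee described above.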

atomix said:
One more off-topic question:
3. OpenGL has geometry shaders; what about JME3?

JME3 has no support for geometry shaders for now; maybe it will be added in a future version (3.1?).

Well, I hope we get geometry shaders, as they would fit quite well into the material system too.

→ Just think of a wall of stones with a heightmap manipulating the mesh directly, instead of doing normal/parallax mapping for the same effect.

@EmpirePhoenix: You see how many people have problems with just OpenGL 2.0 already; from what I heard from Momoko_Fan, it would become even more complicated with geometry shaders… I guess waiting until 3.1 will allow the whole thing to mature a bit before we add it :wink: The same goes for OpenCL for particle emitters etc.

Yes, I agree with that. The only option right now would be to have optional geometry shaders and a global switch for whether they are allowed or not, so the user could set it manually.

By the way, another thing I wanted to ask you, since you mention OpenCL.

First of all, I made a double-precision jbullet version if you need it.

Secondly, and more importantly: would you like to help me optimize some of the jbullet algorithms for OpenCL use, if the hardware supports it? I think it's far more worthwhile to go directly jbullet->OpenCL than jbullet->C++->OpenCL, as we then only have the lwjgl dependency.

Hm, I don't really agree with that… While jbullet is great and all, I'd like to get the developments of the bullet team directly into jME3 instead of coding it all again myself :wink: While natives might be a bit uncool, jME3 handles them well, and bullet is really very cross-compatible, even on Android.

Hm, OK, I never checked how good bullet's cross-platform support is; it might work quite well. However, I already tried the bullet interface with C++ and failed, probably due to my bad C++ skills. It's far harder than the OpenGL stuff, since bullet has no usable C interface, and a C++ interface requires more intelligent wrappers,

since you need some kind of Java-to-native object conversion.

I have started a basic native version of the jme2 jbullet-jme version already: http://code.google.com/p/jbullet-jme/source/browse/#svn%2Fbranches%2Fbullet-jme-work

Lots of stuff is missing, but the TestSimplePhysics test should work; maybe it can help you.



Wow nice :slight_smile:

I will definitely take a look at this, since I can't use the default jme3-jbullet implementation, due to my need for a double-precision simulation, which the standard one just can't supply.

Does it really work well with such a big physics space? I found that by reducing the physics space size I got way better performance; maybe you should think about partitioning your world space and simply using multiple physics spaces?

It actually works great when using the DBVT broadphase: around 3k objects flying around without problems.

(I run it on a 64-bit VM though; on a 32-bit one, performance is around 10-20% lower.)

I decided against using multiple physics spaces, since it's a multiplayer game and I would then have inconsistencies at the borders.

EmpirePhoenix said:
I decided against using multiple physics spaces, since it's a multiplayer game and I would then have inconsistencies at the borders.

I understand. I'd simply have an overlap area where there are actually two physics objects, one in each physics space. But if it works well for you right now… :)

I thought about this as well, but imagine one player being in the overlapping zone,

and two others pushing him from both sides (but from outside the overlapping area); how would I correctly be able to synchronize the two physics spaces?

Although the native bullet has no problems with doubles and large spaces either, I think the performance issues you had were with the sweep broadphases, since they need to create a large array grid and sort it.

Huh? What's the problem? Of course the boundary has to be wider than the ship's size, and you have to sync the data back and forth between the two objects (impacts etc.).

Yes, and there is the problem: I have objects with up to 50 km radius ^^

Well, it doesn't matter as long as double precision works fine for me :wink:

You are a bunch of thread hijackers…

Aye, why doesn't BuddyPress support thread splitting? ^^

normen said:
Aye, why doesn't BuddyPress support thread splitting? ^^

The buddies don't want to be split apart

The case here is that I want multi-pass in a shader, where the "previous pass's result texture" is the "source texture for the next pass".

For example, an "outline-glow shader" which can be used to highlight the selected object.

Like this:


For a while I messed with the code, using the "try & fail" method to figure out how "addProcessor" works… I took a look at the code of "SimpleWaterProcessor" and also examined "TestMonkeyHead".

I guess I can do this through these steps:

1. In the "TestMonkeyHead" init:


2. pass1, pass2, … are instances of PassProcessor, which implements SceneProcessor.

3. PassProcessor has a method "setTexture(mat, Texture2D)" which sets the Texture2D on a material!

4. PassProcessor captures an image and draws it on screen (via a Picture) to check the result!

And… about the shader:

1. A smart (or lazy) way: use only one material, with a parameter named "passNumber" which indicates the current pass, and in the .vert and .frag shaders use "if (passNumber == 1)" to change how the object looks…

> But as in the image above, I realized that this way ends up failing, because between two processors' renderings (I don't know what to call it), the material is the same!!!

2. Use 3 or 4 materials, like:
Pass 1: render the monkey with "Mat1" in Viewport1 (pre-view) -> Texture1 (depth)
Pass 2: render the monkey with "Mat2" in Viewport2 (pre-view) -> Texture2 (edge-find)
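Plan 2 can be sketched with jME3's pre-views. This is an illustrative fragment only (assuming a SimpleApplication context where `monkey`, `mat1`, and `mat2` already exist; the parameter name "DepthTexture" comes from the Outline material definition in this thread):

```java
// Hypothetical sketch: chain two render-to-texture passes so that the
// result texture of pass 1 becomes an input texture of pass 2.

// pass 1: render the monkey with mat1 into an offscreen texture
Camera offCam = new Camera(512, 512);
offCam.copyFrom(cam);

Texture2D tex1 = new Texture2D(512, 512, Format.RGBA8);
FrameBuffer fb1 = new FrameBuffer(512, 512, 1);
fb1.setDepthBuffer(Format.Depth);
fb1.setColorTexture(tex1);

ViewPort pass1 = renderManager.createPreView("Pass1View", offCam);
pass1.setClearFlags(true, true, true);
pass1.setOutputFrameBuffer(fb1);

Spatial scene1 = monkey.clone();
scene1.setMaterial(mat1);        // depth-pass material
pass1.attachScene(scene1);
scene1.updateGeometricState();   // scenes in pre-views must be updated manually

// pass 2: the second material samples the result of pass 1
mat2.setTexture("DepthTexture", tex1);
// ...create a second pre-view with mat2 the same way, or apply mat2
// to the geometry in the main viewport.
```

Pre-views render before the main viewport, so by the time the main scene (or the second pre-view) draws, tex1 already holds the first pass's output for that frame.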

If you guys can understand what I'm trying to say :D, please point me in some direction…

Questions:
1) Is it true that the material doesn't change between two processors' renderings?
2) Can the monkey have 3-4 materials to switch between and render to texture???

You may want to take a look at my code. It's very similar to how the WaterProcessor works, but… :D


[java]MaterialDef Outline {

    MaterialParameters {
        Color OutlineColor
        Int PassNumber
        Texture2D DepthTexture
        Texture2D EdgeTexture
        Float m_time
    }

    Technique {
        VertexShader GLSL100:   Outline.vert
        FragmentShader GLSL100: Outline.frag

        Defines {
            PASS : PassNumber
        }

        WorldParameters {
        }
    }
}[/java]


[java]uniform int m_PassNumber;
uniform sampler2D DepthTexture;
uniform sampler2D EdgeTexture;

varying vec2 Texcoord;
varying vec4 pos;

void main(){
    if (m_PassNumber == 1) {
        vec4 newpos = pos / pos.w;
        float depth = (pos.z - 1.0) / 999.0;
        gl_FragColor = vec4(depth, depth, depth, depth);
    } else if (m_PassNumber == 2) {
        gl_FragColor = vec4(1.0, 1.0, 0.5, 1.0);
    } else {
        gl_FragColor = vec4(1.0, 0.5, 0.5, 1.0);
    }
}[/java]

[java]public class PassProcessor implements SceneProcessor {

    protected RenderManager rm;
    protected ViewPort vp;
    protected Spatial passScene;
    protected ViewPort passView;
    protected FrameBuffer passBuffer;
    protected Camera passCam;
    protected Texture2D resultTexture;
    protected Texture2D inTexture;
    protected int renderWidth = 512;
    protected int renderHeight = 512;
    protected float speed = 0.05f;
    protected Vector3f targetLocation = new Vector3f();
    protected AssetManager manager;
    protected Material material;
    protected boolean debug = false;
    private Picture dispPic;
    private String passName;
    private int passNumber;
    float time = 0;

    public PassProcessor(AssetManager manager, Spatial passScene, Material mat, String passName, int passNumber) {
        this.manager = manager;
        this.passScene = passScene;
        this.material = mat;
        this.passName = passName;
        this.passNumber = passNumber;
        //material = new Material(manager, "Outline.j3md");
    }

    public void initialize(RenderManager rm, ViewPort vp) {
        this.rm = rm;
        this.vp = vp;

        loadTextures(manager);
        createTextures();
        createPreViews();

        if (debug) {
            dispPic = new Picture(passName + "Texture");
            dispPic.setTexture(manager, resultTexture, false);
        }
    }

    public void reshape(ViewPort vp, int w, int h) {
    }

    public void preFrame(float tpf) {
        time = time + (tpf * speed);
        if (time > 1f) {
            time = 0;
        }
        material.setFloat("m_time", time);
    }

    protected void destroyViews() {
    }

    public boolean isInitialized() {
        return rm != null;
    }

    public void postQueue(RenderQueue rq) {
        Camera sceneCam = rm.getCurrentCamera();
        //update refraction cam
    }

    public void cleanup() {
    }

    public void postFrame(FrameBuffer out) {
        if (debug) {
            if (passNumber == 1) displayMap(rm.getRenderer(), dispPic, 64);
            if (passNumber == 2) displayMap(rm.getRenderer(), dispPic, 256);
            if (passNumber == 3) displayMap(rm.getRenderer(), dispPic, 448);
        }
    }

    public boolean isDebug() {
        return debug;
    }

    public void setDebug(boolean debug) {
        this.debug = debug;
    }

    //debug only: displays maps
    protected void displayMap(Renderer r, Picture pic, int left) {
        Camera cam = vp.getCamera();
        rm.setCamera(cam, true);
        int h = cam.getHeight();
        pic.setPosition(left, h / 20f);
        pic.updateGeometricState();
        rm.renderGeometry(pic);
        rm.setCamera(cam, false);
    }

    protected void loadTextures(AssetManager manager) {
        //inTexture = (Texture2D) manager.loadTexture("Common/MatDefs/Water/Textures/gradient_map.jpg");
    }

    protected void createTextures() {
        resultTexture = new Texture2D(renderWidth, renderHeight, Format.RGBA8);
    }

    public Texture2D getResultTexture() {
        return resultTexture;
    }

    protected void applyTextures(Material mat) {
        mat.setTexture(passName + "Texture", resultTexture);
    }

    protected void createPreViews() {
        passCam = new Camera(renderWidth, renderHeight);

        // create a pre-view: a view that is rendered before the main view
        passView = rm.createPreView(passName + "View", passCam);

        // create offscreen framebuffer
        passBuffer = new FrameBuffer(renderWidth, renderHeight, 1);

        // setup framebuffer to use texture
        passBuffer.setColorTexture(resultTexture);

        // set viewport to render to offscreen framebuffer
        passView.setOutputFrameBuffer(passBuffer);

        // attach the scene to the viewport to be rendered
        passView.attachScene(passScene);

        material.setInt("PassNumber", passNumber);

        rm.renderViewPort(passView, 1);
    }

    public void setPassScene(Spatial spat) {
        passScene = spat;
    }
}[/java]
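For what it's worth, the wiring I have in mind for a chain of these processors (hypothetical usage, based on the class above) would look like:

```java
// Hypothetical usage of PassProcessor: processors run in the order they
// are added, so pass 1's result texture can feed pass 2's material.
PassProcessor pass1 = new PassProcessor(assetManager, scene, mat1, "Depth", 1);
PassProcessor pass2 = new PassProcessor(assetManager, scene, mat2, "Edge", 2);
viewPort.addProcessor(pass1);
viewPort.addProcessor(pass2);

// Wire the result of pass 1 into the material of pass 2 ("DepthTexture"
// is the parameter name from the Outline material definition above).
// Note: resultTexture only exists after the processor is initialized,
// i.e. after the first render, so this must be deferred until then.
mat2.setTexture("DepthTexture", pass1.getResultTexture());
```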