On FrameGraph in the future of JME

I’m not sure whether this deserves to be a separate topic, but I’m creating this thread anyway. Since there is some contention around that PR (see the discussion at the end of the PR), I’m temporarily setting it to “Draft” status until we finalize the public API for the FG system in JME3, at which point I’ll re-append the relevant code and move it back to “Ready for Review”.

(FG matters more for the engine than for gameplay; not everyone will need to write a “custom pipeline”. However, I believe that when a new technology enters JME3, everyone should be made aware of it, so I created this post here.)

FrameGraph is used to optimize the entire rendering pipeline on a per-frame basis. It aims to decouple the engine’s rendering features from both the upper-layer rendering logic and the lower-level resources (shaders, render contexts, graphics APIs, etc.), enabling further optimizations, the most important being multi-threaded and parallel rendering.
Below is the Frostbite engine architecture diagram from GDC 2017 showing this:


Since JME3 only has an OpenGL backend, there is no true GFX API (RHI) layer, so the conceptual mapping to a future JME3, with some adjustments, is as follows:

You may wonder why MaterialSystem and ShaderSystem have mutual dependencies with FG. They don’t strictly need to depend on each other, but consider the following scenario:
We want to develop a Feature that runs compute shaders and produces result data (a ShaderResourceView, SRV), then calls Material->SetSRV(). But GameObj->Material doesn’t know about or have that SRV initially. Instead, GameObj->Material can register its dependent resources (SRVs) with FG up front; later, when FG execution reaches the right stage, FG dispatches the result data (SRV) to the corresponding GameObj->Material for use.
So there can be mutual dependencies between MaterialSystem, ShaderSystem, and FG to enable flexible resource management in scenarios like this.
The traditional approach is: after FG finishes a frame, call findSRV(TargetSRVName) on the FG in the next frame, and manually iterate Scene->Node->Material->setSRV(). But this requires writing ugly loops that have nothing to do with business logic. More often we would rather declare the SRV in the MaterialDef, for example:
MaterialDef {
    SRV0 = TargetSRV;
}
So with the mutual dependencies, FG can directly dispatch resources like SRV to the right Material instance that depends on it based on the MaterialDef, without requiring manual dispatching code.
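To make the dispatch idea concrete, here is a minimal, purely illustrative Java sketch (none of these class or method names are the actual JME3 or FG API): a dispatcher pushes a produced SRV to every material that declared a dependency on it, so the application never has to iterate Scene->Node->Material manually.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Stand-in for a material whose (hypothetical) MaterialDef declares SRV dependencies.
class Material {
    final String name;
    final Set<String> declaredSrvs;                 // SRV names from the MaterialDef
    final Map<String, Object> boundSrvs = new HashMap<>();

    Material(String name, String... srvs) {
        this.name = name;
        this.declaredSrvs = new HashSet<>(Arrays.asList(srvs));
    }
}

class SrvDispatcher {
    // SRV name -> materials that declared a dependency on it
    private final Map<String, List<Material>> subscribers = new HashMap<>();

    // Called once per material, e.g. when its MaterialDef is loaded.
    void register(Material m) {
        for (String srv : m.declaredSrvs) {
            subscribers.computeIfAbsent(srv, k -> new ArrayList<>()).add(m);
        }
    }

    // Called by the frame graph once a pass has produced the resource:
    // the equivalent of Material.setSRV(), but driven by the FG itself.
    void dispatch(String srvName, Object srv) {
        for (Material m : subscribers.getOrDefault(srvName, List.of())) {
            m.boundSrvs.put(srvName, srv);
        }
    }

    public static void main(String[] args) {
        Material rock = new Material("rock", "TargetSRV");
        Material water = new Material("water");     // declares nothing
        SrvDispatcher d = new SrvDispatcher();
        d.register(rock);
        d.register(water);
        d.dispatch("TargetSRV", "computeResult");
        System.out.println(rock.boundSrvs.get("TargetSRV")); // prints computeResult
        System.out.println(water.boundSrvs.isEmpty());       // prints true
    }
}
```

The point of the sketch is only the inversion of control: the material registers its needs once, and the FG pushes results to it, rather than gameplay code pulling results out each frame.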

Since FG is implemented slightly differently in different engines, I will briefly describe the FG in commercial engines I have used before (Unreal Engine and Unity).

UnrealEngine RDG
In UE, FG is called RDG (Render Dependency Graph), and below is its basic structure:

// Instantiate the resources we need for our pass
FShaderParameterStruct* PassParameters = GraphBuilder.AllocParameters<FShaderParameterStruct>();

// Fill in the pass parameters
PassParameters->MyParameter = GraphBuilder.CreateSomeResource(MyResourceDescription, TEXT("MyResourceName"));

// Define the pass and add it to the RDG builder
GraphBuilder.AddPass(
    RDG_EVENT_NAME("MyRDGPassName"),
    PassParameters,
    ERDGPassFlags::Raster,
    [PassParameters, OtherDataToCapture](FRHICommandList& RHICmdList)
{
    // ... pass logic here, render something! ...
});


As you can see, a standard UE RDG is created, used, and destroyed on the stack every frame. Below is the main frame logic of the MobileRenderer in UE:

void FMobileSceneRenderer::Render(FRHICommandListImmediate& RHICmdList)
{
    ...
    if (bDeferredShading)
    {
        ...
        FRDGBuilder GraphBuilder(RHICmdList);
        ComputeLightGrid(GraphBuilder, bCullLightsToGrid, SortedLightSet);
        GraphBuilder.Execute();
    }

    ...

    if (bShouldRenderVelocities)
    {
        FRDGBuilder GraphBuilder(RHICmdList);

        FRDGTextureMSAA SceneDepthTexture = RegisterExternalTextureMSAA(GraphBuilder, SceneContext.SceneDepthZ);
        FRDGTextureRef VelocityTexture = TryRegisterExternalTexture(GraphBuilder, SceneContext.SceneVelocity);

        if (VelocityTexture != nullptr)
        {
            AddClearRenderTargetPass(GraphBuilder, VelocityTexture);
        }

        // Render the velocities of movable objects
        AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_Velocity));
        RenderVelocities(GraphBuilder, SceneDepthTexture.Resolve, VelocityTexture, FSceneTextureShaderParameters(), EVelocityPass::Opaque, false);
        AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_AfterVelocity));

        AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_TranslucentVelocity));
        RenderVelocities(GraphBuilder, SceneDepthTexture.Resolve, VelocityTexture, GetSceneTextureShaderParameters(CreateMobileSceneTextureUniformBuffer(GraphBuilder, EMobileSceneTextureSetupMode::SceneColor)), EVelocityPass::Translucent, false);

        GraphBuilder.Execute();
    }
    ...
    FRDGBuilder GraphBuilder(RHICmdList);
    {
        for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ViewIndex++)
        {
            RDG_EVENT_SCOPE_CONDITIONAL(GraphBuilder, Views.Num() > 1, "View%d", ViewIndex);
            PostProcessingInputs.SceneTextures = MobileSceneTexturesPerView[ViewIndex];
            AddMobilePostProcessingPasses(GraphBuilder, Views[ViewIndex], PostProcessingInputs, NumMSAASamples > 1);
        }
    }
    GraphBuilder.Execute();
    ...
}

You can see that in UE you can create Passes on the stack via multiple different FGs. You may wonder when execution actually happens. For UE, one purpose of using RDG is to utilize multi-threading, as follows:
image

In the diagram without RDG, rendering features are written on a single timeline where both setup and execution are done in place. RHI commands are recorded and submitted directly in line with pipeline branching and resource allocation.

With RDG, the setup code is separated from execution through user-supplied pass-execution lambdas. RDG performs an additional compilation step before invoking those lambdas, and execution is spread across several threads, each calling into the lambdas to record render commands into RHI command lists.
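The setup/execute split described above can be sketched in a few lines of plain Java (all names here are made up for illustration, not the UE API): addPass() only records a name and a body lambda during setup, and nothing runs until execute() invokes the recorded bodies. A real compile step between the two would additionally cull unused passes and allocate resources.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal deferred-execution graph builder: record now, run later.
class MiniGraphBuilder {
    private static class Pass {
        final String name;
        final Runnable body;
        Pass(String name, Runnable body) { this.name = name; this.body = body; }
    }

    private final List<Pass> passes = new ArrayList<>();
    final List<String> log = new ArrayList<>();   // observable order of events

    void addPass(String name, Runnable body) {    // setup phase: record only
        passes.add(new Pass(name, body));
        log.add("recorded " + name);
    }

    void execute() {                              // execute phase: run the lambdas
        for (Pass p : passes) {
            log.add("executing " + p.name);
            p.body.run();                         // a real engine records RHI commands here
        }
    }

    public static void main(String[] args) {
        MiniGraphBuilder graph = new MiniGraphBuilder();
        graph.addPass("Velocity", () -> {});
        graph.addPass("PostProcess", () -> {});
        graph.execute();                          // bodies run only now
        System.out.println(graph.log);
    }
}
```

Because the bodies are inert data until execute(), the builder is free to reorder, cull, or hand them to worker threads in between, which is exactly what makes the extra compilation step possible.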

As you can see, FG utilizes multi-threading to compile resources (SRVs) in parallel. One advantage of modern graphics APIs is parallel resource handling. For the OpenGL RHI, UE does SRV compilation serially internally but can parallelize it by launching multiple GL contexts.

After that, FG executes the user’s lambda expressions, submitting commands to the RHIThread (note FG runs on the RenderThread). The RHIThread will try to execute more graphics commands in the next frame or when idle.

UE’s RDG system is deeply integrated into many modules, not just rendering. Below is one common usage:


class Actor : public UActorComponent
{
public:
    void Tick()
    {
        {
            // compute task
            FRDGBuilder GraphBuilder(RHICmdList);
            FMyCustomSimpleComputeShaderTask* PassParameters = GraphBuilder.AllocParameters<FMyCustomSimpleComputeShaderTask>();

            PassParameters->RectMinMaxBuffer = RectMinMaxBuffer;
            ....

            GraphBuilder.AddPass(
                RDG_EVENT_NAME("MyCustomSimpleCustomShaderTask"),
                PassParameters,
                ERDGPassFlags::Copy,
                [PassParameters, RectMinMaxToRenderSizeInBytes, RectMinMaxToRenderDataPtr](FRHICommandListImmediate& RHICmdList)
            {
                void* DestBVHQueryInfoPtr = RHILockVertexBuffer(PassParameters->RectMinMaxBuffer->GetRHIVertexBuffer(), 0, RectMinMaxToRenderSizeInBytes, RLM_WriteOnly);
                FPlatformMemory::Memcpy(DestBVHQueryInfoPtr, RectMinMaxToRenderDataPtr, RectMinMaxToRenderSizeInBytes);
                RHIUnlockVertexBuffer(PassParameters->RectMinMaxBuffer->GetRHIVertexBuffer());
            });

            GraphBuilder.Execute();
        }

        {
            FRDGBuilder GraphBuilder(RHICmdList);
            ...other FG Logic
            GraphBuilder.Execute();
        }
    }
};

Two RDGs are executed and destroyed on the stack, while the actual work is submitted to the RenderThread for parallel setup/compilation/optimization/culling, which then calls back into the user lambdas to submit commands to the RHIThread for execution.
So RDG provides a way to parallelize and optimize work across threads before final submission to the GPU via the RHIThread, and executing and destroying the builders on the stack each frame avoids state persisting across frames.

Resources, however, are not created and compiled from scratch every frame; RDG manages and releases them automatically. If you don’t want a resource to be created every frame, you can look up previously registered or created SRV resources via RegisterExternalXXX() and TryRegisterExternalXXX() and use them across multiple frames, such as for the GBuffer.
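The find-or-create idea behind reusing external resources can be illustrated with a tiny Java sketch (class and method names are invented, not RDG’s API): a resource requested under the same key is handed back on later frames instead of being re-allocated.

```java
import java.util.HashMap;
import java.util.Map;

// Cache keyed by resource name: allocation happens only on first request.
class ExternalResourceCache {
    private final Map<String, Object> resources = new HashMap<>();
    int creations = 0;                       // counts actual allocations

    Object findOrCreate(String key) {
        return resources.computeIfAbsent(key, k -> {
            creations++;                     // only hit on the first frame
            return new Object();             // stands in for a GPU texture/buffer
        });
    }

    public static void main(String[] args) {
        ExternalResourceCache cache = new ExternalResourceCache();
        Object frame1 = cache.findOrCreate("GBufferDepth");
        Object frame2 = cache.findOrCreate("GBufferDepth");
        System.out.println(frame1 == frame2);    // prints true
        System.out.println(cache.creations);     // prints 1
    }
}
```

Transient per-pass resources would bypass such a cache entirely, which is the split between per-frame transients and persistent externals that the post describes.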

Unity3d RenderGraph
Next I will describe the RenderGraph (FG) in Unity3d. Unity uses C#, which by default, like Java, does not allocate objects on the stack (although C# can achieve stack allocation through stackalloc). So Unity3d’s FG interface is not created and destroyed per frame on the stack the way UE’s is. Below is a sample custom pipeline in Unity3d:

public class MyRenderPipeline : RenderPipeline
{
    RenderGraph m_RenderGraph;

    void InitializeRenderGraph()
    {
        m_RenderGraph = new RenderGraph("MyRenderGraph");
    }

    void CleanupRenderGraph()
    {
        m_RenderGraph.Cleanup();
        m_RenderGraph = null;
    }
}

The RenderGraph in Unity3d is relatively simple. It divides the resources for Passes into external and internal resources. External resources can be passed to the current Pass from other places (like another Pass or global resources), while internal resources can only be accessed by the current Pass.
After that, you can create a Pass by adding a lambda expression, or you can create a custom Pass by adding a static function, like below:

RenderGraph.AddRenderPass("Custom Pass", () => {
  // Pass logic
});

// Or

static void CustomPass(RenderGraphContext context) {
  // Pass logic 
}

RenderGraph.AddRenderPass<CustomPass>(); 

Consistent with Unreal Engine, most SRVs in Unity3d are created by the RenderGraph, as shown in the following functions:

RenderGraph.CreateTexture(desc);

RenderGraph.CreateSharedTexture(desc);

RenderGraph.CreateUniformBuffer(xxx);

RenderGraph.CreateComputeBuffer(xxx);

Since Unity3d’s source code is not open source (it is only available under a commercial license), I cannot provide internal details here. But it should be much simpler than Unreal Engine’s, which has pros and cons: it may not be as thoroughly optimized and powerful as Unreal Engine’s.

For a short description of the RenderGraph in Unity3d, please refer to: Benefits of the render graph system | Core RP Library | 10.2.2

In Unity3D, the RenderGraph is part of the Scriptable Render Pipeline (SRP) rendering pipeline.
The RenderGraph allows defining rendering passes and representing resource dependencies between passes within the SRP pipeline.
Some key points:

  • The RenderGraph is built on top of the SRP pipeline to define passes and resources.
  • Passes are added via RenderGraph.AddRenderPass.
  • Texture resources are created via APIs like RenderGraph.CreateTexture.
  • The RenderGraph handles scheduling and execution of passes.
  • Resources persist across frames by default, unlike UE which destroys them each frame.

So the Unity3D RenderGraph is built internally within the SRP pipeline to define pass workflow and resources, simplifying custom rendering pipeline development. It has some different design tradeoffs compared to UE’s RDG but solves the same core problems.

JME3 FG
Alright: there is no complete JME3 FG yet. I only added a basic framework in this PR, and have not exposed public APIs to the application layer. So on this point I can only describe my views on the design of the JME3 FG system.

Since Java cannot allocate objects on the stack, I could provide lambda-style interfaces for adding Passes, but the actual efficiency of that approach is uncertain.
Additionally, given Java’s lack of fine-grained memory control, for the public API I lean towards making the FrameGraph a member variable: register and create the required CustomPasses and Parameters up front, then each frame look up the registered Passes and Parameters, reorganize the FrameGraph, and execute it.
Below is the basic intended usage in the current design:

public class MyCustomPass extends FGRenderQueuePass{
    // todo: use FG SRVs (the best approach is for the FG to create and manage
    // resources directly, but for now we can only call GL natively to create them)
    private FGFrameBuffer fgPrePassBuffer;
    private FGUAV fgPrePassColor = null;
    private FGUAV fgPrePassDepth = null;
    // or use native SRVs
    private FrameBuffer prePassBuffer;
    private Texture2D prePassColor = null;
    private Texture2D prePassDepth = null;

    MyCustomPass(){
        // todo: create and register resources through FG:
        fgPrePassColor = FGBuilderTools.findOrAddSRV<FGUAV>("prePassColor", Desc);
        // or create directly using native methods (not recommended)
        prePassColor = new Texture2D(w, h, Image.Format.RGBA16F);
        ...

        // register Sources (indicates that this SRV is exposed externally through this Pass)
        registerSource(new FGRenderTargetSource(srvName, prePassColor));
        ...

        // register Sinks (similar to PassParameters in UE, but UE allocates them on the
        // stack every frame, which Java can't do in a memory-friendly way, so pass
        // parameters are added this way instead)
        registerSink(new FGTextureBindableSink<FGRenderTargetSource.RenderTargetSourceProxy>(sinkName, sinkData));
    }

    @Override
    public void prepare(FGRenderContext renderContext){
        // todo: at this stage, with APIs like Vulkan or DX12, we could compile the SRVs
        // (shader resource views) or other resources required by the passes in parallel
    }

    @Override
    public void dispatchPassSetup(RenderQueue renderQueue) {
        // todo: at this stage, determine which objects from the RenderQueue can enter
        // this pass, and generate MeshDrawCommands for them
    }

    @Override
    public void execute(FGRenderContext renderContext) {
        // todo: at this stage, the commands could be submitted to another thread (e.g.
        // an RHIThread) for multi-threaded command drawing; currently JME3 only has
        // single-threaded OpenGL rendering
    }
}


public class TestMyRenderPipeline extends SimpleApplication{
    FrameGraph fg;
    MyCustomPass myCustomPass;

    public TestMyRenderPipeline(){
        fg = new FrameGraph(fgRenderContext);
        myCustomPass = new MyCustomPass("MyFirstCustomPass");
        // or ↓
        myCustomPass = FGBuilderTools.findOrAdd<MyCustomPass>("MyFirstCustomPass");
    }

    public void simpleUpdate(float tpf){
        // Following UE and Unity3d, the FG is cleared and rebuilt every frame. In Java we
        // can't allocate objects on the stack, so we use member variables and subclass
        // FGRenderQueuePass for custom Passes, rather than creating lambda-expression
        // objects every frame (I'm not sure what the actual overhead of that approach
        // would be in Java).
        fg.reset();
        if(usePrePass){
            // sinkData is an SRV obtained from another Pass, or from global resources
            // (FGBuilderTools.findOrAdd)
            myCustomPass.setSinkLinkage(sinkName, sinkData);
            fg.addPass(myCustomPass);
        }
        // ... add other passes

        fg.finalize();
        fg.execute();
    }
}

You can see the FG in JME3 contains FGSink and FGSource; essentially they are the PassParameters and PassOutputs of UE and Unity3d. C++ allows stack allocation, so in UE you can create PassParameters and PassOutputs directly in local scope each frame, but for Java this may cause performance issues. So I split them into classes (FGSink and FGSource), which you can subclass to implement custom PassParameters and PassOutputs. I won’t elaborate further on how to use them for now.
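To make the sink/source pairing concrete, here is a hypothetical, self-contained Java sketch (the class names echo the post, but the bodies are invented and are not the PR’s code): a sink is a named parameter slot on a pass, a source is a named output of a pass, and bind() copies the source’s payload into the sink. Both live as long-lived members rather than per-frame stack objects.

```java
import java.util.HashMap;
import java.util.Map;

// A named output of a pass (e.g. a render target).
class SimpleSource {
    final String name;
    final Object payload;
    SimpleSource(String name, Object payload) { this.name = name; this.payload = payload; }
}

// A named parameter slot on a pass, filled in by bind().
class SimpleSink {
    final String name;
    private Object bound;
    SimpleSink(String name) { this.name = name; }

    void bind(SimpleSource source) { this.bound = source.payload; }
    Object getBound() { return bound; }
}

// A pass owns its sinks (parameters) and sources (outputs) as members.
class SimplePass {
    final String name;
    final Map<String, SimpleSink> sinks = new HashMap<>();
    final Map<String, SimpleSource> sources = new HashMap<>();
    SimplePass(String name) { this.name = name; }

    void registerSink(SimpleSink s) { sinks.put(s.name, s); }
    void registerSource(SimpleSource s) { sources.put(s.name, s); }
}
```

For instance, an SSAO pass’s depth sink could be bound either to a GBuffer pass’s depth source or to a pre-pass depth source, which is exactly the flexibility the sink/source split is meant to buy.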

It’s worth noting that we are only discussing the public API of FG in JME3 here; for the internal implementation details I may further reference UnrealEngine’s encapsulation. We need some understanding of this new thing so we know which JME3 modules will need adjustment for a custom pipeline and FG system: adjust the MaterialSystem so SRVs can be defined in MaterialDef; adjust the ShaderSystem to associate it with FG; initiate SRV compilation and shader compilation through FG; and provide a unified interface for later parallel compilation (OpenGL doesn’t support parallel compilation and may require multiple GL contexts, but from a design perspective the FG should be oriented toward a multi-threaded architecture).

I will finalize the FG public API in this post. Once the public API is finalized, I may implement these public APIs later and submit them to this PR for review again. If you have any better suggestions, feel free to provide feedback. :grinning:

11 Likes

It’s worth mentioning that I’m not sure if the FG public API I implemented in JME3 conforms to the philosophy of Java programmers or current JME, so I didn’t provide the API for allocating objects on the stack like in ue or unity3d. If you have better suggestions, I will collect them here and refer to them to adjust the public API. :wave:

6 Likes

Thank you for the summary.

         prePassColor = FGBuilderTools.findOrAddSRV<FGUAV>("prePassColor", Desc);
         // Creating directly using native methods (not recommended) 
         prePassColor = new Texture2D(w, h, Image.Format.RGBA16F);

if possible, in the final api i would suggest that there is only one way to create the resources. (If the result of the two is the same). It will make sure that all tutorials and forum posts use the same api, and it would limit the follow up question: “What is the difference between these two” or “The other posts does not use findOrAdd”

         registerSource(new FGRenderTargetSource(srvName, prePassColor));
        registerSink(new FGTextureBindableSink<FGRenderTargetSource.RenderTargetSourceProxy>(sinkName, sinkData));

Can you explain a bit more on those two? i think the first one is quite clear, (is it separated from the getOfFind to allow internal use only resources?)

In the registerSink, what data is contained in the sinkData, and does sinkName relate to srvName?

A secondary pass would get access to the resources registered, or only on those with registered Sink?

How does the setSinkLinkage work? Do i explicitly have to target source and destination? for the SSAO normal’s for example the source could be gBuffer normals, or a separate pre pass, or an extended forward pass that writes normals to another render target.

In the final solution, would you still use the FilterPostProcessor, or embed various post processing passes into the framegraph?

fg.finalize();

finalize is a nogo in java because it is an internal method. You need another name for that. I have seen .compile() being used.

1 Like

Yes, for this issue I listed two methods:
Create and use resources the native way: Texture2D t = new Texture2D();
Create and use resources the FG way: FGSRVTexture2D t = FGBuilderTool.findOrAdd<>();
In other engines it is recommended to create and use resources through the FG interfaces instead of the native APIs. Of course, there is one more option:
Texture2D t = new Texture2D();
FGSRVTexture2D tSRV = FGBuilderTool.tryGetProxySRV<>(t);
That is, submit native resources to the FG system to be proxied or managed. The benefit is that other places can then query the corresponding FG resources through the FG system.
So I can either provide a set of FG tryGetProxyXXX interfaces to proxy/manage native API resources, or disallow creating and using resources the native way when using FG.

For UE (C++), a Pass needs an FParameters, its code is roughly as follows:

FRDGBuilder GraphBuilder(RHICmdList);
auto* SSAOPassParameters = GraphBuilder.AllocParameters<FCustomSSAOPass::FParameters>(); 
SSAOPassParameters->normalMap = GraphBuilder.TryGetExternalTexture(srvName);
...other
GraphBuilder.AddPass("MyCustomPass", SSAOPassParameters, [=](FRHICommandList& RHICmdList){ /* ... */ });
...other

This code is created and destroyed on the stack every frame, but Java does not allow creating and explicitly destroying objects on the stack, so I turned them into class member variables:

class CustomSSAOPass extends FGRenderQueuePass{
    public CustomSSAOPass(){
        super("MySSAOPass");
        // Add the Parameters that CustomSSAOPass requires
        registerSink(new FGSink1());
        registerSink(new FGSink2());
    }
}

image
You can understand it as: a Sink is similar to an InputParam in a ShaderNode, and an FGPass is similar to a ShaderNode…
An FGSink can only be bound to one FGSource. An FGSink can be a TextureParam, IntParam, RenderStateParam, FBParam, FBBlitParam, FBCopyParam, some other UniformParam, and so on; an FGSource essentially describes how to bind to an FGSink.
For example, an FGSSAOPass needs a DepthSink, which can be obtained from a GBufferDepthSource or from a PrePassDepthSource.
Currently, you only need to subclass these interfaces and override the bind() function to implement the binding with the FGSource:

public abstract void bind(FGSource fgSource);

public abstract void postLinkValidate();

public boolean isLinkValidate() {
    return bLinkValidate;
}

public FGBindable getBind() {
    return null;
}

For ease of use, I will add or improve a set of common FGSink and FGSource subclasses later, such as FGTextureBindableSink, FGVarBindableSink, etc.
FGSink does not actually need to contain any sinkData; as long as your custom FGSink and custom FGSource complete the data binding through the bind() function, that is fine. The example I provided is just reference code for the built-in FGTextureBindableSink class…
An FGPass needs a set of FParameters. In JME, FParameters exist as member-variable Sinks, with the srvName/paramName as the name of each Param (i.e. each Sink).
Assume you have a CustomSSAOPass and added three Params for CustomSSAOPass:

registerSink(new Sink1(srvName1/paramName1));
registerSink(new Sink2(srvName2/paramName2));
registerSink(new Sink3(srvName3/paramName3));

When organizing the FrameGraph every frame, you set the FGSource that param1/param2/param3 bind to via customSsaoPass.setSinkLinkage(), keyed by paramName (which is also the sinkName, i.e. the srvName in the code example above). Note that, as with Material, you do not have to bind an FGSource for every FGSink; empty bindings are allowed.
APass.setSinkLinkage()'s first parameter is the registeredName (i.e. sinkRegisterName, i.e. sinkName, i.e. paramName); the second parameter is a String target, where target = PassName + "." + PassFGSourceName (I know this is crude; maybe I should change it to a better public API). This binds the target FGSource from the target Pass to APass's sink.
Each FGPass can also provide a set of FGSources externally for other FGPasses to bind as FGSinks (note, just for association, the actual callback bind is executed later).
It is worth mentioning that when target = "$" + "." + FGSourceName, the FGSourceName is looked up in the global resources, so you can bind Params this way:

MyPass.setSinkLinkage(sinkName/paramName, FGBuilderTool.tryGetSRV<>("XXXName"));
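One way the target-string convention described above could be resolved is sketched below in plain Java (purely illustrative; the class and field names are invented, not the PR’s code): "PassName.SourceName" looks up a source on a named pass, while "$.SourceName" falls back to a global resource table.

```java
import java.util.HashMap;
import java.util.Map;

// Resolves "PassName.SourceName" or "$.SourceName" target strings.
class LinkResolver {
    final Map<String, Map<String, Object>> passSources = new HashMap<>();
    final Map<String, Object> globals = new HashMap<>();

    Object resolve(String target) {
        int dot = target.indexOf('.');
        String passName = target.substring(0, dot);
        String sourceName = target.substring(dot + 1);
        if (passName.equals("$")) {
            return globals.get(sourceName);            // global resource lookup
        }
        Map<String, Object> sources = passSources.get(passName);
        return sources == null ? null : sources.get(sourceName);
    }
}
```

A null result here corresponds to the "empty binding" case the post allows: a sink whose source target cannot be found simply stays unbound.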

I may make the FG a member variable of SceneProcessor and execute its own FG internally in SceneProcessor to replace the original process. In fact, JME has some legacy code; if you check it, you will find:


image
I may transfer these codes to FG.

Alright. I’m not sure whether finalize or compile is the better name, but compile fits the meaning of the FG compilation stage better, so I will adjust the naming here.

An APass provides its FGSources (i.e. the outputs of that Pass) externally through registerSource, so that another BPass can bind one of APass’s FGSources to one of BPass’s FGSinks through the setSinkLinkage function. You can also register resources directly with FGBuilderTool, so they can be found globally from anywhere.

3 Likes

Please note that in the FG concept, creating SRV resources through FGBuilderTool.findOrAdd<>() does not create them immediately. Rather, when the FG executes finalize()/compile(), Passes are pruned, resources are filtered, and resources are compiled or created (in parallel) on first use. Currently my FG implementation does not handle this part, because JME does not have a multithreaded rendering architecture. So I will design the public API according to the FG concept, but the internal implementation may not completely follow it unless the rendering architecture is moved to a multithreaded mode in the future…

Finalize is not an option because it is reserved to be called by the Java garbage collector.

I think I understood the basic concept. I never needed such a verbose/complex binding structure, so it is your call whether it is needed or not. As I have never used the API, the first thing that came to mind is that if I have to wire things together manually, I could also just call a getter/setter on some resource HashMap.

That is mostly because I stay in OpenGL space. I know the requirements for modern DX/Vulkan are quite different.

1 Like

True, and I wish JME had a better backend, or multiple backends :slight_smile:

I noticed there you have “DeferredRenderer” and “MobileRenderer”.

Not sure why you selected exactly these Features. Also not sure why they are “Renderers”; it might just be a naming issue for me.

Personally I would name it “Pass”, or something like “MobileRenderPass” in this scenario, or maybe I’m wrong? Could you provide more examples of Features for JME here?

In general, based on the Frostbite document, first they had:
image

There was no framegraph yet, though even here they had multiple backends, of course.

image

So later they added FG and Transient Resources. The features stayed the same; they just switched to the higher-level FG API for better development and cleaner rendering passes.

image

While the JME graph is very similar here, like I said before I’m very confused by the “Renderer” naming, since I thought there would be “passes” and no more Renderers, just one.

So with the mutual dependencies, FG can directly dispatch resources like SRV to the right Material instance that depends on it based on the MaterialDef, without requiring manual dispatching code.

Yes, there were compute shader people around here; they might be happy with this as an example.

In Unity3D, the RenderGraph is part of the Scriptable Render Pipeline (SRP) rendering pipeline.
The RenderGraph allows defining rendering passes and representing resource dependencies between passes within the SRP pipeline.
Some key points:

  • The RenderGraph is built on top of the SRP pipeline to define passes and resources.
  • Passes are added via RenderGraph.AddRenderPass.
  • Texture resources are created via APIs like RenderGraph.CreateTexture.
  • The RenderGraph handles scheduling and execution of passes.
  • Resources persist across frames by default, unlike UE which destroys them each frame.

This really does sound like the point where JME FG would connect with Ricc’s Pipeline work, where you could share the same Textures and render results from each one.

But just to say, from a user perspective the Unity one sounds much easier to use, though I understand the Unreal one is more efficient.

Additionally, due to the lack of fine-grained memory control in Java, regarding public API design, I lean towards having the FrameGraph as a member variable, and register and create required CustomPasses and Parameters at the beginning, then look up the registered or created Passes, Parameters each frame, reorganize and execute the FrameGraph per frame.
Below is the basic intended usage in the current design:

Yes, this sounds very logical to me.

You can see FG in JME3 contains FGSink and FGSource, essentially they are PassParamters and PassOutputs in UE, Unity3d. C++ allows stack allocation so in UE you can directly create PassParamters and PassOutputs in local scope each frame, but for Java this may cause performance issues. So I split them into class fashion (FGSink and FGSource), then you can inherit and add custom FGSink and FGSource to implement custom PassParamters and PassOutputs. More on how to use them will not be elaborated here for now.

Of course we need to be cautious not to overuse the Garbage Collector. Though if the user just defines a PassParam and PassOutput that can update themselves, rather than “remove” and “re-add”, then it should be fine.

That code looks very readable to me, compared to the Unity one. Of course this finalize should be “compile”, as mentioned.

It’s not about naming here: finalize will be used by the JVM in a way unintended for us.

Just use a different name like compile :wink:

I understand that not many people really use it, because there are usually not many reasons to, but it is still a key method name in Java, something like toString or readResolve.

Also, Frostbite has Setup → Compile → Execute, so we could go the same way.

It’s worth mentioning that I’m not sure if the FG public API I implemented in JME3 conforms to the philosophy of Java programmers or current JME, so I didn’t provide the API for allocating objects on the stack like in ue or unity3d. If you have better suggestions, I will collect them here and refer to them to adjust the public API. :wave:

That’s reasonable. It’s not like you can manage memory in Java, but in general this approach is fine.

if possible, in the final api i would suggest that there is only one way to create the resources. (If the result of the two is the same). It will make sure that all tutorials and forum posts use the same api, and it would limit the follow up question: “What is the difference between these two” or “The other posts does not use findOrAdd”

native APIs. Of course, there is still one way:
Texture2D t = new Texture2D();
FGSRVTexture2D tSRV = FGBuilderTool.tryGetProxySRV<>(t);
That is, submit native resources to the FG system for proxy or management. The benefit of this is that other places can query the corresponding FG resources through the FG system.
So I can either just provide a set of FG tryGetProxyXXX interfaces to proxy or manage native API resources, or not allow users to create and use resources in native ways when using FG.

Well, while limiting is never good, in this case it might help avoid confusion.
If there is a benefit to using both, we could keep both, though. Hard topic.

2 Likes

Perhaps I can make FGSink and FGSource the low-level API (general users may not use them directly), while the public API is a high-level API, like:

public class MyCustomRenderPass extends FGRenderQueuePass{

    public MyCustomRenderPass(String passName, boolean bUseDefineDefaultParams){
        super(passName);
        if(bUseDefineDefaultParams){
            defineDefaultParams();
        }
    }

    private final void defineDefaultParams(){
        // ↓ high-level API: parameters are described uniformly via FGParamDesc objects;
        // addParamDef automatically creates the corresponding FGSink from the FGParamDesc
        addParamDef(paramName1, new FGParamDesc(xxx));
        addParamDef(paramName2, new FGParamDesc(xxx));
        ...
        // ↓ or low-level API: define a sink by creating the corresponding XXXSink subobject
        // (I will provide a set of FGXXXSink types covering the possible cases, just like
        // public enum VarType)
        registerSink(sinkName1, new FGRenderStateSink(XXX));
        registerSink(sinkName2, new FGTextureBindableSink(XXX));
    }

    ...
}


public class MyApp extent SimpleApplication{
    FrameGraph fg;
    MyCustomRenderPass myCustomRenderPass1;
    MyCustomRenderPass myCustomRenderPass2;
    // ↓Native API resource
    Texture2D nativeTexture2D;
    UniformBuffer nativeUBObject;
    public MyApp(){
        fg = new FrameGraph(FGRenderContext);

        nativeTexture2D = new Texture2D(w, h, format);

        myCustomRenderPass1 = new MyCustomRenderPass("customPass1", true);

        // Define required params externally
        myCustomRenderPass2 = new MyCustomRenderPass("customPass2", false);
        myCustomRenderPass2.addParamDef("useVelocity", new FGParamDesc(xxx));
        myCustomRenderPass2.addParamDef("srvVelocityRT", new FGParamDesc(xxx));
    }

    @Override
    public void simpleUpdate(float tpf){
        fg.reset();

        // other passes
        fg.addPass(...);

        {
            // Set the parameters needed for the pass by calling setParam (note: this does not immediately set and create resources; compiling/culling/final binding happens after fg.compile())

            // If you have a native resource, for example a native Texture2D, convert it to an FGSRV proxy through the FG interface
            myCustomRenderPass1.setParam(paramName1, FGBuilderTool.tryGetSRV(nativeTexture2D, XXX));
            // Or look up an existing destination SRV resource from the FG system
            myCustomRenderPass1.setParam(paramName1, FGBuilderTool.tryGetSRV("TargetSRVName", XXX));
            // Or look up an existing destination SRV resource from a specific pass
            myCustomRenderPass1.setParam(paramName1, targetPass.getSRV("TargetSRVName", XXX));
            fg.addPass(myCustomRenderPass1);
        }

        {
            // Similarly as above
            myCustomRenderPass2.setParam(xxx, xxx);
            ...
            fg.addPass(myCustomRenderPass2);
        }

        // This step will execute pass culling, pruning, resource binding or creation (in parallel if RHI allows parallel processing)
        fg.compile();

        // This step will callback pass Logic
        fg.execute();
    }
}

The overall usage will be similar to Material.setParam and MaterialDef.addMaterialParam. This should best fit the shape expected of the public API?
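To make the analogy concrete, here is a minimal, self-contained sketch of that addParamDef/setParam pattern. All names here (ParamDemo, ParamDesc, RenderPass, the VarType values) are hypothetical stand-ins, not the real jME or FG classes; the point is only that the type declared at definition time is checked at setParam time, just like Material.setParam checks against the MaterialDef.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed addParamDef/setParam pattern,
// mirroring how MaterialDef.addMaterialParam / Material.setParam relate in jME.
public class ParamDemo {
    enum VarType { FLOAT, TEXTURE2D }

    // Analogue of FGParamDesc: declares the expected type of a pass parameter.
    static final class ParamDesc {
        final VarType type;
        ParamDesc(VarType type) { this.type = type; }
    }

    static class RenderPass {
        private final Map<String, ParamDesc> defs = new HashMap<>();
        private final Map<String, Object> values = new HashMap<>();

        void addParamDef(String name, ParamDesc desc) { defs.put(name, desc); }

        // Type checking happens at setParam time, like Material.setParam.
        void setParam(String name, VarType type, Object value) {
            ParamDesc desc = defs.get(name);
            if (desc == null) throw new IllegalArgumentException("Undeclared param: " + name);
            if (desc.type != type) throw new IllegalArgumentException("Type mismatch for: " + name);
            values.put(name, value);
        }

        Object getParam(String name) { return values.get(name); }
    }

    public static void main(String[] args) {
        RenderPass pass = new RenderPass();
        pass.addParamDef("useVelocity", new ParamDesc(VarType.FLOAT));
        pass.setParam("useVelocity", VarType.FLOAT, 1.0f);
        System.out.println(pass.getParam("useVelocity"));
    }
}
```

A wrong-typed setParam (e.g. passing a texture where a float was declared) fails at runtime here, which is the trade-off the sink-based low-level API avoids.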

1 Like

Perhaps this diagram can intuitively explain everything? Different platforms, such as mobile and desktop, may use different sets of Features; for example, the mobile platform may use a simple DeferredPipeline and SoftwareOcclusionCull for occlusion culling, while SimpleRenderer can be used on mobile or desktop platforms and DeferredRenderer on desktop. Encapsulating these Features through different XXXRenderers should be more intuitive? Then RenderManager stays clean, just needing to call AbsRender->Render(). Not sure if JME needs to divide things so thoroughly; in fact JME currently uniformly uses GLRender->Render…

1 Like

Yes, the earliest engine designs used the SceneGraph, single-threaded patterns, etc. Later on, even with the introduction of an RHI, that design was still kept. Then, with the continuous update and iteration of graphics feature APIs, a new architecture was needed, namely the FG and multi-threaded rendering architecture. Component-based approaches also emerged to replace the SceneGraph (because a SceneGraph is unfriendly to parallel primitive culling: the tree must be traversed). These have gradually become the main ideas of modern frameworks…

In fact, Features include not only RenderPasses but other things too; for example, mobile platforms use SoftwareOcclusionCull for occlusion culling, while desktop platforms use HizOcclusionCull… In Unreal Engine these Features are encapsulated through different Renderers (MobileRenderer, and DeferredRenderer for desktop platforms); a Renderer is just an object that encapsulates code…
Perhaps we could use other names, such as MobileHandler…DesktopHandler…, MobileFeaturePipeline, DesktopFeaturePipeline… But personally I think “Renderer” is the most concise, because it contains a set of code related to rendering and scene processing.
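As a rough illustration of the encapsulation being proposed (all names here are hypothetical, not actual engine classes), a Renderer bundles a platform's feature set so that RenderManager only ever calls render():

```java
// Hypothetical sketch of encapsulating per-platform feature sets behind a
// Renderer abstraction, so RenderManager stays clean and just delegates.
public class RendererDemo {
    interface FeatureRenderer { String render(); }

    // Mobile: simple deferred pipeline + software occlusion culling (per the post).
    static class MobileRenderer implements FeatureRenderer {
        public String render() { return "SoftwareOcclusionCull + SimpleDeferred"; }
    }

    // Desktop: full deferred pipeline + Hi-Z occlusion culling.
    static class DesktopRenderer implements FeatureRenderer {
        public String render() { return "HizOcclusionCull + Deferred"; }
    }

    // RenderManager neither knows nor cares which features actually run.
    static class RenderManager {
        private final FeatureRenderer renderer;
        RenderManager(FeatureRenderer renderer) { this.renderer = renderer; }
        String renderFrame() { return renderer.render(); }
    }

    public static void main(String[] args) {
        boolean mobile = false;
        RenderManager rm = new RenderManager(mobile ? new MobileRenderer() : new DesktopRenderer());
        System.out.println(rm.renderFrame());
    }
}
```

Swapping MobileRenderer for DesktopRenderer changes the whole feature set without touching RenderManager, which is the "RenderManager stays clean" point above.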

Sorry, I just realized “finalize” collides with java.lang.Object#finalize() in Java… so I will rename it to “compile”, which also keeps the naming consistent with other engines’ FG.

My intention is that native resources and FG resources may need to be converted between each other; for example, an FGSRV resource needs to be given to a Material as a texture, and sometimes we may want to use it elsewhere as a native resource, so I may provide interfaces like the following for mutual conversion:

// nativeTexture resources from other places...
Texture2D nativeTexture2D;

// Because once entered into the FG system, resources need to be uniformly represented in FG form.
myPass.setParam(paramName1, FGBuilderTool.tryGetSRV(nativeTexture2D, XXX));

// We may need to extract native resources from the FG system, to be used for Materials or other purposes.
mat.setTexture("velocityRT", FGBuilderTool.tryGetSRV("TargetSRVName", XXX).getSRVNativeObject());
2 Likes

@zzuegg @oxplay2
Thank you for taking the time to provide feedback; I will rethink these design ideas. Through discussion, we are now very close to the final JME3 FG framework. Further input is welcome.

2 Likes

both addParamDef and registerSink look like addMaterialParam()

While ofc the more user friendly one is the first:

        addParamDef(paramName1, new FGParamDesc(xxx));
        addParamDef(paramName2, new FGParamDesc(xxx));

But friendly does not mean it's best, because the Sink ones at least protect against providing an incorrect var type.

I would lean more towards the higher-level API, if it does not have any additional downsides, since type protection would happen at the setParam stage anyway.

But it's just an opinion.
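The type-protection trade-off discussed here can be sketched like this (names hypothetical; the real FG sink types may differ): a generic sink pins the accepted resource type at compile time, while the string-keyed param style can only fail at runtime.

```java
// Hypothetical sketch of compile-time type safety in the low-level sink API.
public class SinkDemo {
    static class FGTexture {}   // stand-in for an FG texture resource
    static class FGBuffer {}    // stand-in for an FG buffer resource

    // Low-level API: the sink's generic type pins the accepted resource type.
    static final class FGSink<T> {
        private T bound;
        void bind(T resource) { this.bound = resource; }
        T get() { return bound; }
    }

    public static void main(String[] args) {
        FGSink<FGTexture> colorSink = new FGSink<>();
        colorSink.bind(new FGTexture());     // ok
        // colorSink.bind(new FGBuffer());   // would not compile: wrong resource type
        System.out.println(colorSink.get() != null);
    }
}
```

With the string-keyed addParamDef style, the equivalent mistake only surfaces when setParam validates the value, i.e. at runtime rather than at compile time.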

Ah I see, so these Renderers are just like pre-grouped feature sets for the features built on top of them.
This is knowledge that you have but I don't, because when I look at other engines they don't split these features this way, yet they still support rendering on mobile, right? Like Unity. Why didn't they split it the same way?

Yes, the earliest engine designs used the SceneGraph, single-threaded patterns, etc. Later on, even with the introduction of an RHI, that design was still kept. Then, with the continuous update and iteration of graphics feature APIs, a new architecture was needed, namely the FG and multi-threaded rendering architecture. Component-based approaches also emerged to replace the SceneGraph (because a SceneGraph is unfriendly to parallel primitive culling: the tree must be traversed). These have gradually become the main ideas of modern frameworks…

While the SceneGraph is not friendly to multi-threaded rendering, it would be nice to keep it just for the other functionality it's used for, like positioning/scaling/etc. The rendering functionality would not need to live there, in fact. I really do like the SceneGraph, so if you later have time to build multiple backends (the hardest work imo), it would be nice to still have “some” of the SceneGraph functionality.
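A small sketch of why a flat, component-style primitive list parallelizes culling more easily than a SceneGraph walk: a flat array splits trivially across worker threads, while a tree must be traversed node by node. The 1D bounds test and class names below are simplified stand-ins, not real engine code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: parallel culling over a flat primitive list.
public class CullDemo {
    static final class Primitive {
        final float x;              // 1D "position" stands in for world bounds
        Primitive(float x) { this.x = x; }
    }

    // Each element is tested independently, so the stream can split the
    // work across threads; a SceneGraph traversal cannot split this easily.
    static List<Primitive> cullParallel(List<Primitive> prims, float min, float max) {
        return prims.parallelStream()
                .filter(p -> p.x >= min && p.x <= max)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Primitive> prims = new ArrayList<>();
        for (int i = 0; i < 1000; i++) prims.add(new Primitive(i));
        System.out.println(cullParallel(prims, 100f, 199f).size()); // prints 100
    }
}
```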

In fact, Features include not only RenderPasses but other things too; for example, mobile platforms use SoftwareOcclusionCull for occlusion culling, while desktop platforms use HizOcclusionCull… In Unreal Engine these Features are encapsulated through different Renderers (MobileRenderer, and DeferredRenderer for desktop platforms); a Renderer is just an object that encapsulates code…
Perhaps we could use other names, such as MobileHandler…DesktopHandler…, MobileFeaturePipeline, DesktopFeaturePipeline… But personally I think “Renderer” is the most concise, because it contains a set of code related to rendering and scene processing.

Yeah, sorry, that was doubled; you already explained it with the graph image before.

But yes, we could use different naming, because these Renderer Features are not like other features that can be used with multiple Renderers.

MobileHandler…DesktopHandler sounds much better indeed.

My intention is that native resources and FG resources may need to be converted between each other; for example, an FGSRV resource needs to be given to a Material as a texture, and sometimes we may want to use it elsewhere as a native resource, so I may provide interfaces like the following for mutual conversion:

ok, I understand. Tho I'm not sure about performance here, because every “conversion” might take a lot of time. It depends on how it's done ofc, but even the fastest conversion takes some time.

So is this “conversion” just about “pulling” the native value, or would you need to read bytes/etc.?

If it's just about pulling the native value, then all is fine :slight_smile:

I will always try to help you as much as I can and provide feedback when needed :slight_smile:

Tho it would be good to also wait for other people to participate. There are some people I'm still waiting on to read your topics, for example Riccardo, but he just lacks time currently.

I believe @codex @yaRnMcDonuts might also be interested in participating in this topic.

1 Like

I think this should be the standard public interface: it's less boilerplate, and I'd think less error-prone as well.

2 Likes

I think this is a way to encapsulate feature code. In Unreal Engine there are MobileRenderer and DeferredRenderer (for desktop platforms); Godot has rendering methods, e.g. rendering_method_mobile, rendering_method_web, rendering_method_default, etc. Unity3D also has similar code, but I can't quote the related content here (because Unity3D's related code requires commercial authorization).

It’s just getting the native value, and FGBuilderTool.tryGetSRV(nativeValue) doesn’t create a new FGSrv object every time; there is internal caching, so if a cached one exists it returns the previously cached FGSrv object.
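That caching behavior might look something like the following sketch (class names hypothetical; this is not the actual implementation): an identity-keyed map ensures each native resource instance gets exactly one FG proxy, and the "conversion" back is just unwrapping a reference.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch of the proxy caching described above: tryGetSRV wraps
// a native resource once and returns the same proxy on all later calls.
public class ProxyCacheDemo {
    static class Texture2D {}            // stands in for the native resource
    static class FGSrv {                 // stands in for the FG-side proxy
        final Object nativeObject;
        FGSrv(Object nativeObject) { this.nativeObject = nativeObject; }
        Object getSRVNativeObject() { return nativeObject; }
    }

    // Identity-keyed cache: one proxy per native resource instance,
    // regardless of equals()/hashCode() on the resource class.
    private static final Map<Object, FGSrv> CACHE = new IdentityHashMap<>();

    static FGSrv tryGetSRV(Object nativeResource) {
        return CACHE.computeIfAbsent(nativeResource, FGSrv::new);
    }

    public static void main(String[] args) {
        Texture2D tex = new Texture2D();
        FGSrv a = tryGetSRV(tex);
        FGSrv b = tryGetSRV(tex);
        System.out.println(a == b);                        // prints true: cached, no new object
        System.out.println(a.getSRVNativeObject() == tex); // prints true: just pulls the native value
    }
}
```

So the round trip costs a map lookup plus a reference unwrap, with no byte copying, which matches the "just pulling the native value" case above.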

Thank you! :grinning:

I’m also waiting for others to supplement relevant information.

1 Like

Yes, considering user friendliness, I will provide this set of public APIs while retaining the low-level API (i.e. the sink and source scheme).

1 Like

What needs to happen next for this project to move forward?

8 Likes

I am currently doing partial code refactoring based on our previous discussions. This work is still in progress.

12 Likes

Thanks for the status update. Carry on!

2 Likes

This is really impressive. Looking forward to seeing this progress from the PR. I have been very torn between JME and Godot for many months now as I work on my server back-end. This would be an extremely positive improvement to JME rendering system. Definitely following and pulling for this!

7 Likes