Jme-alloc project

Hello fellows, following the discussion in "ReflectionAllocator is broken on JDK 16 · Issue #1674 · jMonkeyEngine/jmonkeyengine · GitHub", I am currently preparing a test-case project as a template for jme-alloc. I have chosen to model the project in a modular approach: a jvm-alloc module and a native-alloc module, so we can do everything in a step-by-step fashion. I have added a WIP todo-list in the readme to track the changes. Here is generally what I will do:

  1. Prepare the build environment (Gradle).
  2. Prepare CI/CD to build the different variants.
  3. Package the shared object files within the jars.

What’s done?

  1. Separate native and JVM modules are ready.
  2. Generating Java headers into the native module’s ./include/ directory is ready.

Next: copy the native output and the Java .class files to a single build directory and produce a working jar file.

EDIT:
The sequence of native compilation, packaging and linking will be:

Compile Java code and generate JNI headers → copy the [*.class] files (inside their packages) from the build folder to a shared output folder → compile the native code into a shared object file → copy the [libjmealloc.so] files (inside their variant packages) from the build folder to the shared output folder → package the output folder into a portable jar file → extract and link via Java code at runtime.
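The last step, extraction and linking at runtime, could look roughly like the sketch below. This is only an illustration under my own assumptions: the class name `NativeLoaderSketch` and the in-jar layout `/<os>/<arch>/libjmealloc.<ext>` are hypothetical, not the project's final API.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Sketch of extracting a bundled native binary and linking it at runtime. */
final class NativeLoaderSketch {

    /** Builds the (hypothetical) in-jar resource path for the current platform. */
    static String variantPath() {
        String os = System.getProperty("os.name").toLowerCase();
        String folder = os.contains("win") ? "windows" : os.contains("mac") ? "macos" : "linux";
        String ext = os.contains("win") ? ".dll" : os.contains("mac") ? ".dylib" : ".so";
        return "/" + folder + "/" + System.getProperty("os.arch") + "/libjmealloc" + ext;
    }

    /** Copies the library from the classpath to a temp file and links it. */
    static void load() throws IOException {
        String resource = variantPath();
        try (InputStream in = NativeLoaderSketch.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("No native binary packaged for " + resource);
            }
            Path tmp = Files.createTempFile("libjmealloc", null);
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString()); // link at runtime
        }
    }
}
```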

4 Likes

Thanks for working on this project.

Note that you do not need to create a new BufferUtils class, JME already has it. You just need to create an implementation of the BufferAllocator interface.

2 Likes

It’s my pleasure; I have chosen to learn new things with jME, and I need to try this.

I know, thanks for pointing that out. However, that would force me to include jme-core as a dependency, and I don’t want to do that; it’s better to keep this a separate project. Including it in jME later will then provide jme-core with additional raw buffer functionality to implement the new buffer allocator, which is much cleaner, I think.

Long story short, I will be building a minimalistic API.
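For context, the API surface I have in mind is roughly the shape below. This is only a provisional sketch: in the real module the methods would be native and backed by libjmealloc, so here they are stubbed on plain NIO just to show the shape, and the class name is not final.

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

/** Provisional sketch of the minimalistic jme-alloc API surface.
    In the real module these would be native methods backed by libjmealloc. */
final class NativeBufferUtilsSketch {

    /** Allocates a direct buffer of {@code size} bytes (stub: plain NIO). */
    static ByteBuffer allocate(int size) {
        return ByteBuffer.allocateDirect(size);
    }

    /** Releases the memory behind a direct buffer. Stubbed as a no-op here,
        where the GC reclaims it; the native version would free it eagerly. */
    static void destroy(Buffer toBeDestroyed) {
        // native impl: resolve the buffer address via JNI and free() it
    }
}
```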

We can then add the published jar and use it inside jme as follows:

package com.jme3.util;

import java.nio.Buffer;
import java.nio.ByteBuffer;

import com.jme3.alloc.NativeBufferUtils;

public final class DesktopBufferAllocator implements BufferAllocator {

    /**
     * De-allocates a direct buffer.
     *
     * @param toBeDestroyed the buffer to de-allocate (not null)
     */
    @Override
    public void destroyDirectBuffer(Buffer toBeDestroyed) {
        NativeBufferUtils.destroy(toBeDestroyed);
    }

    /**
     * Allocates a direct ByteBuffer of the specified size.
     *
     * @param size the buffer capacity in bytes (≥0)
     * @return a new direct buffer
     */
    @Override
    public ByteBuffer allocate(int size) {
        return NativeBufferUtils.allocate(size);
    }
}
1 Like

Note: I haven’t started the project yet, since about 90% of it will be building for the different variants; that is why I am now focused on the build model. Selecting a good, clean build model from the start will keep this project maintainable, so class and package names might not be the best for now. However, they are easily refactorable later, since this won’t be user-facing code.

Note 2: I am not going to use C++; I will use plain C. C++ complicates things needlessly.

Yeah, that is nice.

My original thought was that when jme3-alloc.jar is included in the classpath, it should be automatically detected and used by BufferAllocatorFactory; this way we could make it switchable.
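That classpath auto-detection idea could be sketched with the JDK’s standard ServiceLoader mechanism; this is a rough illustration under my own assumptions, not jME’s actual BufferAllocatorFactory code, and the `Allocator` interface here is hypothetical.

```java
import java.nio.ByteBuffer;
import java.util.ServiceLoader;

/** Illustration of classpath-based auto-detection via the ServiceLoader SPI. */
final class AllocatorDiscovery {

    /** Hypothetical allocator SPI; jme3-alloc would register an implementation
        through META-INF/services when its jar is on the classpath. */
    interface Allocator {
        ByteBuffer allocate(int size);
    }

    /** Returns the first discovered provider, or a plain-NIO fallback. */
    static Allocator find() {
        for (Allocator provider : ServiceLoader.load(Allocator.class)) {
            return provider; // jme3-alloc's implementation, if present
        }
        return ByteBuffer::allocateDirect; // fallback: default direct buffers
    }
}
```

With no provider registered, callers transparently get the fallback, which is what makes the allocator switchable just by adding a jar.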

But I like your approach too; we can add a dependency on jme3-alloc inside the jme3-lwjgl module and implement the LwjglBufferAllocator there (like jme3-lwjgl3 did).

1 Like

Off-topic

By the way, just wanted to mention that the -XX:MaxDirectMemorySize setting will be ignored when using a native buffer allocator. So an OutOfMemoryError will never happen if allocations exceed MaxDirectMemorySize.

1 Like

That makes sense, since the native buffer allocates memory on the native heap, but it should trigger a JVM crash with a stack log reflecting an out-of-resources state. I have collected a couple of JVM crash logs from my library Serial4j throughout its development to examine later; however, I cannot currently remember what the exact crash log for this case looks like, but it will be something similar to a native segmentation fault, or maybe a native heap out-of-memory error.

1 Like

A PR to package the natives with the output jar:

Next: try to hook this into GitHub Actions and get the other system variants.

EDIT:
This is the toolchains support for the C++ Gradle plugin. By default, the C++ Gradle plugin auto-detects the available toolchain, but some prerequisites may need to be met on each system image runner on GitHub Actions; here is the documentation:
https://docs.gradle.org/current/userguide/building_cpp_projects.html#sec:cpp_supported_tool_chain

EDIT2:

  • The MacOS 12 runner seems to have everything pre-installed.
  • I tested all the GitHub Linux runners before on another project, and they have everything pre-installed (GCC, Gradle, and Java).
  • Windows Server 2022 also seems to be complete, and it has the Bourne-again shell (bash), which is great news for me.

EDIT3:
And this will be utilized between the compile jobs and the final assemble job. Basically, the Gradle C++ plugin will compile a variant and upload it as an artifact; there will be a matrix of jobs, each pointing at a variant, and the final assemble job will download all the artifacts and package them into a jar using the Jar Gradle task.
[ Compile Java (JDK 8) → upload byte code and native headers ] → [ Download native headers → compile-variants job → upload libjmealloc.so artifacts ] → [ Download byte code and libjmealloc.so artifacts → assemble job → upload assemble artifact ] → [ onRelease: download assemble artifact → upload to Maven ]

Hello again,

So far, I have managed to create a jar file with MacOS_x86-64, Linux_x86-64, and Windows_x86-64 binaries using GitHub image runners. Here is the tree:

And here is the release test:
https://github.com/Software-Hardware-Codesign/jme-alloc/suites/10376499604/artifacts/511951949

One thing remaining is the other architectures of the different variants, which I think GitHub runners don’t support!

Any clues regarding this? I thought GitHub runners provided x86 and ARM variants…

EDIT:
There is a plugin here, but I haven’t tried it yet:

Those are the supported variants:

Do we need additional variants?

Afaik, lwjgl2 only supports Windows, Linux, and Mac on x86 and x64. So I assume having these should be enough for now.

I am not sure I can support x86 for now. However, I can work around it for the Intel instruction set (linux_x86) on x64 systems by downloading the x86 toolchain and compiling the code against it, or by using x64 GCC options to compile for the lower instruction set (if any exist). But I am not sure about the other systems; I don’t develop natives on Mac and Windows.

EDIT:
GitHub-hosted runners do not support x86 out of the box, only x86_64.

I thought you said you managed to build them?

1 Like

Oops, sorry, I meant x86_64, which is x64 :sweat_smile:, my fault. Also, check the artifact jar; you will find only the x86-64 folder.

1 Like

Does lwjgl-2 support x86 systems, though? I need to be sure, because starting from lwjgl-3.2.3, x86 support on Mac and Linux was dropped, while ARM support was added (I got this info from the LWJGL customize website).

EDIT:
Alright, I checked the repo:

I will look into Travis CI, but I am not sure; I think that service is paid, with a free trial.

For now, we can continue with x64 only.

1 Like

I will try to apply my workaround :wink:, but it may involve some custom bash scripting. I am already on it; knowing that Windows also has bash installed is good news. I will see what I can do, and at least provide a linux-x86 image.

Good news: there is a GCC compiler option (-m32) to force 32-bit binary output.

I will add a compileX86 task in native-alloc/build.gradle to compile an x86 image using the x64 GCC.

1 Like

By the way, the names “jvm-alloc” and “native-alloc” make it sound like they are two different implementations of the “alloc” thing.

Maybe it’s better to name them jme3-alloc and jme3-alloc-natives, like jme3-android and jme3-android-natives.

What do you think?

1 Like

Yeah, I will consider that, thanks for pointing this out. However, the root project name is jme-alloc, so I am not sure; I may use other names, like lib for the JVM code and native for the native code. Let me know what you think.

If we are going to merge this into the main project, then we need to follow the naming conventions you stated from the start; we may adopt the conventional module names in case this project is merged in the future.

Note: this project is separated into two modules with no common Gradle files, so this won’t affect the engine if it’s merged later on.