Jme-alloc project

What about making the Java one (jme3-alloc) the root project and “jme3-alloc-natives” a sub-project?


I have tested this compiler flag locally on my machine; here are the resulting binary files, and they look promising :wink: :

┌─[pavl-machine@pavl-machine]─[~/Hello-World]
└──╼ $gcc test_arch.c -shared -o 'test-arch-x86_x64.so' && \
           gcc -m32 test_arch.c -shared -o 'test-arch-x86.so'
┌─[pavl-machine@pavl-machine]─[~/Hello-World]
└──╼ $file ./test
test_arch.c           test-arch-x86.so      test-arch-x86_x64.so
┌─[pavl-machine@pavl-machine]─[~/Hello-World]
└──╼ $file ./test-arch-x86.so
./test-arch-x86.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=ef70a9cc4ea8ad43eb2f0ae474923e777ef8f815, not stripped
┌─[pavl-machine@pavl-machine]─[~/Hello-World]
└──╼ $file ./test-arch-x86_x64.so
./test-arch-x86_x64.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=c6b59b4d3a14c884edf5c59307eb530be81aa2a4, not stripped

Looks better. I will consider that in the next PR, thank you!


They have the same architectures as the lwjgl shared object files, so we are on the right track:

┌─[pavl-machine@pavl-machine]─[~/Downloads]
└──╼ $file ./liblwjgl.so
./liblwjgl.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=9ef3fe71bdc1625ae5e589f417b770c1e50a2a6f, stripped
┌─[pavl-machine@pavl-machine]─[~/Downloads]
└──╼ $file ./liblwjgl64.so
./liblwjgl64.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=e4059e4ac485e18ff50f0cccee81f1afbdb81133, stripped

If you have anything useful, you can open issues here; I will convert them into tasks so I won’t forget:


I use TravisCI extensively. They grant free CI credits to open-source projects. They’re nice about it, but they only grant 25,000 credits at a time. I burn through 25,000 credits in about a month or so, at which time I have to send an e-mail to request more.


Cool, I have found a good alternative: the compiler option ‘-m32’ changes the target instruction set of the output binary for the same vendor and OS (Intel, Linux), so my workaround will be to create a compileX86 task or use some Gradle properties to enable this compile option. However, if we need additional architectures (beyond Intel), I may consult you about Travis. Thank you.


Hello again,

I managed to compile the following binaries:

  • Linux-x86-64
  • Linux-x86
  • Windows-x86-64
  • Macos-x86-64

As for Windows-x86: the x86-64 system needs the Mingw-w32 binaries installed from the command line, and I have no idea how to do that currently :sweat_smile:. If you know how, please help!

As for Macos-x86: I cannot compile for x86 on x64 systems using Xcode 10+, since 32-bit support is deprecated; I would need Xcode 9, or an x86 Mac system, to compile x86 macOS binaries.

I have chosen to create a custom build system using bash, executing the script from a custom Gradle task class (and it works fine).

What will I do now?
I will start developing the API and the native code until we find a way to compile at least the windows-x86 binaries…

EDIT:
Btw, I have refactored the modules to jme3-alloc and jme3-alloc-native, as suggested by @Ali_RS, to be compatible with the jme3 module layout. As for the build system, I failed to reach my goals using the new C++ Gradle plugin, so I adapted some old simple bash scripts from my projects to compile cross-platform binaries. The one thing missing is incremental builds (I will implement this soon if needed).


This is the file-check result for both the Mac and Windows x64 binaries, compiled using the system-specific GCC:

┌─[pavl-machine@pavl-machine]─[~/Downloads/release-archive/jme3-alloc/libs/macos/x86-64]
└──╼ $file '/home/pavl-machine/Downloads/release-archive/jme3-alloc/libs/macos/x86-64/libjmealloc.dylb' 
/home/pavl-machine/Downloads/release-archive/jme3-alloc/libs/macos/x86-64/libjmealloc.dylb: Mach-O 64-bit x86_64 dynamically linked shared library, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|NO_REEXPORTED_DYLIBS>
┌─[pavl-machine@pavl-machine]─[~/Downloads/release-archive/jme3-alloc/libs/macos/x86-64]
└──╼ $file '/home/pavl-machine/Downloads/release-archive/jme3-alloc/libs/windows/x86-64/libjmealloc.dll' 
/home/pavl-machine/Downloads/release-archive/jme3-alloc/libs/windows/x86-64/libjmealloc.dll: PE32+ executable (DLL) (console) x86-64 (stripped to external PDB), for MS Windows

Does this even matter at this point? It’s not supported anymore, and I don’t think a 32-bit OS even shows up in the Steam hardware stats anymore.


Idk, I don’t use Windows for development :sweat_smile:, so you tell me if this is worth it. My build script supports all variants out of the box, but you need the right binaries installed to compile for x86 targets from an x86-64 system…which is the case here. However, if someone needs it, they can clone the repo and compile it manually with the same script.

@Ali_RS By using the Android NDK (the LLVM toolchain), I can build a version for all Android architectures, and it’s very easy with bash. If you would like to migrate the whole allocation API to a single API, let me know; we may include this update in jme-3.7 later on.

EDIT:
Btw, this is a script sample of what can be done; however, we can do more, and I am currently developing a low-level build script in bash (for bash lovers). It should have incremental builds later on:


I am not sure; maybe you should separate the per-platform native jars using a “classifier”, as lwjgl did, so they can be picked selectively.

For example:

jme3-alloc:version:natives-linux
jme3-alloc:version:natives-windows
jme3-alloc:version:natives-mac
jme3-alloc:version:natives-android

For Android I will do this: just a jar file with a lib folder (no runtime extraction is needed). But the other systems depend on a library extractor and loader class, so the current design is very similar to that of bullet-physics: the application checks the platform variant and extracts the matching library from its folder.

EDIT:
If the jars that contain the native objects are merged with the user jar in production, then my current extraction system will work!
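The check-variant, extract, and load flow described above can be sketched roughly like this. This is a minimal sketch, not the actual jme3-alloc code: the class name, the "natives/&lt;os&gt;/&lt;arch&gt;/" resource layout, and the file names are all hypothetical.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Minimal sketch of an extract-and-load helper, assuming the native
// library is packaged on the classpath under "natives/<os>/<arch>/".
public final class NativeLoader {

    /** Builds a resource path such as "natives/linux/x86-64/libjmealloc.so". */
    static String resourcePath(String libFileName) {
        String os = System.getProperty("os.name").toLowerCase();
        String variant = os.contains("win") ? "windows"
                       : os.contains("mac") ? "macos" : "linux";
        String arch = System.getProperty("os.arch").contains("64") ? "x86-64" : "x86";
        return "natives/" + variant + "/" + arch + "/" + libFileName;
    }

    /** Extracts the classpath resource to a temp file and loads it with System.load. */
    static void extractAndLoad(String libFileName) throws IOException {
        String path = resourcePath(libFileName);
        try (InputStream in = NativeLoader.class.getClassLoader().getResourceAsStream(path)) {
            if (in == null) {
                throw new FileNotFoundException("No native library on classpath: " + path);
            }
            Path tmp = Files.createTempFile("jmealloc", libFileName);
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString()); // JNI symbols resolve here
        }
    }

    public static void main(String[] args) {
        // Print the path that would be looked up on this machine.
        System.out.println(resourcePath("libjmealloc.so"));
    }
}
```

Because getResourceAsStream searches the whole classpath, this works the same whether the natives sit in a separate classifier jar or are merged into the user jar.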


I managed to implement the base API; here is the PR:

And an initial test on my local machine (Debian Linux, x86-64):

package com.jme3.alloc;

import java.nio.ByteBuffer;

public final class TestLibrary {

    public static void main(String[] args) {
        // allocate a 100,000-byte direct buffer through the native allocator
        final ByteBuffer buffer = NativeBufferAllocator.createDirectByteBuffer(100000);
        System.out.println(buffer);
        System.out.println(buffer.capacity());
        buffer.put((byte) 100);   // relative put: writes at index 0, position -> 1
        System.out.println(buffer.get(0) + "");

        System.out.println(buffer);
        System.out.println(buffer.capacity());
        System.out.println(buffer.get(0) + "");
        buffer.put((byte) 12);    // relative put: writes at index 1, position -> 2
        System.out.println(buffer.get(0) + " ");
        System.out.println(buffer.get(1) + " ");
        System.out.println(buffer);
        // free the native memory; reads after this point are undefined
        NativeBufferAllocator.releaseDirectByteBuffer(buffer);
        System.out.println(buffer.get(0) + " ");
        System.out.println(buffer.get(1) + " ");
        System.out.println(buffer);

        while (true);             // keep the process alive for memory inspection
    }
}

Output:

└──╼ $java -jar jme3-alloc.jar 
java.nio.DirectByteBuffer[pos=0 lim=100000 cap=100000]
100000
100
java.nio.DirectByteBuffer[pos=1 lim=100000 cap=100000]
100000
100
100 
12 
java.nio.DirectByteBuffer[pos=2 lim=100000 cap=100000]
-112 
0 
java.nio.DirectByteBuffer[pos=2 lim=100000 cap=100000]

There are still a couple of errors to fix on macOS and Windows…


Can’t you load natives from the jar without extracting?

Edit:
Ok, never mind; I googled this, and it looks like extraction is required.


Thanks for reviewing the API; I am going to reply soon.


You could try the package currently in the PR via:
https://github.com/Software-Hardware-Codesign/jme-alloc/suites/10645540311/artifacts/531832665

EDIT:
Running via java -jar jmealloc.jar runs the main class com.jme3.alloc.TestLibrary, which will be separated into a Gradle test project later…

java.nio.DirectByteBuffer[pos=0 lim=100000 cap=100000]
100000
100
java.nio.DirectByteBuffer[pos=1 lim=100000 cap=100000]
100000
100
100 
12 
java.nio.DirectByteBuffer[pos=2 lim=100000 cap=100000]
-128 
0 
java.nio.DirectByteBuffer[pos=2 lim=100000 cap=100000]

Curious, what is that -128 at index 0 after releasing the buffer?

What happens if you put data in the buffer after releasing? (you may add this to the test case)

Also, a test like TestReleaseDirectMemory would be nice to add, possibly in a while loop with a thread sleep or so.
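Such an allocate/release loop could be sketched like this. Hedged: the NativeBufferAllocator API from the PR is not published yet, so this stand-in uses ByteBuffer.allocateDirect and drops the reference; the real test would call createDirectByteBuffer and releaseDirectByteBuffer instead.

```java
import java.nio.ByteBuffer;

// Sketch of a TestReleaseDirectMemory-style stress loop: allocate a direct
// buffer, touch it, release it, sleep, repeat, while watching the process
// RSS stay flat from outside (e.g. with top or /proc/<pid>/status).
public final class TestReleaseDirectMemory {

    /** Allocates, touches, and drops one 1 MB direct buffer per iteration. */
    static int allocateAndRelease(int iterations) throws InterruptedException {
        for (int i = 0; i < iterations; i++) {
            ByteBuffer buffer = ByteBuffer.allocateDirect(1_000_000);
            buffer.put(0, (byte) 42);  // absolute put: touch the native memory
            // Real test would call: NativeBufferAllocator.releaseDirectByteBuffer(buffer);
            buffer = null;             // stand-in: drop the reference instead
            System.gc();               // hint the JVM to run the direct-buffer cleaner
            Thread.sleep(50);          // give the OS a moment between iterations
        }
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed " + allocateAndRelease(10) + " alloc/release cycles");
    }
}
```

With the real explicit release, memory usage should stay flat even without the System.gc() hint, which is the point of the test.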


An undefined value: since we have released that memory, the allocator (or another part of the system) can reuse the address for other purposes, so reading it yields whatever happens to be there. Writing to it is undefined behavior as well; it can silently corrupt memory or crash the JVM at runtime.
