Binary files

I am having a lot of trouble reading binary files I created in a JavaScript app.

The scenario is this: I use a ThreeJS app to create a polysphere model with over 100K points. The model exports to GLB and imports into my JME3 app without any problems.

I also create some typed arrays with meta-data about every point and its relations to neighbouring points. This is needed for navigation on the sphere. I dumped this data into a binary file and put it in the ‘Models’ asset folder.

The JS Code:

    downloadBlob = function(data, fileName, mimeType) {
      var blob, url;
      blob = new Blob([data], {
        type: mimeType
      });
      url = window.URL.createObjectURL(blob);
      downloadURL(url, fileName);
      setTimeout(function() {
        return window.URL.revokeObjectURL(url);
      }, 1000);
    };

But I can not figure out how to load it from my assets and use it in my JME3 app.

This is my loading code:

    int[] readIntData(String _file) {
        // Read objects or arrays from binary file "o.dat":
        try {
            ObjectInputStream ois = new ObjectInputStream(new FileInputStream(_file));
            try {
                utils.LOG("Read " + ois.toString());
                return (int[]) (ois.readObject());
            } catch (ClassNotFoundException e) {
                utils.LOG("Class not found: " + e.getMessage());
                return null;
            }
        } catch (IOException e) {
            utils.LOG("IO Error: " + e.getMessage());
            return null;
        }
    }

I need to feed it an absolute path, otherwise it dies with ‘file not found’. But even when the file is found, it still dies with ‘invalid stream header: 01000000’.

I figure there is a mismatch between the way I dump binary data in JS and the way Java expects it. So I am sort of back at square one… how could I get this scenario to work?

And how do I get the data to load like an asset, from the asset folders?

You can create your own AssetLoader (implement the AssetLoader interface) and register it via:

getAssetManager().registerLoader(ThreeJSLoader.class, "dat");
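A minimal loader sketch might look something like the following (assuming the .dat file is just a flat list of 4-byte ints; `ThreeJSLoader` and the int-reading logic are placeholders for your own format):

```java
import com.jme3.asset.AssetInfo;
import com.jme3.asset.AssetLoader;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical loader: treats the .dat asset as a stream of ints.
public class ThreeJSLoader implements AssetLoader {

    @Override
    public Object load(AssetInfo assetInfo) throws IOException {
        try (InputStream in = assetInfo.openStream();
             DataInputStream data = new DataInputStream(in)) {
            List<Integer> values = new ArrayList<>();
            // Read until end-of-stream instead of trusting available().
            while (true) {
                try {
                    values.add(data.readInt());
                } catch (EOFException eof) {
                    break;
                }
            }
            int[] arr = new int[values.size()];
            for (int i = 0; i < arr.length; i++) {
                arr[i] = values.get(i);
            }
            return arr;
        }
    }
}
```

After registering it as shown above, loading a .dat asset through the asset manager would hand you back whatever `load()` returns (here, an `int[]`).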

For binary files, check that you are reading them with the correct endianness… That is about all the hints I can think of right now.

ObjectInputStream is only for reading data written with Java’s ObjectOutputStream.

It will not work for anything else.
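To illustrate (a small sketch with a hypothetical `ObjectStreamDemo` class): round-tripping through Java serialization works, while handing ObjectInputStream raw ints fails on the header check, which is exactly the ‘invalid stream header’ error above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;

public class ObjectStreamDemo {

    // Writes an int[] with ObjectOutputStream and reads it back:
    // this is the only kind of data ObjectInputStream understands.
    static int[] roundTrip(int[] data) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(data);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(out.toByteArray()))) {
            return (int[]) ois.readObject();
        }
    }

    // ObjectInputStream verifies a magic header (0xACED) in its
    // constructor, so raw ints written by another tool fail immediately.
    static boolean isSerializationStream(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes))) {
            return true;
        } catch (IOException e) {
            // e.g. "invalid stream header: 01000000"
            return false;
        }
    }
}
```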

If you want to read a file that’s a bunch of ints then read a bunch of ints. This is not a JME issue but a Java issue. I don’t know what format that JavaScript is using as my JavaScript is so weak that I couldn’t even see how your code snippet was even writing data.

…in any case, there should be numerous web sites online that can help you read data in Java.

You probably want DataInputStream instead of ObjectInputStream…

Tnx, DataInputStream worked. Now I need to dive into creating a custom assetloader.

It’s really not necessary.

You can get essentially the same effect by grabbing your data as a class resource. (That’s all asset manager is doing.)

Not necessary, but if I understood correctly, you have your models as those .dat files and you want to convert them to jME models at runtime. An AssetLoader is a seamless integration with jME. Just load the asset into your scene graph as you would any other.

And you can even utilize AssetLocators to download your .dat files from the Internet automatically when requested.

This might not be a super viable setup in a production environment. But I don’t really know what you are building anyway. You have so many options with jME.


That worked!

This code is a static method in my utils class that helps classes load int[] arrays from data resources.

    static int[] readIntData(String _file, Class requester) {

        try {

            InputStream instr = requester.getClassLoader().getResourceAsStream(_file);
            DataInputStream dataIn = new DataInputStream(instr);

            int length = instr.available() / 4;

            utils.LOG("Reading " + length + " ints from " + _file);
            int[] arr = new int[length];
            for (int i = 0; i < length; i++) {
                arr[i] = dataIn.readInt();
            }

            return arr;

        } catch (IOException e) {
            utils.LOG("IO Error:" + e.getMessage());
            return null;
        }

    }

Note that instr.available() can return less than the total data left. For files it’s usually based on the file size, but for anything buffered (and even the file system in some cases) it may just return a value up to the internal buffer size.

If it’s working for you then just keep that in mind, it may break in the future and only read part of the data.

Options:

  1. write the length of the array to your file when you save it so that it can be the first thing you read.
  2. fully read the bytes from the InputStream first by buffering, and then read your ints once you know the size (for example, read a fixed-size block at a time into a ByteArrayOutputStream, then use the resulting byte[] in a ByteArrayInputStream wrapped in your DataInputStream).

(1) is definitely the easier option if you control the format.
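Option (2) could be sketched roughly like this (`readIntData` here is a stand-in, not the actual utils method):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAllInts {

    // Buffers the whole stream first, then reads ints once the true
    // length is known, instead of trusting available().
    static int[] readIntData(InputStream in) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] block = new byte[8192];
        int n;
        while ((n = in.read(block)) != -1) {
            buffer.write(block, 0, n);
        }
        byte[] bytes = buffer.toByteArray();
        int length = bytes.length / 4;
        try (DataInputStream data =
                new DataInputStream(new ByteArrayInputStream(bytes))) {
            int[] arr = new int[length];
            for (int i = 0; i < length; i++) {
                arr[i] = data.readInt();
            }
            return arr;
        }
    }
}
```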

Edit: also note: if you still read your files from the resource directly and your files are of any reasonable size then wrapping your InputStream in a BufferedInputStream before passing it to DataInputStream can make reading many times faster.
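For example (a trivial sketch; `buffered` is a hypothetical helper name):

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.InputStream;

public class StreamUtil {
    // Buffer the raw stream so each readInt() call does not hit the
    // underlying resource stream four bytes at a time.
    static DataInputStream buffered(InputStream raw) {
        return new DataInputStream(new BufferedInputStream(raw));
    }
}
```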

Last but not least, don’t forget to close the input stream when you are done reading. In the example code you showed, this statement is not present :laughing:

Tnx both of you. I tend to go quick and dirty, and then forget about it. I better fix both issues right away.

There is another problem, though: the JS script wrote 1, 2, 3, 4, 6305, 8385, 0, 2, 3, 129, 2145, 2146, 0, 1, 2145, 4225, 6305, -1, 0, 1[…]

The Java app read 16777216, 33554432, 50331648, 67108864, -1592262656, -1054867456, 0, 33554432, 50331648, -2130706432, 1627914240, 1644691456, 0, 16777216, 1627914240, -2129657856, -1592262656, -1, 0, 16777216[…]

…so where did that go wrong?

Maybe “big endian” vs. “little endian”… but it could also be that the JavaScript is writing floats or something.

If you have the .dat file, use hexdump to look at the raw bytes to see what the first ‘1’ value is in the first 4 bytes.

This is how the first twenty values look in hex:
1000000, 2000000, 3000000, 4000000, a1180000, c1200000, 0, 2000000, 3000000, 81000000, 61080000, 62080000, 0, 1000000, 61080000, 81100000, a1180000, ffffffff, 0, 1000000

The original values were:

00000001, 00000002, 00000003, 00000004, 000018a1, 000020c1, 00000000, 00000002, 00000003, 00000081, 00000861, 00000862, 00000000, 00000001, 00000861, 00001081, 000018a1, -00000001, 00000000, 00000001

I did some comparing:

00 00 18 a1 , 00 00 00 01 , 00 00 20 c1
a1 18 00 00 , 01 00 00 00 , c1 20 00 00

I think it is very clear this is an endianness problem. So the next thing is to find the solution. Does anyone have a suggestion on how to tackle this on the Java side of things?


Meaning, you read them in Java and just dumped their hex values?

Yeah, I was more interested in the byte output of hexdump but this is enough to diagnose an endian problem at least.

DataInputStream is meant for reading data from DataOutputStream and so assumes “network byte ordering”, ie: what Java normally expects for everything, ie: “the opposite of intel chips”.

Since you already have to deal with the available() issue somehow, you might be reading in all of the bytes now anyway. If you put them in a ByteBuffer then you can set the endianness when viewing it as an IntBuffer.

Something like:
IntBuffer ints = byteBuffer.order(ByteOrder.LITTLE_ENDIAN).asIntBuffer();
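Putting it together, a sketch (assuming you already have the file’s bytes in a `byte[]`; `LittleEndianInts` is just an illustrative name):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class LittleEndianInts {
    // Reinterpret raw little-endian bytes (as written by the JS
    // typed array) as Java ints.
    static int[] toInts(byte[] bytes) {
        IntBuffer ints = ByteBuffer.wrap(bytes)
                                   .order(ByteOrder.LITTLE_ENDIAN)
                                   .asIntBuffer();
        int[] arr = new int[ints.remaining()];
        ints.get(arr);
        return arr;
    }
}
```

With the sample bytes from above, `01 00 00 00` comes out as 1 and `a1 18 00 00` as 6305, matching what the JS script wrote.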
