Suggestion for fixing the JME file loader

After some consideration, I think the jME file loader should be partially rewritten.



Right now, all of the code for loading and saving is in two classes.



The more elements the file format supports, the harder those classes become to read and edit.



While cep suggested letting each class read and write itself, I think a better solution is a plug-in system where each class has its own plug-in counterpart (see the rough sketch after the example below).



For example, to store a node:

The jME writer calls the Node writer plug-in.
        The Node writer plug-in tells the jME writer to store the node's render states.
                The jME writer calls the render state plug-ins.
        The Node writer plug-in then tells the jME writer to store each of the node's children.
                The jME writer calls the plug-in for each child.
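A rough sketch of what such a plug-in system could look like (the interface, class, and method names here are hypothetical guesses of mine, not the actual prototype API):

// Hypothetical sketch only - these names do not match the real prototype.
import java.io.IOException;
import java.util.List;

// One plug-in per scene-graph class; the main writer dispatches to it.
interface WriterPlugin<T> {
    Class<T> handledType();
    void write(T object, SceneWriter writer) throws IOException;
}

// The main writer: looks up the plug-in registered for an object's class and delegates.
interface SceneWriter {
    void writeObject(Object object) throws IOException; // finds and calls the matching plug-in
}

// Example plug-in for a made-up node type with render states and children.
class NodeWriterPlugin implements WriterPlugin<MyNode> {
    public Class<MyNode> handledType() { return MyNode.class; }

    public void write(MyNode node, SceneWriter writer) throws IOException {
        for (Object renderState : node.renderStates) {
            writer.writeObject(renderState); // the writer calls the render state plug-ins
        }
        for (Object child : node.children) {
            writer.writeObject(child);       // the writer calls the plug-in for each child
        }
    }
}

// Minimal stand-in for the scene-graph class being written.
class MyNode {
    List<Object> renderStates;
    List<Object> children;
}

This way, adding support for a new element would mean adding one plug-in class instead of editing the central reader and writer.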



I am currently working on a prototype system.


Good that you posted about it - I want to do similar things, because I want to allow storage of physics…



How far have you gotten? (I have not started the implementation yet.)

I just started. The system to support the plug-ins is mostly there; now I need to start writing the plug-ins for the standard classes.

Hm, NCSoft is working on COLLADA… dunno if they'll contribute that back, but COLLADA seems pretty complete, it's XML and supports physics if I'm not mistaken. That could potentially replace the current jME XML format.



That still leaves the need for a very optimized (speedy and small) binary format… is that what you plan to build Badmi?

No, I was trying to make a slightly slower but extremely powerful jME file format.

llama said:

That still leaves the need for a very optimized (speedy and small) binary format.. is that what you plan to build Badmi?

At least that is what I plan :)

Honestly, after having to play with JmeBinaryReader/Writer with the batch stuff, I wouldn't cry if it went away forever.

I am planning to make it much simpler to read and write, not to remove it.

I suppose with Badmi's system you could make several plug-ins (one for XML, one for binary, etc.)

llama said:

I suppose with Badmi's system you could make several plug-ins (one for XML, one for binary, etc.)


I guess it is possible to set up a system like that. The way I have it set up now, the XML<->jME converters do not need to be changed.

I am going to wait and see what the community wants before I go back to programming.

We actually use Java Serialization and a zipped stream for binary at the moment (probably will change… who knows). It works very well.
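In case it helps picture that approach, here is a minimal sketch of Java serialization over a zipped stream (my own example, not the actual jME code; the file name and the SceneData class are made up):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Made-up stand-in for whatever gets saved; it only has to be Serializable.
class SceneData implements Serializable {
    List<String> meshNames = new ArrayList<String>();
}

public class SerializationExample {
    public static void main(String[] args) throws Exception {
        SceneData scene = new SceneData();
        scene.meshNames.add("player");

        // Write: plain Java serialization, wrapped in a GZIP stream for compression.
        ObjectOutputStream out = new ObjectOutputStream(
                new GZIPOutputStream(new FileOutputStream("scene.bin.gz")));
        out.writeObject(scene);
        out.close();

        // Read it back the same way.
        ObjectInputStream in = new ObjectInputStream(
                new GZIPInputStream(new FileInputStream("scene.bin.gz")));
        SceneData loaded = (SceneData) in.readObject();
        in.close();

        System.out.println(loaded.meshNames); // prints [player]
    }
}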

To add to Renanse's comment, and to clarify my own… originally we were going to use the JmeBinaryReader/Writer, but after a day on it we went the serialization route as a temporary solution. The JmeBinaryReader/Writer is horribly arcane and difficult to extend, and it really became too high a cost to be worth using in our little project.

My suggestion would fix some of the problems you have with the JmeBinaryReader/Writer.

Wow! Thanks for the insight! :wink:

I suggest you make them worse. You know… because you can  }:-@

In the hope that this will clarify what I am doing:



Here is what the Node plug-in would look like:


package com.jmex.model.XMLparser2.plugins;

import com.jme.scene.Node;
import com.jme.scene.Spatial;
import com.jmex.model.XMLparser2.JmeBinaryReader;
import com.jmex.model.XMLparser2.JmeBinaryWriter;
import java.io.IOException;
import java.util.HashMap;
import java.util.Stack;

/**
 *
 * @author Yitzchak Lockerman
 */
public class NodePlugin extends SpatialPlugin {
    // ...
}

If you want file compression and are happy to read and write all of the file's bytes in one hit, here is the compression technique I use (bodyBytes is the byte array to be written, and isZipped is a flag stored alongside it).

You may want to skip the size test and always compress:



// bodyBytes holds the uncompressed file body; compress it with GZIP and
// keep the zipped version only if it is actually smaller
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
GZIPOutputStream gzOut = new GZIPOutputStream(bOut);
gzOut.write(bodyBytes);
gzOut.flush();
gzOut.close();

byte[] zippedBytes = bOut.toByteArray();
if (bodyBytes.length > zippedBytes.length) {
    isZipped = true;          // flag stored with the data so the reader knows to decompress
    bodyBytes = zippedBytes;
}
try {
    bOut.flush();
    bOut.close();
} catch (Exception e) {
    // closing a ByteArrayOutputStream cannot fail; safe to ignore
}




And here is the other direction, to decompress:





// decompress dataBytes and return the decompressed bytes
public static byte[] decompress(byte[] dataBytes) throws Exception {
    ByteArrayOutputStream outBuffer = new ByteArrayOutputStream();
    ByteArrayInputStream inBuffer = new ByteArrayInputStream(dataBytes);
    GZIPInputStream gunzip = new GZIPInputStream(inBuffer);
    byte[] buffer = new byte[4096]; // read in chunks
    int n;
    while ((n = gunzip.read(buffer)) >= 0) {
        outBuffer.write(buffer, 0, n);
    }
    outBuffer.flush();
    outBuffer.close();
    gunzip.close();
    return outBuffer.toByteArray();
}
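For reference, here is a small self-contained round trip of the same idea (my own test harness; it reuses the decompress() method posted above and assumes the isZipped flag is stored with the data so the reader knows whether to decompress):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTripTest {
    public static void main(String[] args) throws Exception {
        byte[] original = new byte[4096]; // all zeros, so it compresses very well

        // Compress, following the same pattern as the writer snippet above.
        ByteArrayOutputStream bOut = new ByteArrayOutputStream();
        GZIPOutputStream gzOut = new GZIPOutputStream(bOut);
        gzOut.write(original);
        gzOut.close();
        byte[] zippedBytes = bOut.toByteArray();

        // Keep the zipped form only if it is smaller; the flag would go in the file header.
        boolean isZipped = zippedBytes.length < original.length;
        byte[] stored = isZipped ? zippedBytes : original;

        // Reading side: decompress only if the flag says the bytes were zipped.
        byte[] restored = isZipped ? decompress(stored) : stored;
        System.out.println("zipped=" + isZipped
                + " roundTripOk=" + Arrays.equals(original, restored));
    }

    // Same decompress() as posted above.
    public static byte[] decompress(byte[] dataBytes) throws Exception {
        ByteArrayOutputStream outBuffer = new ByteArrayOutputStream();
        GZIPInputStream gunzip = new GZIPInputStream(new ByteArrayInputStream(dataBytes));
        byte[] buffer = new byte[4096];
        int n;
        while ((n = gunzip.read(buffer)) >= 0) {
            outBuffer.write(buffer, 0, n);
        }
        gunzip.close();
        return outBuffer.toByteArray();
    }
}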


Gee, I hope no one starts crying over needless work once the new jME binary format makes it into CVS…

:? :? You are working on a new jME format? :? :?

Renanse,



It would be helpful if you could elaborate, although I have a hunch as to what you are referring to…



Can you guesstimate when this will be in CVS, to save a lot of needless work, please?