COLLADA importer references

Hello everybody.

I do not understand how the COLLADA 1.4 XML file importer manages to load files in a single pass. A COLLADA node can publish an identifier (id) or a scoped identifier (sid) so that it can be referenced from another node.



As far as I can see, ColladaImporter.java has a map named "resourceLibrary". This map records identifier/node pairs on the fly during processing, making it possible to resolve each encountered reference with a simple 'get(id)' call.
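The one-pass lookup described above can be sketched as follows. This is a minimal illustration, not the actual jME code; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of one-pass reference resolution: it only works if every id
// is registered before any reference to it is encountered.
public class OnePassResolver {
    private final Map<String, Object> resourceLibrary = new HashMap<>();

    // Called when an element carrying an id attribute is processed.
    public void register(String id, Object node) {
        resourceLibrary.put(id, node);
    }

    // Called when a url="#id" reference is encountered later in the stream.
    // A forward reference (target not yet registered) fails here.
    public Object resolve(String id) {
        Object target = resourceLibrary.get(id);
        if (target == null) {
            throw new IllegalStateException("Unresolved forward reference: " + id);
        }
        return target;
    }

    public static void main(String[] args) {
        OnePassResolver r = new OnePassResolver();
        r.register("myGeometry", "geometry-node");
        System.out.println(r.resolve("myGeometry")); // defined before use: works
    }
}
```

The sketch makes the assumption in question explicit: resolution succeeds only because registration happened earlier in document order.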



I'm surprised because I thought two passes were required: a first one to build an identifier map, and a second one to parse the file. Yet ColladaImporter.java succeeds in parsing the file in one pass.



It looks like the code is based upon the following assumption:

"all references to a node are located after the referenced node in the XML document"





Am I missing something important?

Does anyone know whether Khronos specified this rule in COLLADA 1.4?

Is it a de facto standard?



Thanks for your help !


sensei said:

I'm surprised because I thought two passes were required: a first one to build an identifier map, and a second one to parse the file. Yet ColladaImporter.java succeeds in parsing the file in one pass.

It looks like the code is based upon the following assumption:
"all references to a node are located after the referenced node in the XML document"


You are right. That's the reason a two-pass algorithm is required.
The current version of ColladaImporter doesn't work well (one reason for this being its one-pass algorithm).

I am using my own version of ColladaImporter, and it works in two passes.
First it builds a map of nodes/animations, and then it interprets the document.
(It is a modified version of the jME ColladaImporter, but it lacks many features and is not guaranteed to be compatible with the COLLADA spec.)
I don't know the COLLADA format exactly, but there seems to be no way to implement it correctly without a two-pass algorithm.
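The two-pass scheme can be sketched like this. It is a hedged illustration with invented types (not the modified importer itself), assuming the document has already been parsed into some tree of elements:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Element is a hypothetical stand-in for whatever node type the parser yields.
class Element {
    String id;                       // value of an id attribute, or null
    String ref;                      // value of a url="#id" reference, or null
    List<Element> children = new ArrayList<>();
}

public class TwoPassImporter {
    final Map<String, Element> idMap = new HashMap<>();

    // Pass 1: walk the whole tree and record every id.
    void buildIdMap(Element e) {
        if (e.id != null) idMap.put(e.id, e);
        for (Element c : e.children) buildIdMap(c);
    }

    // Pass 2: interpret the document; references may now point anywhere,
    // including at elements that appear later in document order.
    void interpret(Element e) {
        if (e.ref != null) {
            Element target = idMap.get(e.ref); // found regardless of position
        }
        for (Element c : e.children) interpret(c);
    }

    public static void main(String[] args) {
        Element root = new Element();
        Element user = new Element();    user.ref = "later";
        Element defined = new Element(); defined.id = "later";
        root.children.add(user);     // the reference appears first...
        root.children.add(defined);  // ...the definition appears later
        TwoPassImporter imp = new TwoPassImporter();
        imp.buildIdMap(root);        // pass 1
        imp.interpret(root);         // pass 2
        System.out.println(imp.idMap.containsKey("later")); // true
    }
}
```

The point of the demo is the forward reference: pass 2 resolves it even though the definition comes after the use, which a one-pass importer cannot do.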

Thanks for your response.



(I may propose a new version of ColladaImporter to the community, because I have encountered several limitations with the current implementation.)


And what is the status? Have you made any fixes to ColladaImporter?

First I sent a mail to Feeling Software; they told me that Feeling Software is not currently working on COLLADA 1.5, since none of their paying clients have requested it. So it is not necessary to invest in COLLADA 1.5 for now (for your information, COLLADA 1.5 is not compatible with 1.4).



Well, I saw that the XML binding was generated with XMLSpy.

It does not parse element content (such as float sequences), it does not expose iterators on arrays, and finally I found no way to implement the first pass (the one that builds the id map) without writing a lot of code dedicated to walking the XML bean structure.



I would like to change the parsing side of the COLLADA importer. The jME side of the COLLADA importer code looks fine to me.



I started to look for an XML binding technology and focused on the JAXB 2 implementation bundled with Java 1.6.

The JAXB unmarshalling process submits every "XML bean" to a listener, which is a perfect mechanism to implement the "first pass" with no assumption about the document structure. I started to read the XML, XSD, and JAXB specs and hack a prototype.
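For reference, the listener mechanism works roughly as below. This is a self-contained sketch of the pattern; real JAXB exposes it as Unmarshaller.Listener with an afterUnmarshal(Object target, Object parent) callback, and IdHolder is a hypothetical interface for generated classes that carry an id:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface implemented by generated classes with an id attribute.
interface IdHolder { String getId(); }

// In real JAXB this would extend Unmarshaller.Listener; the unmarshaller
// calls afterUnmarshal for every bean it builds, so the id map fills up
// in a single pass with no assumption about where ids appear.
class IdCollector {
    final Map<String, Object> idMap = new HashMap<>();

    public void afterUnmarshal(Object target, Object parent) {
        if (target instanceof IdHolder) {
            String id = ((IdHolder) target).getId();
            if (id != null) idMap.put(id, target);
        }
    }
}

public class ListenerDemo {
    public static void main(String[] args) {
        IdCollector c = new IdCollector();
        IdHolder bean = () -> "mesh01";   // stand-in for an unmarshalled bean
        c.afterUnmarshal(bean, null);     // as fired by the unmarshaller
        System.out.println(c.idMap.containsKey("mesh01")); // true
    }
}
```

In actual JAXB 2 code one would call unmarshaller.setListener(...) before unmarshalling the .dae file; the sketch only shows the collecting logic.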



Unfortunately I ran into a lot of trouble. I posted an SOS on the JAXB forums; please check it out: http://forums.java.net/jive/thread.jspa?threadID=58247&tstart=0



I will post a thread when I have a working prototype, benchmarks, and a global design proposal. For now I'm trying to make JAXB parse my file, and I need the help of your community. If you have any clue, please let me know.

The JAXB issue is now solved. I'm going to port the current code onto the new XML binding infrastructure, keeping the same level of functionality for now.



Thanks for all.

IMO, XMLBeans is just as powerful but much less quirky and lighter weight.

I agree with you. The JAXB specification is complex, using JAXB is painful, and its performance/scalability are not so good.
You raise a question on which I would like community feedback: which XML API should we use to parse COLLADA?
I ruled out raw SAX because resolving COLLADA references from the event stream would have required too much effort, and I'm not sure I'm smart enough to produce a sound implementation.
I ruled out raw DOM because I supposed (but did not prove) that the memory footprint of the DOM tree (with float lists stored as text) would not be optimal compared to an in-memory data structure (where float lists are stored as arrays), such as the one produced by binding frameworks.
I looked into XML binding frameworks and identified JiBX, Apache XMLBeans, and JAXB as candidates (check http://www.ibm.com/developerworks/xml/library/x-databdopt2/). The point is that JAXB is included in the Java platform whereas JiBX and XMLBeans are not. I think jME has to stay light, and I don't want to add new dependencies to the jME Web Start. That's why I started to prototype a JAXB implementation. Does the community agree with this reasoning?
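To illustrate the memory argument: a COLLADA <float_array> element carries whitespace-separated floats as text, and converting that text into a float[] is exactly the step a raw DOM tree skips. A minimal sketch (the method name is invented):

```java
// Why float lists matter: a float[] of N values costs roughly 4*N bytes,
// whereas keeping the DOM text node costs 2 bytes per character of the
// decimal representation, plus node overhead.
public class FloatArrayParse {
    static float[] parseFloatArray(String text) {
        String[] tokens = text.trim().split("\\s+");
        float[] values = new float[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            values[i] = Float.parseFloat(tokens[i]);
        }
        return values;
    }

    public static void main(String[] args) {
        // Typical <float_array> content: positions of a single triangle.
        float[] v = parseFloatArray("0.0 0.0 0.0  1.0 0.0 0.0  0.0 1.0 0.0");
        System.out.println(v.length); // 9
    }
}
```

A binding framework (or a hand-written bean) can perform this conversion once at load time and discard the text, which is the footprint advantage assumed in the reasoning above.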

I don't see any recent mods to these classes.  Are you working on a local copy?  If so, maybe we could get a dedicated branch for this.  That would allow for efficient cooperative dev, and the admins could merge into trunk when, how, and if they see fit.

You are right, I work on a local project completely disconnected from jME. I'm focused on the infrastructure: resolving internal and external references, and being compliant with Java jar URIs and COLLADA URIs. Still being defined :).
I'm not ready to publish the code because I'm still learning the URLStreamHandlerFactory API. I will publish a first revision of the code as soon as I have set up the pure COLLADA infrastructure (don't know when).

What kind of collada input files are you working with (like generated with what modeler program)?

For now I use the Khronos COLLADA database. I will also use some Feeling Software MAX exports.

  Have you done anything with importing armatures or animations/actions, or do you have any guesses about the feasibility of same?

Not for now.

sensei said:

The JAXB issue is now solved. I'm going to port the current code onto the new XML binding infrastructure, keeping the same level of functionality for now...


Send me a PM if you need any more JAXB help.  IMO, XMLBeans is just as powerful but much less quirky and lighter weight.

I ran into most of the issues you describe about a year ago.  I gave up because I couldn't dedicate the time needed to master Collada format without any help, and help was not forthcoming.  Congratulations on your accomplishments.

I don't see any recent mods to these classes.  Are you working on a local copy?  If so, maybe we could get a dedicated branch for this.  That would allow for efficient cooperative dev, and the admins could merge into trunk when, how, and if they see fit.

What kind of collada input files are you working with (like generated with what modeler program)?  IIRC, the Blender Collada exporter has serious bugs and limitations.  A few people, including myself, tried to pick it up from the company who aborted it, but couldn't get any cooperation.  I need to look into that again, because (a) I think I remember seeing some updates to the Blender exporter module, and (b) I've learned a good amount more about the Blender scene DOM since then.  Have you done anything with importing armatures or animations/actions, or do you have any guesses about the feasibility of same?

sensei:



I just now got my remote-side Collada exporter working again.



How would you like to offload the parsing step to me, allowing you to concentrate on Collada/JME integration?  I can code up a generic JavaBean with getters and setters (using J2SE Collections and JME types) without a binding infrastructure.  This would be about 19x more efficient than the binding approach, and the API to this bean will be exactly what you want, instead of accommodating the over-engineered binding interfaces.  I would be surprised if anybody on the JME team would object, since this will require much less change to the JME code base.



I can do this today if you reply in the affirmative ASAP.  I.e., I can get code to you today that you can use.

blane: Thank you for your help!


I'd agree with your criteria if we really need full marshalling and unmarshalling to Java classes, but I doubt we do.  Unless you are making extensive OOD use of the generated interface classes, I don't think the great increase in complexity and deliverable size is justified.

Well, I am trying an OOD approach, since JAXB allows me to. Every binding class holding an id or an sid implements dedicated interfaces. But OOD is not a requirement.

I just now got my remote-side Collada exporter working again.

What is the remote-side Collada exporter? Do you mean you developed an exporter?


I can code up a generic JavaBean with getters and setters (using J2SE Collections and JME types) without a binding infrastructure.  This would be about 19x more efficient than the binding approach, and the API to this bean will be exactly what you want, instead of accommodating the over-engineered binding interfaces.

Why not. Please tell me more about the XML parsing API you are going to use. Can you explain why creating beans by hand is simpler than the JAXB generation solution? (I do not know XML parsing solutions very well.)
Does it allow resolving id/sid links? Even in <Extra> elements?

XML parsing with JAXB works fine on a simple model, and I would like to test the reliability of the JAXB solution. I also want to try your proposal. Does producing your beans represent a significant amount of work?

sensei said:

blane: Thank you for your help!
...
I just now got my remote-side Collada exporter working again.

What is the remote-side Collada exporter? Do you mean you developed an exporter?

I'm talking about a modeler-side exporter here.  The modeler program writes the *.dae file that we want to import.  Theoretically, we want to support the 1.4 spec, but pragmatically you are most interested in supporting Khronos and MAX; I'm most interested in supporting Blender.



I can code up a generic JavaBean with getters and setters (using J2SE Collections and JME types) without a binding infrastructure.  This would be about 19 x more efficient than the binding approach, and the API to this bean will be exactly what you want, instead of accommodating the over-engineered binding interfaces.

Why not. Please tell me more about the XML parsing API you are going to use. Can you explain why creating beans by hand is simpler than the JAXB generation solution? (I do not know XML parsing solutions very well.)
Does it allow resolving id/sid links? Even in <Extra> elements?


It supports everything you can see in the XML file, including comments and even whitespace if that is desired.  It can link via any attributes.  As for the specific tool, and whether the linking is done with hand-written Java or automatically by the tool, that depends on the requirements (see the final paragraph below).  Either way, the JavaBean will produce data with the internal linking that you need.

Creating beans by hand is simpler because it is plain JavaBean coding, which every Java developer is capable of, and that is the entire implementation effort (except for adding a single new jar to the classpath).  The binary distribution grows by just the JavaBean class and its nested classes, which are trivial DOs (data objects) with fields for native Java types, objects, and collections thereof.  The interface to the generated data is a simple JavaBean with methods for exactly what you need.

A binding solution requires a JAXB expert to manage the binding settings/configs, requires multiple new build steps (generating interfaces and stubs, compiling, etc.), and adds dozens of interfaces and classes to the binary distribution.  The interface to the generated data is so complex that you need to generate a Javadoc API spec to navigate the structures you get.
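As a rough illustration of the contrast, a hand-coded bean might look like this (all class and field names are hypothetical, not from any actual code):

```java
import java.util.ArrayList;
import java.util.List;

// A trivial hand-written data object: plain fields, getters/setters,
// J2SE collections. A SAX/StAX handler would populate it during parsing,
// with no generated binding classes involved.
public class ColladaDoc {
    public static class Geometry {
        private String id;
        private float[] positions;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public float[] getPositions() { return positions; }
        public void setPositions(float[] p) { this.positions = p; }
    }

    private final List<Geometry> geometries = new ArrayList<>();
    public List<Geometry> getGeometries() { return geometries; }

    public static void main(String[] args) {
        // What a parser would do as it walks the file:
        ColladaDoc doc = new ColladaDoc();
        Geometry g = new Geometry();
        g.setId("box");
        g.setPositions(new float[]{0, 0, 0, 1, 0, 0});
        doc.getGeometries().add(g);
        System.out.println(doc.getGeometries().get(0).getId()); // box
    }
}
```

The whole API surface is visible in one file, which is the simplicity argument being made above; whether it scales to the full COLLADA schema is exactly the open question in this thread.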


XML parsing with JAXB works fine on simple model, and I would like to test the reliability of the JAXB solution on. I also want to try your proposal. Does it represent an important amount of work to produce your beans ?


The issue is not what will work (any XML parsing tactic will work) but whether the extreme increase in complexity and size of XML binding is justified here.

PM (Private Message) me a list of the nodes and attributes you want to import (I'll assume all attributes if you do not specify).  IIRC, the Collada spec accommodates a bunch of stuff that does not apply to JME.  Depending on the size of your list, I can get you a working bean in a couple of hours.  ...  Assuming you get me the requirements while I still have time to work on it.

Thanks for sending the requirements.



I've been doing some work with JME-native persistence, and it occurs to me that the XMLImporter class accomplishes the same parsing goals.  I could definitely build a parser more quickly with JDOM or XStream, but using the com.jme.util.export.xml classes would be better in the long run, since that would add NO extra dependencies and would increase consistency.



Any tips or advice from com.jme.util.export.xml authors or users would be appreciated.