Machine-based model loader?

I've been working this idea out in my head for a while and would like to develop it as something to contribute to jME, but I'm afraid I'm going to need a little insight into how to go about it.  In Battlefield 2, based on your performance and screen-resolution settings, the game "optimizes shaders" for your machine.  I'm assuming this consists of generating files from the high-quality shaders, writing them to a cache on the hard drive, and then simply reading them each time thereafter, which decreases load time and tailors the experience to your machine's specs.

What I want to do is have high-quality models with high-quality textures and, based on settings the user specifies or on detected machine performance, pre-load those high-quality models, use LOD to reduce the detail to whatever is most efficient for that PC, reduce the quality of the textures likewise, and then write that information back out to disk to be re-used in the future.  Is this as straightforward as it seems?  Is it a good idea, or is there a better way to go about it?
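For what it's worth, the write-once-read-thereafter part of the idea could be sketched like this (all names here are hypothetical, nothing is jME API; the "optimization" is a stand-in for whatever LOD/texture work would really happen):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of a cache-on-first-run asset loader: the expensive
// per-machine optimization happens once, and later launches just read
// the cached result back from disk.
public class AssetCache {
    private final Path cacheDir;

    public AssetCache(Path cacheDir) throws IOException {
        this.cacheDir = cacheDir;
        Files.createDirectories(cacheDir);
    }

    /** Returns optimized bytes, producing and caching them only on a miss. */
    public byte[] load(String assetName, byte[] original) throws IOException {
        Path cached = cacheDir.resolve(assetName + ".opt");
        if (Files.exists(cached)) {
            return Files.readAllBytes(cached);  // fast path: re-use cache
        }
        byte[] optimized = optimizeForThisMachine(original);
        Files.write(cached, optimized);         // slow path: done only once
        return optimized;
    }

    // Stand-in for the real work (LOD reduction, texture downscaling, ...).
    private byte[] optimizeForThisMachine(byte[] original) {
        return original; // identity here; the real version would shrink data
    }
}
```

One wrinkle a real version has to handle: invalidating the cache when the user changes their quality settings or the source assets are updated.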

To me this seems great because my models can all be extremely high-quality: machines that can handle it get the best graphics possible, while the game still looks very nice on lesser machines as well.

(What follows is opinion, certainly not every game is the same)  :mrgreen:

  1. "Optimizes shaders" sounds more like selecting the proper shaders from the (probably) huge list that has been written and included on the CD. There are so many paths to go down with shaders: ATI vs. nVidia, Shader Model 3.0, 2.0, 1.0, etc. Once these shaders are selected, they may be compiled for faster access. I'd be surprised if it's programmatically generating shaders from some master shader file.

  2. What I'm guessing you want to do (it wasn't entirely clear from your post; "pre-load those high quality models and use LOD to reduce the detail" is what I'm interpreting) is load a high-quality model and reduce the poly count programmatically (via CLODMesh or similar)? If so, no, in my opinion this is not a good solution at all. First, you will have artists generating these models, and if you reduce the poly count, they will be appalled by the results (rightly so, too: they can do a much better job). Programmatic polygon reduction is never going to produce the results you expect, and since you are going to be seeing these models up close, you won't get away with it. Even tools refined over years for poly reduction (3DSMax, Maya, etc.) require artist manipulation to get right: why remove one triangle from the foot, making it look like a flipper, when you could remove one triangle from the hair and never notice? Second, if you are using a skin-and-bones solution, you can get odd artifacts if you start reducing or moving edges that are linked to bones. Third, you are going to have huge memory overhead for no reason: you'll be loading a high-quality model, creating a VET table or similar, and reducing the polys just to end up with a lower-quality model.

  3. Why not simply deliver a low/medium/high model and load the proper one as needed? If this is web-deliverable and you are concerned about bandwidth, even better: you can develop a streaming-based solution that downloads the appropriate quality model as needed. If you don't need the high-quality models, you don't have to download nearly as much.

  4. You mention delivering the high-quality models with high-quality textures. Are you going to reduce those textures programmatically too? Again, why not deliver low/med/high as needed? Loading a high-quality texture, reducing it, and swapping sounds like a long way to go for not much benefit that I can see.
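The low/medium/high delivery suggested in item 3 can be as simple as resolving a path from a quality setting (the directory layout and ".jme" extension here are just an illustration, not a required convention):

```java
// Sketch of shipping three artist-made versions of each model and
// picking the file to load from the user's (or detected) quality tier.
public class ModelSelector {
    public enum Quality { LOW, MEDIUM, HIGH }

    /** e.g. modelPath("tank", Quality.HIGH) -> "models/high/tank.jme" */
    public static String modelPath(String baseName, Quality quality) {
        return "models/" + quality.name().toLowerCase() + "/" + baseName + ".jme";
    }
}
```

A streaming variant would fetch the same path from a server on first use instead of reading it from the install directory.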

Very well-reasoned response.  You're right, it makes more sense to have this done by the modeler, not programmatically.

Okay, so forget about this system…I guess this is why such a system has yet to be created. :wink:

Thanks for the insight.

<useless musing>

I wonder if you’d end up with stick figures if the computer generated the low-detail models itself…?  How is Spore dealing with things like that?  Maybe all the components have high, medium, and low versions, and the game just stores what to do to them to recreate the player’s creature at those detail levels.

</useless musing>

Such a system may not be useful for polygon reduction, but I think many games do programmatically resize textures.

As an example, I think Quake does not ship resized versions of all its textures, but lets the user switch to low-resolution textures to save video memory.
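That kind of resize-on-load is cheap to do in plain Java (this uses java.awt only, nothing engine-specific): halving each dimension cuts a texture's memory footprint to a quarter.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Sketch of programmatic texture reduction: scale the source image down
// by an integer factor with bilinear filtering before uploading it.
public class TextureScaler {
    public static BufferedImage downscale(BufferedImage src, int factor) {
        int w = Math.max(1, src.getWidth() / factor);
        int h = Math.max(1, src.getHeight() / factor);
        BufferedImage dst = new BufferedImage(w, h, src.getType());
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null); // filtered resize into dst
        g.dispose();
        return dst;
    }
}
```

In practice a renderer often gets this almost for free anyway, since mipmap generation already produces the smaller versions.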

Note that if you offer a high/med/low option to users, they will tend to try them all out, which would result in downloading the whole content anyway.

Also, if you don't provide that option to the user and your game fails to approximate the actual speed of the machine, you might end up with good machines running in low mode or slow machines running in high mode.
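Approximating machine speed could be as crude as timing a fixed chunk of work at startup and bucketing the result; a sketch (the thresholds and workload are made up, and a real detector would also have to look at the GPU, VRAM, and resolution, since CPU speed alone says little about rendering):

```java
// Crude CPU micro-benchmark: run a fixed workload, measure elapsed time,
// and map it to a default quality tier the user can still override.
public class SpeedProbe {
    public enum Tier { LOW, MEDIUM, HIGH }

    public static Tier probe(long mediumThresholdMs, long highThresholdMs) {
        long start = System.nanoTime();
        double sink = 0;
        for (int i = 1; i < 5_000_000; i++) {
            sink += Math.sqrt(i);               // fixed workload
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (sink < 0) System.out.println(sink); // keep the loop from being optimized away
        if (elapsedMs <= highThresholdMs)   return Tier.HIGH;
        if (elapsedMs <= mediumThresholdMs) return Tier.MEDIUM;
        return Tier.LOW;
    }
}
```

Treating the probe as a default rather than a hard setting avoids the failure mode above: a wrong guess is then only one menu click away from being fixed.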

edit: darkfrog, what's with that avatar? did you use plastic surgery to abandon your amphibian looks? :slight_smile:

Well, I was trying to find something offensive towards monkeys to post, but that was the best I was able to come up with.  I searched for "fire monkey" but surprisingly found very little.  I thought about going to the zoo and doing my own "amateur photography" with some gasoline and matches, but thought better of it in the end. :wink:

Oh well…I guess mojo lucks out this time. :slight_smile: