Change available colours

Hey jMonkey community :slight_smile:

I haven’t really used jMonkey before and haven’t looked into it much, but I would like to in the near future.

I just want to know whether it’s possible to restrict the colours that appear in the final rendered frame.
I have just a few colours, maybe 12-16, and I know their RGB values. Now I want an image rendered with only these colours. It could then maybe choose the nearest of the available colours for every pixel of an actual RGB render.
Or could I use only these specific colours in the materials and render them shadeless? That way there shouldn’t be any other colours in the render…

Maybe someone can advise me :slight_smile:
Erik

Yes

There is a material that doesn’t do any lighting; I suggest looking up the tutorials and the examples.
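For reference, a minimal sketch of what that looks like with the stock Unshaded material definition (the box geometry and the colour here are just placeholders):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

public class FlatColourTest extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        Geometry geom = new Geometry("Box", new Box(1, 1, 1));

        // Unshaded.j3md ignores lights entirely, so the geometry is drawn
        // in exactly the colour set here and no other shades appear.
        Material mat = new Material(assetManager,
                "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", new ColorRGBA(0.8f, 0.2f, 0.2f, 1f));

        geom.setMaterial(mat);
        rootNode.attachChild(geom);
    }

    public static void main(String[] args) {
        new FlatColourTest().start();
    }
}
```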

A filter would do this quite easily, I imagine, although you would need to write the filter since AFAIK none of the existing ones do this.
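If it helps, here’s a minimal sketch of the per-pixel logic such a filter would need. It’s plain Java over a BufferedImage rather than an actual jME Filter (the real thing would do the same comparison per fragment in a shader), and the 4-colour palette is just a placeholder for your 12-16 values:

```java
import java.awt.image.BufferedImage;

public class PaletteQuantizer {

    // Placeholder 4-entry palette; you'd list your own RGB values here.
    private static final int[] PALETTE = {0x000000, 0xFF0000, 0x00FF00, 0x0000FF};

    /** Replaces every pixel with the nearest palette colour. */
    public static void quantize(BufferedImage img) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                // Force alpha to opaque so ARGB images don't turn transparent.
                img.setRGB(x, y, 0xFF000000 | nearest(img.getRGB(x, y)));
            }
        }
    }

    /** Nearest palette entry by squared RGB distance. */
    private static int nearest(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        int best = PALETTE[0];
        int bestDist = Integer.MAX_VALUE;
        for (int p : PALETTE) {
            int dr = r - ((p >> 16) & 0xFF);
            int dg = g - ((p >> 8) & 0xFF);
            int db = b - (p & 0xFF);
            int dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = p;
            }
        }
        return best;
    }
}
```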

Ok, thanks for your answers :slight_smile:

Since I know it’s at least possible now, I will look into it.

Oh, I’ve got another question which may seem a bit weird.

My final display has non-square pixels, and this has to be taken into account in the rendering process. The viewport has to have a different aspect ratio than the rendered bitmap (which will then be a bit stretched). Is it possible to accomplish that?

You’d add a transform to the scenegraph-to-screen path.
I.e. I’d look in Camera or ViewPort; I don’t know which of them would work, and if both work, which would be preferable, because that part of the setup always “just worked” for me and I never checked the details.
Technically, adding a Transform to the rootNode would work too, but that’s going to make your scene-building experience miserable unless you’re thinking in nonstandard coordinates all day and calculating coordinate-transformed rotation quaternions in your head.

If you set up the camera with the right values then you can change the aspect ratio… if that’s what you mean.

Well, I have to change the aspect ratio of the rendered image while the aspect ratio of the camera stays the same, or vice versa. That way the rendered image will be stretched to compensate for my display device, where the pixels aren’t squares but rectangles.

Can you even change the aspect ratio of the rendered image?
I’d have thought that’s what the Camera is for.

@zerocool said: Well, I have to change the aspect ratio of the rendered image while the aspect ratio of the camera stays the same, or vice versa. That way the rendered image will be stretched to compensate for my display device, where the pixels aren't squares but rectangles.

I have trouble having this make any sense.

For a camera, you have a projection matrix. This turns world coordinates into whatever screen coordinates you need. It cares not for square pixels. If you set the aspect ratio to something different from the physical proportions of the viewport, then you will get rectangular “pixels”, in the sense that neighboring pixels will render the same stuff. Or even the other way around.

You are being kind of cryptic… which could mean we don’t understand what you are doing but it also could be an indicator that you don’t understand what you are doing.

Okay, let me try to explain it again. My monitor has non-square pixels. If I displayed a normal bitmap there, it would be stretched. That’s logical, isn’t it?
Of course I want the image to look right on my monitor, so the image displayed on it has to be stretched by the same amount, but the other way.

The viewport/camera should have the same aspect ratio as my monitor, so that if I have a square in my scene it is also a square on my monitor.
Now let’s say my monitor has 200x100 pixels but is physically square (an aspect ratio of 1:1 rather than 2:1).
My camera now also has a square viewport, but the rendered image must still have 200x100 pixels.
Do you understand my problem now?

Yes, but I don’t think you understand our answers.

If you have a square monitor that is 200x100 pixels then you set your aspect ratio to counter this… say, 1. The camera will be rendering to 200x100 pixels as if it were really 100x100. Normally the aspect ratio would be 2 for those pixels, but you want them to appear 1:1, so you’d set the aspect ratio to 1/1, or 1.

Someone looking at this on a normal screen would see everything stretched out of shape.
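In code that could look something like this, assuming the standard Camera.setFrustumPerspective() call (the 45° FOV and the near/far planes are placeholder values):

```java
// The screen is physically square, so render as if the viewport were 1:1
// even though the framebuffer is 200x100 pixels.
float desiredAspect = 1f; // physical width / physical height of the screen
cam.setFrustumPerspective(45f, desiredAspect, 1f, 1000f);
```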

@zerocool said: The viewport/camera should have the same aspect ratio as my monitor, so that if I have a square in my scene it is also a square on my monitor.

It’s the other way round: The camera aspect ratio needs to be the inverse of the pixel aspect ratio so that the two cancel each other out.
I.e. if the pixels have double the height, the camera display needs to be vertically squished to half the size.
(You also need to factor in the actual screen aspect ratio, of course. I.e. nonsquare pixels on a nonsquare screen means multiplying and/or dividing values to get the combined transform.)

I’d suggest experimenting with aspect ratios now. You know where to look, what the relevant factors and divisors are, and the overall form of the transform to apply. You can determine the right calculation by simply trying out all variants; there aren’t that many. If you test things on a monitor with square pixels, just make sure it looks squished in the direction in which the pixels are extended, that’s all.

@toolforger said: It's the other way round: The camera aspect ratio needs to be the inverse of the pixel aspect ratio so that the two cancel each other out. I.e. if the pixels have double the height, the camera display needs to be vertically squished to half the size.

…I’m pretty sure I got it right.

Normally, for a 200x100 pixel display the aspect ratio would be 2 (width/height). If you want it to look right on a square display then you’d use an aspect ratio of 1 (100/100 or whatever). That would cause an effective 100x100 image to be rendered to a 200x100 camera… which will then get squashed back to a square by the hardware.

In other words, if you rendered a square 50x50 units then it would be 100x50 on the camera and look like 50x50 again on the display.

For what it’s worth, the actual formula for the camera aspect ratio should be something like this in this case:

Given:
Pixel resolution of screen pWidth, pHeight
Actual physical dimensions of the screen sWidth, sHeight (in centimeters or whatever… doesn’t matter as long as you are consistent)

Pixel ratio: pRatio = (sWidth/pWidth) / (sHeight/pHeight), i.e. cm per pixel / cm per pixel

…pRatio is the physical width of a pixel, sort of, relative to a height of 1.

So the actual camera aspect ratio is then: (pWidth * pRatio) / pHeight

…and if you do the math to reduce that… you get, tadah:
aspectRatio = sWidth/sHeight

Because the pixel counts are irrelevant to the aspect ratio of the camera.
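Plugging in the square 200x100-pixel monitor from earlier (taking its physical size as, say, 10cm x 10cm):

pRatio = (10/200) / (10/100) = 0.05 / 0.1 = 0.5
aspectRatio = (200 * 0.5) / 100 = 1 = 10/10 = sWidth/sHeight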

Physical-size aspect ratio (when talking about the aspect ratio of a single pixel) is one concept.
Pixel-count aspect ratio (when talking about a square screen) is another one. Numerically the inverse, actually.
That’s bound to be confusing, at least to those who are still wrapping their minds around it.

But if you want the aspect ratio to look right on a particular screen then you take the physical width, divide it by the physical height, and pass that to setFrustumPerspective().

We usually get away with using pixels because our monitors have square pixels… so passing cam.getWidth()/cam.getHeight() is acceptable. When the pixels aren’t square, just pass the physical ratio. Pull out a ruler if necessary.
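Putting it together, a minimal sketch (the physical dimensions here are made up; measure your own screen):

```java
// Physical dimensions of the display, measured with a ruler
// (placeholder values; in cm here, but any consistent unit works).
float screenWidthCm = 10f;
float screenHeightCm = 10f;

// Aspect ratio from physical size rather than pixel counts, so
// non-square pixels are compensated for automatically.
float aspect = screenWidthCm / screenHeightCm;

// 45 degree vertical FOV and 1/1000 near/far planes are typical defaults.
cam.setFrustumPerspective(45f, aspect, 1f, 1000f);
```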
