So I have been digging through the jME code, trying to understand how light intensity works.
The wiki was enlightening:

"You can increase the brightness of a light source gradually by multiplying the light color by values greater than 1.0f."
But unfortunately I am still in the dark. (OK, enough puns for one post).
I get that a color, say new ColorRGBA(2f, 2f, 2f, 1f), will be brighter than new ColorRGBA(1f, 1f, 1f, 1f).
But I am struggling to figure out how this correlates to a lumens or watts measurement. Does anyone know the history behind how lights were implemented in jME, and whether that color value correlates to anything physical?
A lumen is a unit of the absolute amount of light (luminous flux) emitted by a given source. It is not exactly brightness. (That would be lux, i.e. lumens per square meter.)
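To make the lumen/lux distinction concrete, here is a trivial sketch; the bulb and room numbers are made up purely for illustration:

```java
public class LuxExample {
    // Illuminance (lux) is luminous flux (lumens) spread over an area (m^2).
    static double lux(double lumens, double squareMeters) {
        return lumens / squareMeters;
    }

    public static void main(String[] args) {
        // A hypothetical 1000-lumen bulb lighting a 10 m^2 surface:
        System.out.println(lux(1000.0, 10.0)); // 100.0 lux
        // The same bulb spread over 40 m^2 looks dimmer per unit area:
        System.out.println(lux(1000.0, 40.0)); // 25.0 lux
    }
}
```

Same source, same lumens, very different perceived brightness, which is exactly why a single "lumens" number can't map onto a ColorRGBA.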
Light levels in a digital image or scene are expressed in arbitrary units, from 0 meaning "too little to register" up to the MAX_VALUE of the datatype, meaning "too high to count."
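That clipping at both ends is exactly what happens when an arbitrary float light level gets quantized into an 8-bit channel; a minimal sketch:

```java
public class Quantize {
    // Map an arbitrary float light level into a byte: everything at or below 0
    // registers as 0, everything at or above 1 saturates at 255.
    static int toByte(float level) {
        float clamped = Math.max(0f, Math.min(1f, level));
        return Math.round(clamped * 255f);
    }

    public static void main(String[] args) {
        System.out.println(toByte(-0.5f)); // 0   ("too little to register")
        System.out.println(toByte(0.5f));  // 128
        System.out.println(toByte(7.0f));  // 255 ("too high to count")
    }
}
```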
Note: this is true even of photos taken of a real-world scene. A single pixel might have a wide range of values, depending on the aperture of the lens, the shutter speed, and the sensitivity (ISO) of the sensor chip. This is why we bracket several real exposures together to create HDR images of real scenes.
The actual brightness you get out of your screen will depend on the specific screen, its size, etc., but they're still working on the basic NO_LIGHT to AS_MUCH_AS_I_CAN model. (Very oversimplified. Gamma correction, contrast ratios, etc. all tie into what that particular pattern of pixels looks like, as well as what extra math is done in the video driver.)
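As a rough illustration of how gamma correction reshapes those stored values on the way to the screen, here is the common power-law approximation (gamma 2.2, not the exact piecewise sRGB curve):

```java
public class Gamma {
    // Linear light level -> display-encoded value, simple power-law approximation.
    static float linearToGamma(float linear) {
        float clamped = Math.max(0f, Math.min(1f, linear)); // over-bright values clip here
        return (float) Math.pow(clamped, 1.0 / 2.2);
    }

    public static void main(String[] args) {
        System.out.println(linearToGamma(0.5f)); // ~0.73: mid greys are encoded brighter
        System.out.println(linearToGamma(2.0f)); // 1.0: anything over 1 just clips
    }
}
```

So even a "correct" float light level goes through at least one nonlinear remap before it becomes photons, which is another reason the raw numbers don't map to physical units.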
The numbers that are stored in a ColorRGBA are really only relative to one another, kind of the way that vertex positions are measured in units. They don't really mean anything on their own. (I know that it's common to model real-world objects and set physics forces as if each unit = 1 meter, but that's just a convention. They could equally be a foot, an inch, or an angstrom. Whatever is most convenient for the detail that you are modeling!)
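One way to see that the numbers are only relative: in a toy tone-mapping step, doubling every light color while halving the "exposure" produces the exact same output. (The exposure multiply here is an illustration of the principle, not jME's actual pipeline.)

```java
public class RelativeUnits {
    // A toy exposure step: final pixel value = scene light value * exposure.
    static float shade(float lightColor, float exposure) {
        return lightColor * exposure;
    }

    public static void main(String[] args) {
        float a = shade(1.0f, 1.0f);  // baseline scene
        float b = shade(2.0f, 0.5f);  // every light doubled, exposure halved
        System.out.println(a == b);   // true: only the ratios mattered
    }
}
```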
the hdr.glslib stuff is all about converting to and from the Radiance HDR data format, which trims a set of three floats down to three 8-bit mantissas sharing an 8-bit exponent. Much smaller to store, but harder to do math on.
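The shared-exponent idea can be sketched in plain Java; this is the general Radiance RGBE scheme for non-negative values, not the actual hdr.glslib shader code:

```java
public class Rgbe {
    // Pack three non-negative floats into three 8-bit mantissas
    // plus one shared, biased 8-bit exponent.
    static int[] encode(float r, float g, float b) {
        float max = Math.max(r, Math.max(g, b));
        if (max < 1e-32f) return new int[]{0, 0, 0, 0};
        // frexp-style split: max = m * 2^exp with m in [0.5, 1)
        int exp = Math.getExponent(max) + 1;
        float scale = (float) (256.0 / Math.pow(2, exp));
        return new int[]{(int) (r * scale), (int) (g * scale),
                         (int) (b * scale), exp + 128};
    }

    // Reverse: component = mantissa / 256 * 2^(storedExp - 128)
    static float[] decode(int[] rgbe) {
        if (rgbe[3] == 0) return new float[]{0f, 0f, 0f};
        float f = (float) Math.pow(2, rgbe[3] - 128 - 8);
        return new float[]{rgbe[0] * f, rgbe[1] * f, rgbe[2] * f};
    }

    public static void main(String[] args) {
        int[] packed = encode(2.0f, 1.0f, 0.5f);       // {128, 64, 32, 130}
        float[] back = decode(packed);
        System.out.println(back[0] + " " + back[1] + " " + back[2]); // 2.0 1.0 0.5
    }
}
```

Because all three channels share one exponent, each component keeps only about 8 bits of precision relative to the brightest channel — compact to store, but you have to unpack it before doing any arithmetic, which is the trade-off mentioned above.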