Adding getPixel in Image

In this thread: http://hub.jmonkeyengine.org/groups/android/forum/topic/imagedbasedheightmap-on-android/#post-184424



We discussed adding a getPixel() method to images, but since that’s over in the Android forum and in an unrelated thread I thought I’d create a thread here too, as this is a rather more general change.



This is my (very rough) concept for how to do it. Essentially the Image class delegates to the Format enum, with each format implementation knowing how to process pixels for that format. For now the more complex formats can just throw an UnsupportedOperationException.



The code would look something like:



[java]

public class Image extends NativeObject implements Savable /*, Cloneable*/ {



public enum Format {

    /**
     * 8-bit alpha
     */
    Alpha8(8) {
        public float getRed(Image image, int x, int y) {
            return 0;
        }

        public float getGreen(Image image, int x, int y) {
            return 0;
        }

        public float getBlue(Image image, int x, int y) {
            return 0;
        }

        public float getAlpha(Image image, int x, int y) {
            ByteBuffer data = image.getData().get(0);
            data.position(image.getWidth() * y + x);
            return (data.get() & 0xFF) / 255f;
        }
    },



    /**
     * 8-bit red, green, and blue.
     */
    RGB8(24) {
        public float getRed(Image image, int x, int y) {
            ByteBuffer data = image.getData().get(0);
            data.position((image.getWidth() * y + x) * 3);
            return (data.get() & 0xFF) / 255f;
        }

        public float getGreen(Image image, int x, int y) {
            ByteBuffer data = image.getData().get(0);
            data.position((image.getWidth() * y + x) * 3 + 1);
            return (data.get() & 0xFF) / 255f;
        }

        public float getBlue(Image image, int x, int y) {
            ByteBuffer data = image.getData().get(0);
            data.position((image.getWidth() * y + x) * 3 + 2);
            return (data.get() & 0xFF) / 255f;
        }

        public float getAlpha(Image image, int x, int y) {
            // RGB8 has no alpha channel, so treat it as fully opaque.
            return 1f;
        }

        public ColorRGBA getPixel(Image image, int x, int y) {
            ByteBuffer data = image.getData().get(0);
            data.position((image.getWidth() * y + x) * 3);
            return new ColorRGBA((data.get() & 0xFF) / 255f, (data.get() & 0xFF) / 255f,
                    (data.get() & 0xFF) / 255f, 1f);
        }
    },





    // SNIP OTHER ENUMS: each would need its own implementation of the methods below.
    // For now the more complex cases could just throw UnsupportedOperationException.



    private int bpp;
    private boolean isDepth;
    private boolean isCompressed;
    private boolean isFloatingPoint;

    private Format(int bpp) {
        this.bpp = bpp;
    }

    private Format(int bpp, boolean isFP) {
        this(bpp);
        this.isFloatingPoint = isFP;
    }

    private Format(int bpp, boolean isDepth, boolean isCompressed, boolean isFP) {
        this(bpp, isFP);
        this.isDepth = isDepth;
        this.isCompressed = isCompressed;
    }



    public abstract float getRed(Image image, int x, int y);
    public abstract float getGreen(Image image, int x, int y);
    public abstract float getBlue(Image image, int x, int y);
    public abstract float getAlpha(Image image, int x, int y);

    // Defined on the enum so that the constants above can override it where a
    // more appropriate implementation exists for that format.
    public ColorRGBA getPixel(Image image, int x, int y) {
        return new ColorRGBA(getRed(image, x, y), getGreen(image, x, y),
                getBlue(image, x, y), getAlpha(image, x, y));
    }



    // SNIP



    }



    // SNIPPED VARIABLES



    public float getRed(int x, int y) {
        return format.getRed(this, x, y);
    }

    public float getGreen(int x, int y) {
        return format.getGreen(this, x, y);
    }

    public float getBlue(int x, int y) {
        return format.getBlue(this, x, y);
    }

    public float getAlpha(int x, int y) {
        return format.getAlpha(this, x, y);
    }

    public ColorRGBA getPixel(int x, int y) {
        return format.getPixel(this, x, y);
    }





    // SNIPPED OTHER CODE

    }



    [/java]



    Hope that makes sense, I just threw it together in here so there are probably terrible mistakes :slight_smile:



If people are happy with that approach, I’m happy to do the first implementation and write the readers for a few of the formats. I’ve no idea what half of them are, though, so I think the best bet would be to implement the basic/common ones and leave the others unsupported for now.



    Thoughts?

I went a different way with what I was doing. I basically added a single getPixel method to the standard Image class that did different things based on the format, and then created an Android-specific Image class that overrides the getPixel method.
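
Roughly, that switch-based approach might look something like this (a sketch only; the placement and the handful of cases shown are illustrative, not the actual patch):

[java]
// Illustrative sketch of a single getPixel in Image that switches on the format.
public ColorRGBA getPixel(int x, int y) {
    ByteBuffer data = getData(0);
    int index = getWidth() * y + x;
    switch (getFormat()) {
        case RGBA8:
            data.position(index * 4);
            return new ColorRGBA((data.get() & 0xFF) / 255f, (data.get() & 0xFF) / 255f,
                                 (data.get() & 0xFF) / 255f, (data.get() & 0xFF) / 255f);
        case Luminance8:
            data.position(index);
            float l = (data.get() & 0xFF) / 255f;
            return new ColorRGBA(l, l, l, 1f);
        default:
            throw new UnsupportedOperationException("Format not supported: " + getFormat());
    }
}
// An Android-specific subclass then overrides getPixel() to read the pixel
// straight from the bitmap instead of the (null) data buffer.
[/java]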



Also, I think the concept of a greyscale color format needs to be addressed somehow. If the image is greyscale (i.e. Luminance8 for a heightmap), I don’t think returning a ColorRGBA would be right because the calling class would not know what format the image was.



In your example above, maybe there needs to be another abstract method, getGreyscale, plus a getPixel overload that returns a single float. If an 8-bit format is set (i.e. Luminance8), then calling getPixel with a ColorRGBA return type could return null, while calling getPixel with a float return type could return the appropriate greyscale value. This would also allow the RGB types to calculate and return the equivalent greyscale value for each format type.



Just my 2 cents without a lot of thought.

Yea we talked about this in the chat when the issue came up. Moving get/setPixel() to Image is the ideal solution, then Android can hide its hacks behind the scenes.

I’m heading on vacation for a few weeks starting tomorrow, so someone else might have to take the reins on it: @normen, @Momoko_Fan, of course if you want to implement it that would be wonderful! We can guide you and help work out an ideal solution for all situations.

@iwgeric said:
I went a different way with what I was doing. I basically added a single getPixel method to the standard image class that different things based on the format and then created an Android specific Image class that Overrides the getPixel method.


Yeah, that's the "classic" approach to the problem. It's not the recommended object-oriented approach though. In virtually all cases polymorphism can and should be used to eliminate switch statements along those lines, and the resulting code is both cleaner and more maintainable long term. (For example, when adding a new supported format it is immediately obvious that you need to at least think about getPixel for that format, whereas otherwise you need to know to hunt out the switch statement and add a new case.)

The drawback is that it "bloats out" the enum list.

@iwgeric said:
Also, I think the concept of grayscale color format needs to be addressed somehow. If the image is greyscale (ie. Luminance8 for heightmap), I don't think returning a ColorRGBA would be right because the calling class would not know what format the image was.

In your example above, maybe there needs to be another abstract method for getGreyscale and another getPixel method that returns a single float so that if an 8 bit format is set (ie Luminance8) then calling getPixel with a ColorRGBA return type could return null while calling getPixel with a float return type could return the appropriate greyscale value. This would also allow for RGB types to calculate the greyscale equivalent values to return for each format type.


Well, that's why my proposal has getRed returning a single float, separately from getPixel returning a ColorRGBA. There is an argument to be made for a format with no colour information either returning null or throwing an UnsupportedOperationException or an IllegalStateException. I guess that depends on whether people think that getPixel should always return at least some value. I'd be inclined towards always returning something for supported formats but could go either way.

@Sploreg said:
Yea we talked about this in the chat when the issue came up. Moving get/setPixel() to image is the ideal solution, then android can hide its hacks behind the scenes.
I'm heading on vacation for a few weeks starting tomorrow, so someone else might have to take the reins on it: @normen, @Momoko_Fan, of course if you want to implement it that would be *wonderful!* We can guide you and help work out an ideal solution for all situations.


I'm more than happy to implement it. Just want to make sure I'm not stepping on anyone's toes, and also that people would be happy with the implementation concept as outlined above.

Doing the code won't take long; the only thing is I've no idea what half those formats are (although some are obvious), so what I'd most likely do is an initial patch for the ones I do understand and then ask people who know more about the other formats to implement just those getters.

Should there be "setPixel" functionality as well?

Having both set and get pixel methods in Image would then make a utility to convert between any two image formats trivial.
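
For instance, a converter built on top of them might look like this (a sketch; it assumes a setPixel(x, y, color) signature and a target image pre-allocated in the desired format with the same dimensions):

[java]
// Sketch of a conversion utility built on getPixel/setPixel. Assumes a
// setPixel(x, y, color) method exists and that 'target' was created in the
// desired format with the same dimensions as 'source'.
public static void convert(Image source, Image target) {
    for (int y = 0; y < source.getHeight(); y++) {
        for (int x = 0; x < source.getWidth(); x++) {
            target.setPixel(x, y, source.getPixel(x, y));
        }
    }
}
[/java]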

How are you thinking Android would override the getPixel function? Would it have its own subclass of Image that overrides the format enum? I haven’t used enums this way before (still very new to Java) :slight_smile:

To be honest I’m a little foggy on exactly what the problem is with the Android thing, but I’m not sure it would need any override at all. We just make sure the image format is supported in getPixel, and then the terrain stuff calls getRed… which means that, unless I missed something, it all “just works” on desktop or Android or anything else, as long as the image format supports getRed.

The problem with Android is that the image is created with null for the buffer data. The Android renderer uses the efficientData to get the bitmap image directly. To support getPixel for Android, we would either need to change how images get created, or override getPixel so that Android can read the color of the pixel directly from the image instead of the data buffer. This is one of the reasons why I was going down the road of having an Android-specific Image class that overrides the getPixel method. I think supporting the data buffer for all images would cause the VM to run out of memory (at least it did when I tried it).

Ok, that makes sense.



Could you not override getData in the Image and, when it is called, fetch the data from Android at that point?



Use a weak reference and it can be garbage collected again when not in use.
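
Something along these lines, perhaps (purely a sketch of the weak-reference idea; AndroidImage and decodeBitmapToBuffer() are made-up names):

[java]
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;

// Sketch only: an Android-side Image that builds the pixel buffer lazily and
// keeps it behind a WeakReference so it can be garbage collected when unused.
// AndroidImage and decodeBitmapToBuffer() are hypothetical names.
public class AndroidImage extends Image {

    private WeakReference<ByteBuffer> dataRef;

    @Override
    public ByteBuffer getData(int index) {
        ByteBuffer data = (dataRef != null) ? dataRef.get() : null;
        if (data == null) {
            data = decodeBitmapToBuffer(); // pull the pixels out of the Android bitmap
            dataRef = new WeakReference<ByteBuffer>(data);
        }
        return data;
    }

    private ByteBuffer decodeBitmapToBuffer() {
        // ...copy the Android Bitmap's pixels into a ByteBuffer here...
        throw new UnsupportedOperationException("Not implemented in this sketch");
    }
}
[/java]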

I’d like to get in on this. I have done a similar thing, but it was flawed and I abandoned it in favor of a system of enforced image formats. It just felt slow and congested.



Basically, the system uses a main image class. This class currently has a jME Image as a member variable, but it could just as well have extended it. The class (included furthest down in this post) contains the Image object and a “FormatReader” variable. The FormatReader is an interface that forces all format readers to implement three methods that return:


  1. pixel color values as RGBA, using (x,y) location.
  2. pixels as float, using (x,y) and a color channel argument (get red, or get blue etc.)
  3. pixels as luminance (float)



    For grayscale images they just throw an unsupported-operation exception for RGBA colors. They still need to implement the method, though.
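
The interface itself isn’t shown here, but from the description it would be roughly this (a sketch; the exact signatures, and the Channel enum, are inferred from the readers below):

[java]
// Sketch of the FormatReader interface as described above; the parameter
// lists are inferred from the reader implementations further down.
public interface FormatReader {

    /** Pixel colour at the given buffer position, as RGBA. */
    ColorRGBA getColor(int position, ByteBuffer buf, ColorRGBA store);

    /** A single channel (Red, Green, Blue or Alpha) at the given position. */
    float getColor(int position, Channel channel, ByteBuffer buf);

    /** The pixel's luminance at the given position. */
    float getLuminance(int position, ByteBuffer buf, ColorRGBA store);
}
[/java]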



    Here’s an example - the ABGR reader.





[java]
/**
 * An ABGR8 format reader.
 *
 * @author Andreas
 */
public class RDR_ABGR8 extends AbstractFormatReader {

    @Override
    public ColorRGBA getColor(int position, ByteBuffer buf, ColorRGBA store) {
        buf.position(position * 4);
        float a = byte2float(buf.get());
        float b = byte2float(buf.get());
        float g = byte2float(buf.get());
        float r = byte2float(buf.get());
        store.set(r, g, b, a);
        return store.clone();
    }

    @Override
    public float getColor(int position, Channel channel, ByteBuffer buf) {
        float f = 0;
        switch (channel) {
            case Red:
                buf.position(position * 4 + 3);
                f = byte2float(buf.get());
                break;
            case Green:
                buf.position(position * 4 + 2);
                f = byte2float(buf.get());
                break;
            case Blue:
                buf.position(position * 4 + 1);
                f = byte2float(buf.get());
                break;
            case Alpha:
                buf.position(position * 4);
                f = byte2float(buf.get());
                break;
            default:
                throw new UnsupportedOperationException("Image does not contain channel.");
        }
        return f;
    }

    @Override
    public float getLuminance(int position, ByteBuffer buf, ColorRGBA store) {
        buf.position(position * 4);
        float a = byte2float(buf.get());
        float b = byte2float(buf.get());
        float g = byte2float(buf.get());
        float r = byte2float(buf.get());
        return calculateLuminance(r, g, b, a);
    }
}
[/java]





The AbstractFormatReader contains stuff that is common to all readers, like luminance value conversions etc.:





[java]
/**
 * Format readers normally extend this class.
 *
 * @author Andreas
 */
public abstract class AbstractFormatReader implements FormatReader {

    protected float byte2float(byte b) {
        return ((float) (b & 0xFF)) * 0.0039215f;
    }

    protected float calculateLuminance(float red, float green, float blue, float alpha) {
        return (0.299f * red + 0.587f * green + 0.114f * blue) * alpha;
    }
}
[/java]





Finally, here’s the ImageReader class:





[java]
/**
 * Class used for reading jME images. Borrows heavily from the terrain
 * classes (the way they set up their image reading system).
 *
 * @author Andreas
 */
public class ImageReader {

    protected Image image;
    protected int imageWidth;
    protected int imageHeight;
    protected ByteBuffer buf;
    protected ColorRGBA store = new ColorRGBA();
    protected FormatReader fReader;

    public ImageReader() {}

    public void setupImage(Image image) {
        this.image = image;
        this.imageWidth = image.getWidth();
        this.imageHeight = image.getHeight();
        this.buf = image.getData(0);
        switch (image.getFormat()) {
            case ABGR8:
                fReader = new RDR_ABGR8();
                break;
            case RGBA8:
                fReader = new RDR_RGBA8();
                break;
            case BGR8:
                fReader = new RDR_BGR8();
                break;
            case RGB8:
                fReader = new RDR_RGB8();
                break;
            case Luminance8:
                fReader = new RDR_L8();
                break;
            case Luminance16:
                fReader = new RDR_L16();
                break;
            default:
                throw new UnsupportedOperationException("Image format not supported: " + image.getFormat());
        }
    }

    /**
     * Get color at position x,y.
     *
     * @param x Position x
     * @param y Position y
     * @return The color value.
     */
    public ColorRGBA getColor(int x, int y) {
        int position = x + imageWidth * y;
        return fReader.getColor(position, buf, store);
    }

    /**
     * Get value of channel "channel" at position x,y.
     *
     * @param x Position x
     * @param y Position y
     * @param channel The channel
     * @return The color value as a float.
     */
    public float getColor(int x, int y, Channel channel) {
        int position = x + imageWidth * y;
        return fReader.getColor(position, channel, buf);
    }

    /**
     * Get luminance value at x,y.
     *
     * @param x Position x
     * @param y Position y
     * @return The luminance value as a float.
     */
    public float getLuminance(int x, int y) {
        int position = x + imageWidth * y;
        return fReader.getLuminance(position, buf, store);
    }
}
[/java]

Interesting, it’s a similar approach, but I can see how creating an entire class for every image format would seem rather cumbersome…



The implementation I’m suggesting, by piggybacking on the existing enum and using anonymous implementations, reduces that overhead significantly.

So is everyone happy for me to do this? Any last requests? :stuck_out_tongue:



I’ll get onto it sometime in the next few days.

I still think there needs to be a way to differentiate a color value vs a greyscale value. For instance, there is code in ImageBasedHeightMap that takes the color buffer value and calculates the greyscale value based on an equation. If all the calling class is doing is calling getPixel, how does the calling code know whether the returned value is a color or just a greyscale float?



[java]

protected float calcGreyScale(ColorRGBA color) {

return (float) (0.299 * color.r + 0.587 * color.g + 0.114 * color.b);

}



ColorRGBA color = image.getPixel(x,y);

float greyscaleValue = calcGreyScale(color);

float pixelHeight = greyscaleValue * heightScale;

[/java]



How does this code know whether the returned value is already greyscale to begin with? If the original image is Luminance8, then it would not be correct to send the ColorRGBA from getPixel to the calcGreyScale method, but if the image is RGB565 or RGBA8, then you do want to send the ColorRGBA to the calcGreyScale routine.



I would vote for a separate getLuminance method (like in @androlo’s example above), or a parameter to getPixel to specifically request that the Image class calculate the greyscale value, so that the calling class knows what is being returned.

Yes, looking at the terrain code and Androlo’s example I’d already tentatively decided to have a getLuminance method as well. This just confirms that decision.
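
In the enum-based design, that could look something like this (just a sketch; the default derives luminance from the colour getters using the same weights as the terrain code, and greyscale formats such as Luminance8 would override it to read the byte directly):

[java]
// Sketch only: a default getLuminance on the Format enum. Image would delegate
// to it the same way as the colour getters.
public float getLuminance(Image image, int x, int y) {
    return 0.299f * getRed(image, x, y)
         + 0.587f * getGreen(image, x, y)
         + 0.114f * getBlue(image, x, y);
}

// Luminance8's anonymous body would override it to read the value directly:
// public float getLuminance(Image image, int x, int y) {
//     ByteBuffer data = image.getData().get(0);
//     data.position(image.getWidth() * y + x);
//     return (data.get() & 0xFF) / 255f;
// }
[/java]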

getPixel would be awesome, (and setPixel also) I say go for it :slight_smile:

Got some time today so code is written (although untested so far). Need to do unit tests and ImageUtilities.java, then I should have something to post :slight_smile:


With getData for Android being a weak reference, would the data buffer for Android be recreated a bunch of times since there wouldn’t be a strong reference to the data buffer for more than one call to getPixel?

Only if it got garbage collected in the meantime, which is pretty unlikely. If you were worried about it (for example, when looping through a bunch of pixels) then you could always hold your own strong reference while doing the loop, as in the sketch below.
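
Something like this would keep the buffer alive for the duration of the loop (just a sketch using the proposed getPixel):

[java]
// Sketch: holding a local (strong) reference to the buffer while looping, so
// the weakly referenced data cannot be collected and rebuilt mid-loop.
ByteBuffer data = image.getData(0); // strong reference held until the loop finishes
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        ColorRGBA pixel = image.getPixel(x, y); // reads from the same cached buffer
        // ...do something with pixel...
    }
}
[/java]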



I’m writing unit tests now; once that’s done I’ve got an ImageUtilities class I started writing a few weeks ago to finish off. That will have higher-level operations, and I’ll have a think about the best way to handle the reference thing cleanly at the same time.

If you don’t have an Android device to test on, let me know and I can test on it as well. I posted my recommended changes for the data buffer in the other thread:

http://hub.jmonkeyengine.org/groups/android/forum/topic/imagedbasedheightmap-on-android/?topic_page=1&num=15#post-184738

I have multiple Android devices but I’m happy to pass the Android testing over to you :wink:

Looking forward to this a lot. It would be awesome if we could build on this and make more image tools later, based on the new design.