OpenGL error 1281 caused by bounding box

Hi guys,



Hopefully I am posting this in the appropriate place.



I'm building an augmented reality framework that interfaces with jMonkeyEngine to provide the virtual graphics. It is based on the NyARToolkit adaptation of ARToolKit for Java. Here is my problem:



I am using QuickTime to capture a webcam stream. This stream is rendered into a 2D texture which is part of my scene graph. All is fine when I use basic models, such as Boxes or Teapots: I see the camera stream, and the virtual models appear in front of it just fine.



However, if I apply a bounding box to my content, I get OpenGL error 1281. Googling suggests the problem is a texture sizing issue. My machine has an NVIDIA 9800 GT with the latest drivers, so this should not be a hardware limitation, and I cannot understand why I would get a texture sizing issue simply by adding a bounding box.
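In case it helps: the "texture sizing" advice usually refers to the power-of-two restriction that older GL paths enforce, so e.g. a 640x480 camera frame has to live inside a padded power-of-two texture (my capture code, posted further down, pads to a square using the larger dimension, so 640x480 becomes 1024x1024). A standalone sketch of the rounding, with a hypothetical helper name:

```java
public class PotSize {
    // Round a dimension up to the next power of two, mirroring the
    // padding loop in my capture code (hypothetical standalone version).
    static int nextPowerOfTwo(int size) {
        int pot = 1;
        while (pot < size) {
            pot <<= 1;
        }
        return pot;
    }

    public static void main(String[] args) {
        // A 640x480 frame needs 1024 (and 512) per dimension; padding
        // to a square of the larger dimension gives 1024x1024.
        System.out.println(nextPowerOfTwo(640)); // 1024
        System.out.println(nextPowerOfTwo(480)); // 512
    }
}
```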



The error is triggered when updating the camera stream texture (so basically every frame). I have tried printing the texture ID when the error occurs: without the bounding box (when the program works) I get 4, but with the bounding box I get 0, followed by the error. This suggests to me that it may well be an ordering problem of some kind, as the correct texture does not seem to be bound. If anyone could offer any advice at all on how to further diagnose or solve this problem, it would be really appreciated.
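For reference, OpenGL error 1281 decodes to GL_INVALID_VALUE (0x0501), which glTexSubImage2D raises when, for example, the update region does not fit inside the currently bound texture; uploading while texture ID 0 is bound (the default texture, which normally has no storage allocated) would fit that pattern. A tiny decoder, a hypothetical helper rather than part of my project, makes the numeric codes readable:

```java
public class GlErrorName {
    // Map glGetError codes to their symbolic names.
    // Values are from the OpenGL specification (1281 == 0x0501).
    static String name(int code) {
        switch (code) {
            case 0:    return "GL_NO_ERROR";
            case 1280: return "GL_INVALID_ENUM";
            case 1281: return "GL_INVALID_VALUE";
            case 1282: return "GL_INVALID_OPERATION";
            case 1285: return "GL_OUT_OF_MEMORY";
            default:   return "UNKNOWN(" + code + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(name(1281)); // GL_INVALID_VALUE
    }
}
```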



Sorry if this post is a little scattered or hard to follow, if you need clarification just let me know!



Thanks in advance, guys.



Adam

Are you updating the texture every frame from the webcam? Are you extending any scene graph classes by any chance?

Hi, thanks for the reply,



The texture gets updated every frame; the update code is called from the simpleUpdate method of a class that extends SimpleGame.



The only jME classes I am extending in relation to the texturing are Quad and Image, so I don't think that is the problem?



It seems really odd to me that this only happens when adding bounding boxes to simple models.

The only jME classes I am extending in relation to the texturing are Quad and Image, so I don't think that is the problem?

Generally speaking, extending scene graph classes isn't a good idea. So tell me, which methods are you overriding, and what do you do in them?


It seems really odd to me that this only happens when adding bounding boxes to simple models.

Same here, but when you do things the engine doesn't expect, odd things start happening.

I am extending Quad to create a class "CaptureQuad"; this is derived from an example in the NyARToolkit source code that uses the QuickTime libraries to do the webcam capture. Here is the constructor:


public CaptureQuad(String name, int camWidth, int camHeight, float frameRate) {
   super(name);
   try {
      image = new QtCaptureImage();
      image.initializeCamera(camWidth, camHeight, frameRate);
      tex = new Texture2D();
      tex.setMinificationFilter(MinificationFilter.Trilinear);
      tex.setMagnificationFilter(MagnificationFilter.Bilinear);
      tex.setImage(image);
      tex.setApply(ApplyMode.Replace);
      TextureState ts = DisplaySystem.getDisplaySystem().getRenderer().createTextureState();
      ts.setEnabled(true);
      ts.setTexture(tex);
      setRenderState(ts);
      updateRenderState();
   } catch (Exception e) {
      e.printStackTrace();
   }
   start();
   // could, at this point, resize to fit height to current width?
}



The update method:

public void update() {
   if(image != null) {
      image.update(tex);
   }
}



The start method:

public void start() {
   if(image != null) {
      try {
         image.start();
      } catch (NyARException e) {
         e.printStackTrace();
      }
   }
}



I am working on this code with a colleague, who adapted the QtCaptureImage class (which extends Image) from Tijl Houtbeckers' code in the JMF/FOBS/jME renderer.


public class QtCaptureImage extends Image implements QtCaptureListener {

   private static final Logger log = Logger.getLogger(QtCaptureImage.class.getName());   
   private static final long serialVersionUID = -8413968528763966076L;
   public final static int SCALE_NONE = 0;
   public final static int SCALE_MAXIMIZE = 1;
   public final static int SCALE_FIT = 2;
   private int videowidth, videoheight; // frame dimensions

   public int getVideoWidth() {
      return videowidth;
   }

   public int getVideoHeight() {
      return videoheight;
   }

   long framecounter = 0;
   long lastupdated = 0;
   private ByteBuffer buffer;

   private int pixelformat, dataformat;
   private QtCameraCapture qtCameraCapture;
   private QtNyARRaster_RGB raster;
   private boolean initAndScaleTexture = false;

   public void setSize(int cameraWidth, int cameraHeight, RGBFormat format) {
      initializeCamera(cameraWidth, cameraHeight, 30f);
   }

   public void initializeCamera(int cameraWidth, int cameraHeight, float frameRate) {      
      if(qtCameraCapture == null) {
         qtCameraCapture = new QtCameraCapture(cameraWidth, cameraHeight, frameRate);
         try {
            qtCameraCapture.setCaptureListener(this);
            this.raster = new QtNyARRaster_RGB(cameraWidth, cameraHeight);
         } catch (NyARException e) {
            e.printStackTrace();
         }
      }

      pixelformat = GL.GL_RGB;
      this.setFormat(Format.RGB8);
      dataformat = GL11.GL_UNSIGNED_BYTE;

      this.videowidth = cameraWidth;
      this.videoheight = cameraHeight;

      try {
         int size = Math.max(cameraHeight, cameraWidth);

         if (!FastMath.isPowerOfTwo(size)) {
            int newsize = 2;
            do {
               newsize <<= 1;
            } while (newsize < size);
            size = newsize;
         }
         this.width = size;
         this.height = size;

         data.clear();
         data.add( ByteBuffer.allocateDirect(size*size*4).order(ByteOrder.nativeOrder()) );
         initAndScaleTexture  = true;
      } catch (Exception e) {
         e.printStackTrace();

      }
      synchronized (this) {
         this.notifyAll();
      }
   }

   public boolean update(Texture texture) {

      synchronized(SyncObject.getSyncObject()) {

         if(buffer == null) {
            return false;
         }
         buffer.rewind();

         if(initAndScaleTexture) {
            scaleTexture(texture);
         }

         GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureId());
         GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, videowidth, videoheight, pixelformat, dataformat, buffer);

         try {
            Util.checkGLError();
         } catch (OpenGLException e) {
            log.info("Error rendering video to texture. No glTexSubImage2D/OpenGL 1.2 support?");
            System.err.println(e.getMessage());
         }

         lastupdated = framecounter;

         return true;
      }
   }



   private void scaleTexture(Texture texture) {
      texture.setScale(new Vector3f(videowidth* (1f / this.width),videoheight * (1f / this.height),1f));
   }

   public void onUpdateBuffer(byte[] pixels)
   {
      synchronized(SyncObject.getSyncObject()) {
         if(buffer == null) {
            buffer = BufferUtils.createByteBuffer(getWidth()*getHeight()*3);
         }
         try {
            buffer.clear();
            buffer.put(pixels);
            getRaster().wrapBuffer(pixels);
         } catch (NyARException e) {
            e.printStackTrace();
         }
      }
   }

   public void start() throws NyARException {
      qtCameraCapture.start();
   }

   public QtNyARRaster_RGB getRaster() {
      return raster;
   }
}
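To make the scaleTexture arithmetic above concrete: with a 640x480 frame padded into a 1024x1024 texture, the quad's texture coordinates have to be scaled down so only the camera region is sampled. A standalone sketch (hypothetical class name, same formula as scaleTexture):

```java
public class TextureScale {
    // Compute the UV scale that maps the camera sub-region of a padded
    // power-of-two texture onto the full quad, as scaleTexture does.
    static float[] uvScale(int videoWidth, int videoHeight, int texWidth, int texHeight) {
        return new float[] {
            videoWidth * (1f / texWidth),
            videoHeight * (1f / texHeight)
        };
    }

    public static void main(String[] args) {
        float[] s = uvScale(640, 480, 1024, 1024);
        System.out.println(s[0] + ", " + s[1]); // 0.625, 0.46875
    }
}
```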



Is this an incorrect way of doing things? I am relatively new to jME, to be honest; as I said, much of the code above is adapted from other projects aiming at the same goals, so I could always alter it if there is a better way.

Cheers

Adam