Improvements: geometry, shaders

In the hope that we will have a jME 2.1, I'm posting a description of the improved geometry handling. This functionality is working and tested, the source code is available, and it is original work by me. In its current state it is not compatible with jME 2.0, but if jME 2.1 is started, I would like to contribute these classes as the base Geometry classes. To repeat the question from the other thread: what do I have to do to get these into jME, and will jME 2.1 be written?



BaseGeometry, Geometry

Geometry holds the data structures describing the triangulated meshes of models, but in this solution it is not part of the scenegraph. Geometry is referenced from the scenegraph through TriBatch, but its storage and management are not part of the scene. A static Geometry object can be referenced multiple times from the scene, so any number of instances of a Geometry can be rendered without any need to copy the data associated with it. Once constructed, a Geometry should not be changed if it is to be used multiple times; it is also possible to use per-instance Geometry, for example for CPU-animated models.
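
To make the sharing concrete, here is a minimal, self-contained sketch (the class shapes are simplified stand-ins, not the actual classes): one immutable Geometry is referenced by many TriBatch instances, so rendering a hundred copies never duplicates the vertex data.

import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.List;

final class Geometry {
    final FloatBuffer vertexData;                 // built once, then treated as immutable
    Geometry(FloatBuffer vertexData) { this.vertexData = vertexData; }
}

final class TriBatch {
    final Geometry geometry;                      // reference only, nothing is copied
    float x, y, z;                                // per-instance placement lives in the scene
    TriBatch(Geometry geometry) { this.geometry = geometry; }
}

public class SharedGeometryDemo {
    public static void main(String[] args) {
        Geometry tree = new Geometry(FloatBuffer.allocate(3 * 1024));
        List<TriBatch> instances = new ArrayList<TriBatch>();
        for (int i = 0; i < 100; i++) {
            TriBatch batch = new TriBatch(tree);  // 100 instances, one copy of the data
            batch.x = i * 2.0f;
            instances.add(batch);
        }
        System.out.println(instances.size() + " batches share one Geometry");
    }
}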



Arrangement of vertex data in jME: each jME 2.0 TriMesh holds one IntBuffer as its index buffer, and a number of FloatBuffers, one for each supported vertex attribute.



Geometry in this solution holds references to an IndexBuffer and a number of VertexBuffers. The reference to vertex data in Geometry is generic: it has no fixed set of attributes; which attributes are present is described by the VertexFormat of each VertexBuffer.

Improving shader support:



My improvements were aimed at these areas:

  1. Better management of shaders. Shaders are independent of the models they are applied to; they can be applied to multiple models, they are cached, and they are managed similarly to texture resources.
  2. Improved support for per-batch shader parameters. GLSLShaderObjects is split into ShaderObjects and ShaderParameters: ShaderObjects represents a shader, and ShaderParameters represents the uniforms, attributes, vertex attributes and texture parameters associated with the model used by the shader.
  3. Handling of shader vertex attributes in LWJGLRenderer in the same way as fixed-function vertex attributes.
  4. Handling of shader textures without TextureState. To a shader it is not relevant which texture unit a texture is loaded into; it just needs the uniform associated with that texture unit.



    ShaderObjectsState

    Loads the GLSL shader sources and binds the shaders. It does not handle uniforms and attributes. Once created, it is never changed; it has nothing mutable. It contains a reference to a ShaderKey, whose purpose is to identify identical shaders so they can be reused. A ShaderObjectsState can be applied to any number of models; there is no need to clone the object to use it multiple times.
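
A rough sketch of how the ShaderKey-based reuse could look (names and shapes are illustrative, not the actual implementation): two requests for the same vertex/fragment source pair resolve to the same cached ShaderObjectsState.

import java.util.HashMap;
import java.util.Map;

final class ShaderKey {
    final String vertexSource;
    final String fragmentSource;
    ShaderKey(String vertexSource, String fragmentSource) {
        this.vertexSource = vertexSource;
        this.fragmentSource = fragmentSource;
    }
    public boolean equals(Object o) {
        if (!(o instanceof ShaderKey)) return false;
        ShaderKey k = (ShaderKey) o;
        return vertexSource.equals(k.vertexSource) && fragmentSource.equals(k.fragmentSource);
    }
    public int hashCode() { return 31 * vertexSource.hashCode() + fragmentSource.hashCode(); }
}

final class ShaderObjectsState {
    final ShaderKey key;                          // immutable once created
    ShaderObjectsState(ShaderKey key) { this.key = key; }
}

final class ShaderObjectsCache {
    private final Map<ShaderKey, ShaderObjectsState> cache = new HashMap<ShaderKey, ShaderObjectsState>();

    ShaderObjectsState get(String vertSource, String fragSource) {
        ShaderKey key = new ShaderKey(vertSource, fragSource);
        ShaderObjectsState state = cache.get(key);
        if (state == null) {                      // compiling and linking would happen here
            state = new ShaderObjectsState(key);
            cache.put(key, state);
        }
        return state;                             // identical shaders are reused
    }
}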



    ShaderParameters

    Has no reference to the shader, just to the shader parameters. Depending on the application it may be reused multiple times, or every model may have its own instance. ShaderParameters is shader-independent: it is possible to render an object with shader X and a set of parameters, then render it with shader Y and the same parameters. Note that parameters not present in a shader are silently ignored. The purpose of this behavior is to allow rendering with multiple shaders and a single set of parameters; for example, a depth pass with a depth-only shader, then transparent rendering, then the main rendering using lights. Each of these is a different shader, but all of them use the same parameters from ShaderParameters.



public void setUniform(ShaderVariable v);



Uniforms are handled as in current jME.


public void setVertexAttribute(ShaderVertexAttribute attr);


Vertex attributes stored in Geometry can be used by the shader. Example of setting up a shader vertex attribute:


ShaderParameters sp = new ShaderParameters();
sp.setVertexAttribute(new ShaderVertexAttribute("boneWeight", VertexAttribute.USAGE_WEIGHTS));



This code describes that the shader's "boneWeight" attribute is to be filled from the weight data stored in the Geometry.

3. Interleaved modelpack for static models
All vertex attributes are interleaved, and all models of a pack use a single VertexBuffer. This is a highly packed format, suitable for static geometry with few attributes (position, normal, texcoord). It provides streamlined loading of large amounts of geometry: the geometry of multiple models is put into a single file, and this file can be loaded into the VertexBuffer directly without any additional processing. It also means that the Renderer does not have to switch VBOs when rendering objects from a single modelpack.

In my research on interleaved model data I found that it is unreliable and can actually cause reduced performance on some cards. Since it is a pain to implement I would not suggest adding it to jME.



As for buffer handling, I think your solution is too complex. The assumption is made that geometry is always static unless specified otherwise; this means that a huge layer is required for accessing geometry, since it often may not be accessible by the CPU directly.

Instead here's my proposal on this:
Geometry contains a VertexBuffer object. A VertexBuffer can be locked with a call to VertexBuffer.lock() or unlocked with a call to VertexBuffer.unlock(). While a VertexBuffer is locked, you cannot access its data; e.g. a call to VertexBuffer.getPositions() will throw an IllegalStateException if it is locked. You can also set a usage mode on the buffer to allow it to be handled by the card more efficiently (this would translate exactly to the VBO stream, dynamic, and static modes).

A VertexBuffer can have user attributes which are mapped by strings, e.g. VertexBuffer.setFloatAttribute(String name, FloatBuffer data, int size).
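
As a rough sketch of that proposal (entirely illustrative, since the class does not exist yet), the lock/usage/attribute behavior could look like this:

import java.nio.FloatBuffer;
import java.util.HashMap;
import java.util.Map;

final class VertexBuffer {
    enum Usage { STATIC, DYNAMIC, STREAM }        // maps directly to the VBO usage hints

    private FloatBuffer positions;
    private final Map<String, FloatBuffer> floatAttributes = new HashMap<String, FloatBuffer>();
    private Usage usage = Usage.STATIC;
    private boolean locked;

    void lock()   { locked = true; }
    void unlock() { locked = false; }
    void setUsage(Usage usage) { this.usage = usage; }
    Usage getUsage() { return usage; }

    void setPositions(FloatBuffer data) {
        checkUnlocked();
        positions = data;
    }

    FloatBuffer getPositions() {
        checkUnlocked();                           // throws while locked
        return positions;
    }

    // user attributes mapped by strings; size = components per vertex (unused in this sketch)
    void setFloatAttribute(String name, FloatBuffer data, int size) {
        checkUnlocked();
        floatAttributes.put(name, data);
    }

    private void checkUnlocked() {
        if (locked) throw new IllegalStateException("VertexBuffer is locked");
    }
}
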
---

For the improved shader support proposal, I think a simplification is also possible.
Shaders should follow the same style as textures.
You have a ShaderManager class with some overloaded loadShader() methods. These return a Shader object (containing a GL program pointer and possibly a buffer); if the requested shader has already been loaded by the manager, you get the cached object.
You set the shader object to a ShaderObjectsState with setShader calls:
setVertexShader(Shader shader);
setFragmentShader(Shader shader);
setGeometryShader(Shader shader);

Like you said, separation of the shader program and its parameters is required. I see no point, however, in having either ShaderVariables or ShaderParameters classes. A ShaderObjectsState is already an instance of a Shader object, so parameters should be set on it, like it is now with setUniform and setAttributePointer calls. The only changes to ShaderObjectsState are the added setShader* calls. The load() method, to maintain backward compatibility, will be rewritten to automatically use the ShaderManager class.

void load(URL vert, URL frag){
  Shader vertShader = ShaderManager.loadShader(vert);
  Shader fragShader = ShaderManager.loadShader(frag);
  setVertexShader(vertShader);
  setFragmentShader(fragShader);
}
Momoko_Fan said:


Instead here's my proposal on this:
Geometry contains a VertexBuffer object. A VertexBuffer can be locked with a call to VertexBuffer.lock() or unlocked with a call to VertexBuffer.unlock(). While a VertexBuffer is locked, you cannot access its data; e.g. a call to VertexBuffer.getPositions() will throw an IllegalStateException if it is locked. You can also set a usage mode on the buffer to allow it to be handled by the card more efficiently (this would translate exactly to the VBO stream, dynamic, and static modes).


How can it be cloned when locked? Do you mean read-only?
vear said:

The IndexBuffer follows the same logic. Multiple ranges of indices can be put into a single IndexBuffer, and each Geometry has pointers to where its indices begin in the buffer. The IndexBuffer has support for 16- and 32-bit indices. If the number of vertices in the Geometry is less than 2^15, then short (16-bit) indices (IndexBufferShort) can be used. Using short indices improves performance, and I haven't yet encountered any model which needed 32-bit (int) indices. Int indices may be needed for large terrain meshes, but it is wise to subdivide the terrain mesh so that short indices are enough. Int and short indices are both supported in Geometry and Renderer, and static factory methods in IndexBuffer automatically choose the proper index buffer when creating it.



As long as it is informative to the user of the API as to what type of buffer they will get per instance, it looks like a good idea. Is the type of buffer defined in the constructor, or only via a factory?
How can it be cloned when locked? Do you mean read-only?

It's useless to clone VertexBuffers... If you want to instance geometry, you create a new Geometry object which uses that VertexBuffer; locked or not locked, it doesn't matter. A lock on a VertexBuffer simply means its data cannot be modified.
Momoko_Fan said:

In my research on interleaved model data I found that it is unreliable and can actually cause reduced performance on some cards. Since it is a pain to implement I would not suggest adding it to jME.


Interleaved buffers are implemented using glVertexAttribPointer (just like now), and not with the old OpenGL interleaved vertex array support. It works perfectly in my demo, and it's the same performance-wise as separate buffers. It is tested on ATI X700, x2700, nVidia 6600, nVidia 7600, and 8800, and worked without any problems. The speed increase comes from the fact that there is only one VBO in use for a modelpack, and the Renderer does not have to switch VBOs all the time. In my demo I use only 2 VBOs for all the static geometry. If I were using the classic jME Geometry I would potentially have 600 different batches, each with at least 3 attributes (position, normal, texcoord), and those with normal mapping an additional 2 (tangent, binormal). That means a total of 3000 tiny VBOs needing to be switched when rendering the scene. Instead of that I have only 2 VBOs, and I can load the whole geometry pack into a single FloatBuffer in one go.


As for buffer handling, I think your solution is too complex. The assumption is made that geometry is always static unless specified otherwise; this means that a huge layer is required for accessing geometry, since it often may not be accessible by the CPU directly.


My solution does not assume that the geometry is static; it does not assume anything, it behaves like a tool. It is the programmer's decision how to set up the buffers. If you want, you can use the old jME buffer setup just fine: wrapper methods can be provided which simulate the old setPositionBuffer(), setNormalBuffer() and setTexCoordBuffer() calls. It lets the programmer choose the best setup for his scenario. If you want to render lots of instances of static geometry, you set up a shared immutable Geometry; if you are calculating the vertices on the fly and don't need to read back the calculated data, you set up mapped buffers. If you only need to calculate the position, you can use a mapped buffer for position and a non-mapped (possibly interleaved and packed) buffer for the rest. By ignoring these possibilities the engine won't get better.
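
A small sketch of what such wrappers could look like (names simplified and hypothetical): the old-style setters become thin convenience methods over the generic attribute storage, so existing code keeps working.

import java.nio.FloatBuffer;
import java.util.EnumMap;
import java.util.Map;

final class Geometry {
    enum Attribute { POSITION, NORMAL, TEXCOORD0 }

    private final Map<Attribute, FloatBuffer> attributes =
            new EnumMap<Attribute, FloatBuffer>(Attribute.class);

    // generic access: the programmer decides which attributes exist and how they are stored
    void setAttribute(Attribute usage, FloatBuffer data) { attributes.put(usage, data); }
    FloatBuffer getAttribute(Attribute usage) { return attributes.get(usage); }

    // old-style wrappers, implemented on top of the generic storage
    void setPositionBuffer(FloatBuffer data) { setAttribute(Attribute.POSITION, data); }
    void setNormalBuffer(FloatBuffer data)   { setAttribute(Attribute.NORMAL, data); }
    void setTexCoordBuffer(FloatBuffer data) { setAttribute(Attribute.TEXCOORD0, data); }
}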


Instead here's my proposal on this:
Geometry contains a VertexBuffer object. A VertexBuffer can be locked with a call to VertexBuffer.lock() or unlocked with a call to VertexBuffer.unlock(). While a VertexBuffer is locked, you cannot access its data; e.g. a call to VertexBuffer.getPositions() will throw an IllegalStateException if it is locked. You can also set a usage mode on the buffer to allow it to be handled by the card more efficiently (this would translate exactly to the VBO stream, dynamic, and static modes).


First of all, reading back from a mapped buffer is expensive. If you need read-back, then it's better not to use a mapped buffer in the first place. Even better is to construct the data structure in advance (for example the CollisionTree) and save/load it together with the model.

Another approach (which I use in CPU skinning) is to have one source Geometry which isn't used for rendering, just as a model: I read vertices from the model geometry, transform them, and write the transformed vertices into the mapped buffer. Read access to mapped buffers should be avoided at all costs. If the data is in the mapped buffer it means the programmer put it there, so why didn't the programmer also keep it somewhere else?
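
Here is a simplified sketch of that flow (a single 3x4 bone matrix instead of real weighted skinning, and plain FloatBuffers standing in for the source Geometry and the mapped buffer): the source is read, transformed, and the result is only ever written to the mapped buffer.

import java.nio.FloatBuffer;

final class CpuSkinningSketch {

    // sourcePositions: x,y,z per vertex from the model Geometry (never rendered directly)
    // bone: a 3x4 row-major transform; mappedTarget: the write-only mapped buffer
    static void skin(FloatBuffer sourcePositions, float[] bone, FloatBuffer mappedTarget) {
        sourcePositions.rewind();
        mappedTarget.clear();
        while (sourcePositions.hasRemaining()) {
            float x = sourcePositions.get();
            float y = sourcePositions.get();
            float z = sourcePositions.get();
            // real skinning blends several weighted bone matrices per vertex
            float tx = bone[0] * x + bone[1] * y + bone[2]  * z + bone[3];
            float ty = bone[4] * x + bone[5] * y + bone[6]  * z + bone[7];
            float tz = bone[8] * x + bone[9] * y + bone[10] * z + bone[11];
            mappedTarget.put(tx).put(ty).put(tz);  // write-only access to the mapped buffer
        }
    }
}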

The stream, dynamic, and static flags are only hints to OpenGL, not hard rules. They affect performance far less than opening the mapped buffer for read access. The lock() mechanism is already in place: once you lock the mapped buffer, VertexBuffer returns a null buffer (and a NullPointerException will follow) if you try to access it.

Another technique, used in conjunction with threading, is double-buffering the animations:
Two VertexBuffers are flipped in and out in the rendering thread, and one of them is mapped for write access.
The other thread fills the mapped buffer with animated vertices.
The rendering thread takes the finished buffer, locks it, uses it for rendering, and unlocks the other buffer so the next animation frame can be written into it.
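
A compact, runnable sketch of that handshake (using java.util.concurrent.Exchanger and plain FloatBuffers in place of real mapped VertexBuffers): each thread only ever touches the buffer it currently owns, and ownership swaps at the frame boundary.

import java.nio.FloatBuffer;
import java.util.concurrent.Exchanger;

public class DoubleBufferedAnimationSketch {
    public static void main(String[] args) throws InterruptedException {
        final Exchanger<FloatBuffer> exchanger = new Exchanger<FloatBuffer>();

        // animation thread: fills its buffer, then swaps it for the one the renderer is done with
        Thread animator = new Thread(new Runnable() {
            public void run() {
                FloatBuffer mine = FloatBuffer.allocate(3);
                try {
                    for (int frame = 0; frame < 3; frame++) {
                        mine.clear();
                        mine.put(frame).put(frame).put(frame);   // "animated" vertices
                        mine = exchanger.exchange(mine);         // hand it over, get the free buffer back
                    }
                } catch (InterruptedException ignored) {
                }
            }
        });
        animator.start();

        // rendering thread: swaps its old buffer for the freshly animated one each frame
        FloatBuffer renderBuffer = FloatBuffer.allocate(3);
        for (int frame = 0; frame < 3; frame++) {
            renderBuffer = exchanger.exchange(renderBuffer);
            System.out.println("rendering frame " + frame + " from vertex " + renderBuffer.get(0));
        }
        animator.join();
    }
}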


A VertexBuffer can have user attributes which are mapped by strings, e.g. VertexBuffer.setFloatAttribute(String name, FloatBuffer data, int size).


I specifically avoid using strings in the core of the render loop; that's why I opted for enums to mark the attribute type. But VertexFormats can be described by strings: for example "p12n12t0uv8" represents a format with 12 bytes for position, 12 bytes for normal, and 8 bytes for texture coordinate 0. The enum of supported VertexAttribute types can easily be extended, and we could also use classes instead of enums, but note that a vertex attribute is more than just a name. Here is how the position is described:


public static enum Usage {
Position("p", 0, 3*4, GL11.GL_VERTEX_ARRAY),
...
}



The engine handles vertex attributes in a generic way, but it still has to know whether an attribute is a fixed-function attribute or a shader attribute, so the semantics (the usage) have to be described before an attribute is used. By changing the enum to an ArrayList, it would be possible to do what you said.
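
As a sketch of that direction (names hypothetical): the fixed enum becomes a runtime registry, so user code can register shader-only attribute semantics without touching the engine, while the engine still knows which entries are fixed-function.

import java.util.ArrayList;
import java.util.List;

final class AttributeUsage {
    final String shortName;        // e.g. "p" for position, used in VertexFormat strings
    final int bytesPerVertex;      // e.g. 3 * 4 for three floats
    final boolean fixedFunction;   // the engine must know fixed-function vs shader attribute

    private AttributeUsage(String shortName, int bytesPerVertex, boolean fixedFunction) {
        this.shortName = shortName;
        this.bytesPerVertex = bytesPerVertex;
        this.fixedFunction = fixedFunction;
    }

    private static final List<AttributeUsage> REGISTRY = new ArrayList<AttributeUsage>();

    static AttributeUsage register(String shortName, int bytesPerVertex, boolean fixedFunction) {
        AttributeUsage usage = new AttributeUsage(shortName, bytesPerVertex, fixedFunction);
        REGISTRY.add(usage);
        return usage;
    }

    // built-in semantics registered up front
    static final AttributeUsage POSITION = register("p", 3 * 4, true);
    static final AttributeUsage NORMAL   = register("n", 3 * 4, true);

    // user code can later add shader-only attributes, for example:
    // AttributeUsage boneWeight = AttributeUsage.register("bw", 4 * 4, false);
}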


---

For the improved shader support proposal, I think a simplification is also possible.
Shaders should follow the same style as textures.
You have a ShaderManager class with some overloaded loadShader() methods. These return a Shader object (containing a GL program pointer and possibly a buffer); if the requested shader has already been loaded by the manager, you get the cached object.
You set the shader object to a ShaderObjectsState with setShader calls:
setVertexShader(Shader shader);
setFragmentShader(Shader shader);
setGeometryShader(Shader shader);


This is a new feature worth having. I have not yet needed to link vertex and pixel shaders in multiple combinations, but it is also useful and possible to implement. However, the created ShaderObjectsStates need to be tracked anyway, because the programID needs to be tracked, not only the created vertexShaderID and fragmentShaderID. For this reason I chose to cache the ShaderObjectsState objects and not the individual shaders. But tracking and caching of vertex, geometry and fragment shaders is worth implementing too.


Like you said, separation of the shader program and its parameters is required. I see no point, however, in having either ShaderVariables or ShaderParameters classes. A ShaderObjectsState is already an instance of a Shader object, so parameters should be set on it, like it is now with setUniform and setAttributePointer calls. The only changes to ShaderObjectsState are the added setShader* calls. The load() method, to maintain backward compatibility, will be rewritten to automatically use the ShaderManager class.

void load(URL vert, URL frag){
   Shader vertShader = ShaderManager.loadShader(vert);
   Shader fragShader = ShaderManager.loadShader(frag);
   setVertexShader(vertShader);
   setFragmentShader(fragShader);
}


A reason for having a single ShaderObjects Java object represent a shader is that it's easier to track it that way. The rendering code compares renderstates, and it will know that the shader does not need to be applied again if the previously applied ShaderObjects is the same as the current one.

I separated the ShaderObjects program from its parameters to be able to apply different shaders with the same set of parameters. Let me show the header of the GPU skinning shader I have:


attribute vec4 boneWeight;
attribute vec4 boneIndex;
attribute float weightNum;
uniform mat4 boneMatrix[88];



This set of parameters is required for doing GPU animation.


attribute vec3 aTangent;
attribute vec3 aBinormal;

uniform sampler2D normalMap;



This set is required for doing normal mapping.


uniform sampler2D colorMap;



And this is required for texturing.

So let's say I want to do a fast depth-only pass before rendering the model with all the lights and such. The depth-only pass renders only depth into the depth buffer; it does not need:

-lights
-normals for calculating lighting
-textures

The depth-only shader only needs the skinning data to be able to transform the vertices to their proper positions and get the proper depth for the current animation of the character; it only needs the first block of parameters. The later lighting pass will need all the parameters. If the parameters were stored in the ShaderObjects, it would be necessary to know which parameter is needed for which shader. By keeping the parameters separately, each shader can take whatever parameters it needs. In turn, if the model needs to be rendered without normal mapping, it won't use the tangent and binormal, but the ShaderParameters does not need to be changed; the parameters not needed by the current shader are simply ignored.
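
A toy sketch of that behavior (simplified placeholder classes, not the real ShaderParameters or ShaderObjectsState): one parameter set carries everything, and each pass's shader only picks up the names it declares, silently skipping the rest.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

final class ShaderParameters {
    final Map<String, Object> values = new HashMap<String, Object>();
    void set(String name, Object value) { values.put(name, value); }
}

final class ShaderSketch {
    private final Set<String> declared;            // the parameters this shader actually declares

    ShaderSketch(String... declaredNames) {
        declared = new HashSet<String>(Arrays.asList(declaredNames));
    }

    void apply(ShaderParameters params) {
        for (Map.Entry<String, Object> e : params.values.entrySet()) {
            if (declared.contains(e.getKey())) {
                System.out.println("  upload " + e.getKey());
            }
            // parameters the shader does not declare are silently ignored
        }
    }
}

public class MultiPassSketch {
    public static void main(String[] args) {
        ShaderParameters params = new ShaderParameters();
        params.set("boneMatrix", new float[16 * 88]);   // skinning
        params.set("normalMap", Integer.valueOf(0));    // normal mapping
        params.set("colorMap", Integer.valueOf(1));     // texturing

        ShaderSketch depthOnly = new ShaderSketch("boneMatrix");
        ShaderSketch lighting  = new ShaderSketch("boneMatrix", "normalMap", "colorMap");

        System.out.println("depth-only pass:");
        depthOnly.apply(params);                        // only the skinning data is uploaded
        System.out.println("lighting pass:");
        lighting.apply(params);                         // same ShaderParameters, all of it used
    }
}
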
theprism said:

As long as it is informative to the user of the API as to what type of buffer they will get per instance, it looks like a good idea. Is the type of buffer defined in the constructor, or only via a factory?


It is via a factory. The classes are IndexBufferInt and IndexBufferShort, but you are supposed to use them simply as IndexBuffer; it handles the int-to-short casting by itself. They both behave like Buffers, so get() and put() work as on an IntBuffer (now that I think of it, they could even be subclassed from Buffer for greater transparency).
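
A sketch of such a factory (the real IndexBufferInt/IndexBufferShort may differ; this only illustrates the selection and the int-to-short handling):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.nio.ShortBuffer;

abstract class IndexBuffer {
    abstract void put(int index);
    abstract int get(int position);

    // chooses 16-bit indices whenever every vertex index fits into a short
    static IndexBuffer createBuffer(int indexCount, int vertexCount) {
        if (vertexCount < (1 << 15)) {
            return new IndexBufferShort(indexCount);
        }
        return new IndexBufferInt(indexCount);
    }
}

final class IndexBufferShort extends IndexBuffer {
    private final ShortBuffer data;
    IndexBufferShort(int count) {
        data = ByteBuffer.allocateDirect(count * 2).order(ByteOrder.nativeOrder()).asShortBuffer();
    }
    void put(int index) { data.put((short) index); }     // int -> short cast handled here
    int get(int position) { return data.get(position) & 0xFFFF; }
}

final class IndexBufferInt extends IndexBuffer {
    private final IntBuffer data;
    IndexBufferInt(int count) {
        data = ByteBuffer.allocateDirect(count * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
    }
    void put(int index) { data.put(index); }
    int get(int position) { return data.get(position); }
}
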
Momoko_Fan said:

How can it be cloned when locked? Do you mean read-only?

It's useless to clone VertexBuffers... If you want to instance geometry, you create a new Geometry object which uses that VertexBuffer; locked or not locked, it doesn't matter. A lock on a VertexBuffer simply means its data cannot be modified.


That would also be a valid way of doing it. I did instancing differently, but that's another story.

I have batches (which were removed in jME 2.0), but this solution uses them differently and for a different purpose. The TriBatch in this implementation references a Geometry and a Material (yet another story). The referenced Geometry can be the same for several batches if it is static, or per-instance if it is dynamic. I have arguments for why batches are good, but as Momoko_Fan described, we could do without them, losing some functionality but gaining simplicity.

Although I do not understand everything being said in this thread  :roll:



Any chance of other, more experienced members expressing their opinions? It seems to me that if there is an opportunity to improve jME then it should be considered… Here we have a member who has offered to do a more significant upgrade than the random patches that are mostly offered… But this also needs to be looked over by devs. Even saying 'I have nothing against it' or 'I am not sure, I'd need to look at it' or 'I do not completely agree with your argumentation' would be something better…



In any case I see no reason NOT to create a 2.1 branch that would include more substantial changes. After all the patches that are applied to trunk can be copied over to 2.1 also… SVN makes it so easy.



But again - I do not understand everything being said, so I cannot really comment on the benefits (or lack of) of these changes.

A 2.1 branch is IMHO the right thing to do now.



2.0 should be supported for bug fixes; it's time to move on now and improve this engine.


Any chance of other, more experienced members expressing their opinions? It seems to me that if there is an opportunity to improve jME then it should be considered... Here we have a member who has offered to do a more significant upgrade than the random patches that are mostly offered... But this also needs to be looked over by devs. Even saying 'I have nothing against it' or 'I am not sure, I'd need to look at it' or 'I do not completely agree with your argumentation' would be something better...

I don't know if I am experienced enough, but IMO if all the changes here are applied we would get a very monolithic geometry handling system, which I find somewhat against jME's principle of being simple to use. Flexibility and control are good, but not when applied to that extent.

I have removed some of the unfinished code and uploaded the rest to SVN, you can check it out from:



https://vlengine.googlecode.com/svn/trunk/



List of features:



-Most classes thread-safe (no statics, extensive use of LocalContext)

-Context system

-Extensive pass and queue system, passes operate on queues and not on the scenegraph

-Experimental collision system using mesh volume representations

-Rewritten input system (using InputListener)

-Light management (LightNodes placed in the scene, LightStates managed by LightSorterGameState, shadow management not complete)

-Geometry handling as described in the previous posts

-FrameBuffer handling by the engine (the passes make no distinction if they render to backbuffer or to texture)

-ViewCamera: like jME AbstractCamera, but not abstract and without LWJGL dependency. Read-only CullCamera used specifically for culling.

-changes in LWJGLRenderer to uniformly handle vertex attributes, as described in previous posts

-Material: RenderStates are not placed in the scene; the Material decides how the given batch is to be rendered. MaterialLibrary system for constructing materials

-Skinning and normal-mapping shaders in ShaderMaterialLib

-Some notable renderpasses:

-DepthPass: fast depth-only pass for writing the Z-buffer, so Z-writes can be disabled for later passes (utilizes fast-Z)

-DepthTexturePass: used for creating shadow maps

-LightExtractPass: extracts visible lights from the scene for sorting

-ColorPostProcessPass: step towards deferred shading, renders the scene texture to the backbuffer

-BloomRenderPass: bloom pass

-SSAOPass: as several people said, more an outline pass than SSAO (taken from the OpenGL.org forum)

-passes are set up in the RenderPath class, which defines the dependencies among them

-resource system: converts textures automatically, loads already-converted textures from the cache folder

-MD5 mesh and animation loaders, animation not implemented

-internal model format (separated from the scenegraph) with helper classes for handling ModelPacks

-OBJ loader (specific to handling the Maya OBJ exporter's quirks)

-D3D X mesh loader (specific to the Maya XExporter() script)

-TangentGenerator for generating tangent and binormal attributes for normal mapping

-the scenegraph system with some changes; generally I tried to remove any link to rendering specifics or triangulated meshes

-bone animation on CPU or GPU with X meshes (MD5 is incomplete)

-batches: the connector between the scene, the material and the geometry

-CharacterController: 3rd-person character and camera controller

-renderstates: not abstract, not all implemented, just those which I needed

-ShaderObjectsState, ShaderParameters as described in previous posts

-PropertiesIO: reads/writes properties.cfg using reflection on the fields of the Config class

-autoupdater system written by Lex, updates files individually

-another autoupdater system which creates/applies zipped patch files

-some custom util classes, with some XML handler classes



If you find something useful, feel free to use/adapt it. There are some things I would do differently now, but that will be another project.


Momoko_Fan said:

Any chance of other, more experienced members expressing their opinions? It seems to me that if there is an opportunity to improve jME then it should be considered... Here we have a member who has offered to do a more significant upgrade than the random patches that are mostly offered... But this also needs to be looked over by devs. Even saying 'I have nothing against it' or 'I am not sure, I'd need to look at it' or 'I do not completely agree with your argumentation' would be something better...

I don't know if I am experienced enough, but IMO if all the changes here are applied we would get a very monolithic geometry handling system, which I find somewhat against jME's principle of being simple to use. Flexibility and control are good, but not when applied to that extent.


What do you mean by monolithic? Is it more complicated than storing a number of FloatBuffers? It is similar to what Direct3D does: describe the vertex format and provide it with the buffers. It's nothing weird; it's how it's done in most other engines. It can be argued that jME tries to be simpler than most other engines, but I see a collision of interests here. I perceived jME as trying to be the best (most feature-rich, most performant, most modern) game engine for Java there is. Is this shifting in some other direction? A focus on fast prototyping, or learning perhaps? What is the purpose of jME anyway?

Wow lots of useful features!

I think it makes jME more powerful and faster, right?

Is it scheduled to be merged into jME?

Then when do I get this?

I can't wait  XD

He posted the SVN link

Mindgamer said:

He posted the SVN link


"Authorization Required" - vear, can we have a user or something?

Errm, what does this mean: "3. Interleaved modelpack for static models

All vertex attributes are interleaved, and all models of a pack use a single VertexBuffer. This is a highly packed format, suitable for static geometry with few attributes (position, normal, texcoord). It provides streamlined loading of large amounts of geometry: the geometry of multiple models is put into a single file, and this file can be loaded into the VertexBuffer directly without any additional processing. It also means that the Renderer does not have to switch VBOs when rendering objects from a single modelpack."



How does this work? Do you model the whole static scene into one model and export it as a single model? ATM I'm using the GeometryBatch created by snylt and patched by me to be jME2-compatible, and add geometries that share states into it, so far fewer buffers are created upon commit. It works nicely, yet I would like to see what alternative ways exist to pack things.

Let me try to explain. The different vertex attributes in this example:



P - vertex position

N - vertex normal

T - vertex texture coord



After converting models the geometry data for a single model looks like this:



Buffer 1:

P1, P2, … Pn



Buffer 2:

N1, N2, … Nn



Buffer 3:

T1, T2, … Tn



After interleaving the attributes, the buffer looks like this:



Buffer 1:

P1, N1, T1, P2, N2, T2,… Pn, Nn, Tn



Now, say we have multiple models, and each has its own Geometry object, and its own vertex buffer.



Geom 1:

Buffer 1:

G1P1, G1N1, G1T1, G1P2, G1N2, G1T2, … G1Pn1, G1Nn1, G1Tn1



Geom 2:

Buffer 1:

G2P1, G2N1, G2T1, G2P2, G2N2, G2T2, … G2Pn2, G2Nn2, G2Tn2



We can create a buffer to hold both of these geometries:



G1P1, G1N1, G1T1, G1P2, G1N2, G1T2,… G1Pn1, G1Nn1, G1Tn1, G2P1, G2N1, G2T1, G2P2, G2N2, G2T2,…G2Pn2, G2Nn2, G2Tn2



We have to remember the indices where the vertex data of each geometry begins in this packed buffer. So in the end we will have the following layout:



VertexBuffer 1:

VertexFormat: PNT (the layout of attributes in the buffer)

Data: G1P1, G1N1, G1T1, G1P2, G1N2, G1T2,… G1Pn1, G1Nn1, G1Tn1, G2P1, G2N1, G2T1, G2P2, G2N2, G2T2,…G2Pn2, G2Nn2, G2Tn2



Geom 1:

uses VertexBuffer 1

Vertices start at position 0

There are n1 vertices in this geom



Geom 2:

uses VertexBuffer 1

Vertices start at position n1

There are n2 vertices in this geom



So as you see, there is only one FloatBuffer for the two geoms, and each one knows where its data starts in the buffer. The same can be applied to creating VBOs: a single VBO is created for the buffer, and both geoms use this VBO. Note that the two geoms are still separate; you can render only one of them if needed, or render either multiple times. They are related in nothing but the storage of their vertex data. The same scheme applies to index data. Of course the Renderer has to support this, so I have code in VL engine to support both interleaved and packed geometries. Since all the vertex data of multiple geometries is in a single FloatBuffer, that FloatBuffer can be saved to disk and loaded from disk in a single operation, and that is good for performance.
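
To put the bookkeeping into code, here is a tiny sketch (class names are illustrative, not the real ones): one shared FloatBuffer, and each geom only stores its start vertex and vertex count.

import java.nio.FloatBuffer;

final class PackedGeom {
    final FloatBuffer sharedBuffer;   // the single buffer holding every model of the pack
    final int startVertex;            // index of this geom's first vertex within the pack
    final int vertexCount;

    PackedGeom(FloatBuffer sharedBuffer, int startVertex, int vertexCount) {
        this.sharedBuffer = sharedBuffer;
        this.startVertex = startVertex;
        this.vertexCount = vertexCount;
    }
}

public class ModelPackSketch {
    // PNT layout: 3 floats position + 3 floats normal + 2 floats texcoord
    static final int FLOATS_PER_VERTEX = 3 + 3 + 2;

    public static void main(String[] args) {
        int n1 = 100, n2 = 250;       // vertex counts of geom 1 and geom 2
        FloatBuffer pack = FloatBuffer.allocate((n1 + n2) * FLOATS_PER_VERTEX);

        PackedGeom geom1 = new PackedGeom(pack, 0, n1);   // vertices start at position 0
        PackedGeom geom2 = new PackedGeom(pack, n1, n2);  // vertices start right after geom 1

        // when rendering geom2, the renderer offsets its attribute pointers into the single VBO
        int byteOffset = geom2.startVertex * FLOATS_PER_VERTEX * 4;
        System.out.println("geom2 starts at byte offset " + byteOffset + " in the shared VBO");
    }
}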



Hope this was clear; if not, send me a PM and I'll try to explain it in Hungarian.  :wink:

So can I assume you're a fellow Hungarian? Great :slight_smile:



Now, let's say we have a scene with multiple models; those can be packed with this batching into a single buffered model, which can then be saved to disk as well. (The GeometryBatching works just like your description, though it needs an already existing set of buffers to commit into the batch buffer.)



So can I assume that this kind of 3D element can be used to load multiple separate models into it one by one, and after that I can use it, save it to disk, and next time load it as one?

That sounds really nice. Can it load models without precreating buffers for each of the submodels? :slight_smile:



Can you give a username for the SVN, or how can one access it anonymously? I couldn't access it, it says "Authorization Required".

Sorry, now I see it's on Google Code. :slight_smile: I will look into it.