public GeoMap(int width, int height, int maxval) {
- this(ByteBuffer.allocateDirect(width*height*4).asFloatBuffer(),null,width,height,maxval);
+ this(BufferUtils.createByteBuffer(width*height*4).asFloatBuffer(),null,width,height,maxval);
}
public FloatBuffer getHeightData(){
@@ -187,7 +187,7 @@
* Copies a section of this geomap as a new geomap
*/
public GeoMap copySubGeomap(int x, int y, int w, int h){
- FloatBuffer nhdata = ByteBuffer.allocateDirect(w*h*4).asFloatBuffer();
+ FloatBuffer nhdata = BufferUtils.createByteBuffer(w*h*4).asFloatBuffer();
hdata.position(y*width+x);
for (int cy = 0; cy < height; cy++){
hdata.limit(hdata.position()+w);
@@ -199,7 +199,7 @@
And here is a better idea for the AssetCache: this one does not define anything itself but is an abstract implementation. AssetKey is changed to give the AssetCache in use a hint about what it should do (KeepAlways, SmartCache, KeepNever), though the implementation is free to do something else.
There is also a default implementation, which SimpleApplication now uses, that simply caches everything in a HashMap.
The implementation can be changed before starting the application.
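As a rough illustration of the proposal, here is a minimal sketch of such a pluggable cache. The class and method names (`setImplementation`, `addToCache`, `getFromCache`, `HashMapAssetCache`) are hypothetical, not the actual patch:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical pluggable cache: an abstract base that concrete caches
// extend, plus a swappable singleton chosen before the application starts.
abstract class AssetCache {
    private static AssetCache implementation = new HashMapAssetCache();

    // Swap the implementation, ideally before the application starts.
    public static void setImplementation(AssetCache cache) {
        implementation = cache;
    }

    public static AssetCache getImplementation() {
        return implementation;
    }

    public abstract void addToCache(Object key, Object asset);
    public abstract Object getFromCache(Object key);
}

// Default in the proposal: simply cache everything in a HashMap.
class HashMapAssetCache extends AssetCache {
    private final Map<Object, Object> cache = new HashMap<>();

    @Override
    public void addToCache(Object key, Object asset) {
        cache.put(key, asset);
    }

    @Override
    public Object getFromCache(Object key) {
        return cache.get(key);
    }
}
```

A custom cache (LRU, size-bounded, and so on) would then just be another subclass set via `setImplementation`.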
if (eventListener != null)
eventListener.assetLoaded(key);
@@ -263,17 +246,7 @@
// object o is the asset
// create an instance for user
T clone = (T) key.createClonedInstance(o);
-
- if (key.useSmartCache()){
- if (smartKey != null){
- // smart asset was already cached, use original key
- ((Asset)clone).setKey(smartKey);
- }else{
- // smart asset was cached on this call, use our key
- ((Asset)clone).setKey(key);
- }
- }
-
+
return clone;
}
@@ -350,9 +323,8 @@
* @return
*/
public Shader loadShader(ShaderKey key){
- // cache abuse in method
// that doesn't use loaders/locators
- Shader s = (Shader) cache.getFromCache(key);
+ Shader s = (Shader) AssetCache.getImplementation().getFromCache(key);
if (s == null){
String vertName = key.getVertName();
String fragName = key.getFragName();
@@ -364,7 +336,7 @@
s.addSource(Shader.ShaderType.Vertex, vertName, vertSource, key.getDefines().getCompiled());
s.addSource(Shader.ShaderType.Fragment, fragName, fragSource, key.getDefines().getCompiled());
- /**
- * Enable smart caching for textures
- * @return true to enable smart cache
- */
+
@Override
- public boolean useSmartCache(){
- return true;
+ public CacheMode cachePriority(){
+ return CacheMode.SmartCache;
}
@Override
Index: src/core/com/jme3/asset/AssetKey.java
===================================================================
--- src/core/com/jme3/asset/AssetKey.java (revision 7837)
+++ src/core/com/jme3/asset/AssetKey.java (working copy)
@@ -45,7 +45,10 @@
* This class should be immutable.
*/
public class AssetKey<T> implements Savable {
-
+ /**
+ * Determines how the selected AssetCache behaves for this type of resource. Note that it is up to the implementation to decide; this is merely a hint.
+ */
+ public enum CacheMode{KeepAlways,SmartCache,KeepNever};
protected String name;
protected transient String folder;
protected transient String extension;
@@ -125,17 +128,8 @@
* @return True if the asset for this key should be cached. Subclasses
* should override this method if they want to override caching behavior.
*/
- public boolean shouldCache(){
- return true;
- }
-
- /**
- * @return Should return true, if the asset objects implement the "Asset"
- * interface and want to be removed from the cache when no longer
- * referenced in user-code.
- */
- public boolean useSmartCache(){
- return false;
+ public CacheMode cachePriority(){
+ return CacheMode.SmartCache;
}
EmpirePhoenix said:
Actually this is the second thing I want to make. I want to give each asset type a rating for how much memory it usually takes (roughly Shader &lt; Models &lt; Textures, or similar), and I also want to track how often a specific asset is used (this could be counted in the get method, together with how long ago the last use was). From this I then want to determine which assets should be freed first. (Another approach would be to let the programmer decide, via an additional load flag, whether the asset should stay in memory as long as possible or can be freed at any time.)
At this point we are somewhat moving away from the garbage-collection-based cache toward a more custom one, e.g. a least recently used cache that prioritizes by asset size.
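The LRU idea could be sketched roughly like this (a hypothetical class, not part of the patch): a cache with a byte budget that tracks each asset's estimated size and evicts the least recently used entries once the budget is exceeded:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: assets carry an estimated size, and the least
// recently used ones are evicted once a byte budget is exceeded.
// (Re-putting an existing key is not handled in this sketch.)
class LruAssetCache<K, V> {
    private final long maxBytes;
    private long usedBytes = 0;
    // accessOrder = true makes iteration order "least recently used first".
    private final LinkedHashMap<K, Sized<V>> map =
            new LinkedHashMap<>(16, 0.75f, true);

    private static final class Sized<V> {
        final V asset;
        final long bytes;
        Sized(V asset, long bytes) { this.asset = asset; this.bytes = bytes; }
    }

    LruAssetCache(long maxBytes) { this.maxBytes = maxBytes; }

    void put(K key, V asset, long estimatedBytes) {
        map.put(key, new Sized<>(asset, estimatedBytes));
        usedBytes += estimatedBytes;
        // Evict least recently used entries until we are back under budget.
        Iterator<Map.Entry<K, Sized<V>>> it = map.entrySet().iterator();
        while (usedBytes > maxBytes && it.hasNext()) {
            usedBytes -= it.next().getValue().bytes;
            it.remove();
        }
    }

    V get(K key) {
        Sized<V> e = map.get(key); // also marks the entry as recently used
        return e == null ? null : e.asset;
    }
}
```

The usage counting and per-type memory ratings mentioned above would feed into `estimatedBytes` and could extend the eviction order.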
Well, take a look at the second commit; it is an AssetCache suggestion that allows anyone to write their own preferred way of caching assets. I plan something similar for the MemoryManager, so that everyone can choose whether they want a full GC on out-of-memory, a background thread, or whatever.
Ok, more on my issues since I could not talk about them before as I was in a rush.
In general, I think the point of the current AssetManager behavior is a good one. It is doing something that we cannot otherwise do in our own code.
If I want to load fifty different models that all use the same texture then I have the nice feature that all of the textures will be the same instance. Because asset manager is managing the loading, I could not do this myself without a lot of trickery.
If I stop using those fifty models and the texture is no longer used then it goes away. All is right with the world. If I want to keep it around, I can hold a reference to it in some other cache that uses whatever smarts I want.
Smart caches are rarely one size fits all and some applications will not need them at all. They’ll either be doing their own (like me) or just not care.
When AssetManager uses weak references then it does not need to handle smart caching at all as any traditional smart cache will work just fine. Hold a reference to an asset and it never goes away. If some other model tries to load it then it will get the pre-existing instance. Everyone wins whether you’ve rolled your own cache or used ehcache or something else. JME could provide a smart cache for you but I think it feels wrong to include it in asset manager.
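The weak-reference scheme described here can be sketched as follows (a simplified, hypothetical `WeakAssetMap`, not jME's actual code): the manager holds only weak references, so concurrent loads of the same key share one instance while it is alive, and the collector reclaims it once user code lets go:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of the weak-reference idea: the manager only ever holds weak
// references, so an asset lives exactly as long as user code references it,
// yet repeated loads of the same key share one instance.
class WeakAssetMap<K, V> {
    private final Map<K, WeakReference<V>> refs = new HashMap<>();

    // Returns the existing instance if it is still alive, otherwise
    // stores and returns the freshly loaded one.
    synchronized V getOrPut(K key, V loaded) {
        WeakReference<V> ref = refs.get(key);
        V existing = (ref == null) ? null : ref.get();
        if (existing != null) {
            return existing; // still strongly referenced somewhere: share it
        }
        refs.put(key, new WeakReference<>(loaded));
        return loaded;
    }
}
```

Any "smart" cache then simply holds strong references to whatever it wants kept alive, next to this map, exactly as the paragraph above argues.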
If you embed a smart cache into AssetManager you increase the number of default assumptions you have to make and potentially greatly increase the API… for a class that didn’t need it. Moreover you have to worry about the different AssetManager implementations and whether they all interact with the cache in an appropriate way.
Regarding direct memory, the direct memory managed by JME is not a good metric for how much JVM direct memory is available. There could be any of a half-dozen common reasons that lots of direct memory is allocated and managed outside of JME control. The only good and reliable way to implement a smart cache is to give the caller parameters allowing them to set the optimal pool size, maximum pool size, etc.
That direct buffer stuff is way out of line and is too hard for us to handle.
However, we can implement the LRU cache and control exactly when an asset is freed. I was thinking something like: 60 seconds after it is garbage collected, remove it from the cache.
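A hedged sketch of that idea (hypothetical names, not the actual patch): a `ReferenceQueue` reports when an asset has been collected, and a scheduler removes the stale entry some seconds later:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of "remove the entry N seconds after the asset is collected":
// the ReferenceQueue tells us when a referent is gone, and a scheduler
// delays the actual map cleanup.
class DelayedEvictionCache<K, V> {
    private final Map<K, KeyedRef<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for cleanup
                return t;
            });
    private final long delaySeconds;

    // WeakReference that remembers which key it was stored under.
    private static final class KeyedRef<K, V> extends WeakReference<V> {
        final K key;
        KeyedRef(K key, V value, ReferenceQueue<V> q) {
            super(value, q);
            this.key = key;
        }
    }

    DelayedEvictionCache(long delaySeconds) { this.delaySeconds = delaySeconds; }

    void put(K key, V value) {
        map.put(key, new KeyedRef<>(key, value, queue));
    }

    V get(K key) {
        KeyedRef<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Call periodically: schedule removal of entries whose referents
    // have been garbage collected.
    @SuppressWarnings("unchecked")
    void drainCollected() {
        KeyedRef<K, V> ref;
        while ((ref = (KeyedRef<K, V>) queue.poll()) != null) {
            final K key = ref.key;
            scheduler.schedule(() -> map.remove(key), delaySeconds, TimeUnit.SECONDS);
        }
    }
}
```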
→ an abstract MemoryManager that can be extended depending on one's own needs
→ an abstract AssetCache that can be extended depending on needs
→ for SimpleApplication, a default AssetCache that just caches everything
→ for SimpleApplication, a default MemoryManager that performs a full GC on out-of-direct-memory and then retries once more before throwing an OutOfMemoryError
→ AssetKeys now use an abstract enumeration through which they can request what they prefer regarding caching (KeepNever, SmartCache, KeepAlways); the AssetCache can then honor these hints or ignore them, depending on the implementation.
Note: this patch contains all of the above adjustments, and at least under a Sun JVM it will not crash with out-of-memory if that can be avoided by freeing direct memory.
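The proposed default MemoryManager behaviour could look roughly like this (a sketch under the assumption that a full GC frees unreachable direct buffers, which holds on the Sun/Oracle JVM; the class and method names are hypothetical):

```java
import java.nio.ByteBuffer;

// Sketch of the proposed default MemoryManager: on an out-of-direct-memory
// error, force a full GC (which also releases direct buffers whose owning
// ByteBuffer objects are unreachable) and retry once before giving up.
class DefaultMemoryManager {

    ByteBuffer allocateDirect(int bytes) {
        try {
            return ByteBuffer.allocateDirect(bytes);
        } catch (OutOfMemoryError firstAttempt) {
            System.gc();
            try {
                Thread.sleep(100); // give the collector a moment to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Retry once; if this fails too, the OutOfMemoryError propagates.
            return ByteBuffer.allocateDirect(bytes);
        }
    }
}
```

As the later replies note, relying on `System.gc()` is best-effort and JVM-dependent, so this is a policy choice rather than a guarantee.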
Object o = AssetCache.getImplementation().getFromCache(key);
if (o == null){
AssetLoader loader = handler.aquireLoader(key);
if (loader == null){
@@ -252,8 +236,7 @@
// do processing on asset before caching
o = key.postProcess(o);
- if (key.shouldCache())
-     cache.addToCache(key, o);
+ addToCache(key, o);
if (eventListener != null)
eventListener.assetLoaded(key);
Index: src/jbullet/com/jme3/bullet/util/DebugShapeFactory.java
===================================================================
--- src/jbullet/com/jme3/bullet/util/DebugShapeFactory.java (revision 7837)
+++ src/jbullet/com/jme3/bullet/util/DebugShapeFactory.java (working copy)
@@ -45,6 +45,7 @@
import com.jme3.scene.Node;
import com.jme3.scene.Spatial;
import com.jme3.scene.VertexBuffer.Type;
+import com.jme3.util.BufferUtils;
import com.jme3.util.TempVars;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
@@ -177,7 +178,7 @@
// The number of bytes needed is: (floats in a vertex) * (vertices in a triangle) * (# of triangles) * (size of float in bytes)
final int numberOfFloats = 3 * 3 * numberOfTriangles;
final int byteBufferSize = numberOfFloats * Float.SIZE;
- FloatBuffer vertices = ByteBuffer.allocateDirect(byteBufferSize).order(ByteOrder.nativeOrder()).asFloatBuffer();
+ FloatBuffer vertices = BufferUtils.createByteBuffer(byteBufferSize).order(ByteOrder.nativeOrder()).asFloatBuffer();
// Force the limit, set the cap - most number of floats we will use the buffer for
vertices.limit(numberOfFloats);
@@ -236,7 +237,7 @@
// There are 3 floats needed for each vertex (x,y,z)
final int numberOfFloats = vertices.size() * 3;
final int byteBufferSize = numberOfFloats * Float.SIZE;
- FloatBuffer verticesBuffer = ByteBuffer.allocateDirect(byteBufferSize).order(ByteOrder.nativeOrder()).asFloatBuffer();
+ FloatBuffer verticesBuffer = BufferUtils.createByteBuffer(byteBufferSize).order(ByteOrder.nativeOrder()).asFloatBuffer();
// Force the limit, set the cap - most number of floats we will use the buffer for
verticesBuffer.limit(numberOfFloats);
EmpirePhoenix said:
Note: this patch contains all of the above adjustments, and at least under a Sun JVM it will not crash with out-of-memory if that can be avoided by freeing direct memory.
For the record, it is a Sun VM that I use that hard-crashes when attempting to GC in a loop as out-of-memory errors occur. But only sometimes. Heap dump to disk and everything.
So, what now? Will we use the patch I suggested (or a similar interface) or not? I want a decision, as this problem is far too important to just ignore.
Well, ok. I would still like to have a way to use my own implementation, so if it is not too complex it would be nice to have an interface for that.
Well, one thing that’s nice is that if AssetManager keeps weak references properly… then you can easily implement your own cache. If the asset manager does something smarter, that is potentially stupid for some class of app, since smart for one app may not be smart for another. Having the asset manager do smarter caching is convenient, but it is possible to do it outside of it as well until something like that is in place.
I’ve been pushing to fix the weak reference problems first and then put a cache in once those issues are sorted. Since as I understand it right now the heap grows and grows and grows… which is bad. Better to get that under control first, in my opinion.
Well, the heap never was a problem for me actually, as it crashes with out-of-direct-memory way sooner.
Still, I think it would be best to have some kind of interface for an AssetCache, and to allow each developer to set the one they think fits best. The behaviour described above could be used for a default when no custom one is set, as it will probably work well enough for most application types.
First, I agree with you about a pluggable component.
But, there are two separate issues to me.
1. Make sure that assets are only loaded once if they are still around. If I load fifty objects with the same texture, I should make sure that the texture is only loaded once. AssetManager is the only thing that can do this, and weak referencing is a reliable way to make that happen.
2. Make sure assets stay around for some period based on cache policy. This can and should be implemented in parallel to the above, because even if your cache says it’s ok for an asset to go, if it’s still strongly referenced then it can still be reused when loading new assets. A naive implementation potentially ignores this and accidentally reloads assets because the expiration routine is too aggressive.
If (1) is working then (2) is easier. It can be informed by the weak references but otherwise is a separate component that can be more easily plugged in and out and use whatever expiration policy it chooses without destroying asset sharing.
And today’s asset cache is based on weak references (when it works that way).
If you based another one on a straight LRU cache, you could be needlessly reloading assets that are still strongly referenced somewhere. My point is: leave the existing weak-referenced cache, because it is the weakest cache possible and makes sure things are 100% shared if loaded. A smart cache operates next to this one to keep things longer, based on some implemented expiration policy.
Yeah, basically my #2 point is based on a hybrid LRU + garbage-collector-based cache. Essentially, when there are no strong references to an asset from user code, it will be removed from memory after X seconds or by some other criterion.
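That hybrid could be sketched like so (hypothetical code, not the actual implementation): a weak map preserves identity and sharing, while a strong "pin" keeps each asset alive for X milliseconds after its last access; once the pin expires, only the weak reference remains and the collector may take the asset unless user code still holds it:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of the hybrid LRU + GC-based cache described above.
class HybridAssetCache<K, V> {
    private final Map<K, WeakReference<V>> weak = new HashMap<>();
    private final Map<K, Pinned<V>> pinned = new HashMap<>();
    private final long keepAliveMillis;

    // Strong reference plus the time of last access.
    private static final class Pinned<V> {
        final V asset;
        long lastAccess;
        Pinned(V asset, long now) { this.asset = asset; this.lastAccess = now; }
    }

    HybridAssetCache(long keepAliveMillis) { this.keepAliveMillis = keepAliveMillis; }

    synchronized void put(K key, V asset) {
        weak.put(key, new WeakReference<>(asset));
        pinned.put(key, new Pinned<>(asset, System.currentTimeMillis()));
    }

    synchronized V get(K key) {
        WeakReference<V> ref = weak.get(key);
        V asset = (ref == null) ? null : ref.get();
        if (asset != null) {
            // Refresh the pin: accessing an asset keeps it alive longer.
            pinned.put(key, new Pinned<>(asset, System.currentTimeMillis()));
        }
        return asset;
    }

    // Call periodically: drop expired pins. The weak layer keeps sharing
    // intact for assets that user code still strongly references.
    synchronized void sweep() {
        long now = System.currentTimeMillis();
        Iterator<Pinned<V>> it = pinned.values().iterator();
        while (it.hasNext()) {
            if (now - it.next().lastAccess > keepAliveMillis) {
                it.remove();
            }
        }
    }
}
```

This keeps point #1 (guaranteed sharing via weak references) separate from point #2 (the expiration policy), which is exactly the split argued for above.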
As long as we have an interface I don’t really care how the default implementation is done. (But I would kind of dislike being forced to LRU, as this would cause huge problems for me when changing levels.)