HORRIBLE fps after upgrading my video card?

Hi,

I just swapped an old GTX 650 Ti (currently ranking 64th) for a Radeon R9 270X (currently ranking 20th) and I have HORRIBLE fps. I noticed in the Catalyst control panel that there are many settings marked as: “Use application settings”

The exact same app, same view was giving around 70fps on my older GTX 650 Ti and now gives a VERY, VERY POOR 6-7fps… something’s really not right.

I was wondering if those 10x slower fps could be caused by missing JME3/LWJGL app configuration code that I forgot to put in my app initialization?

Anybody care to give me some ideas as to what’s happening and how I can fix it so that my future game testers don’t have this problem? (Is it related to all AMD/Radeon cards? I don’t think so, because one of my friends has a Radeon HD 6850 and was getting a good 50fps throughout… so WTH?)

@.Ben. said: I just swapped an old GTX 650 Ti (currently ranking 64th) for a Radeon R9 270X (currently ranking 20th) and I have HORRIBLE fps. […] The exact same app, same view was giving around 70fps on my older GTX 650 Ti and now gives a VERY, VERY POOR 6-7fps… something’s really not right.

What shaders are you using for the different materials? ATI cards are pretty particular about how things are handled in OpenGL… not to mention they have a lovely “let’s not report the actual issue to developers” policy for GLSL-related errors :wink:

EDIT: Actually, a list of the post processing you are using would be helpful as well.

I don’t think it’s AMD / ATI related, as I got one as well and pretty high fps as expected.
Maybe check for any nvidia driver left overs? :confused:

Hi @netsky08

Sébastien, there are no nVidia driver leftovers. It’s a very clean and simple W7 setup. I tried uninstalling the latest AMD Catalyst driver from the website, rebooted and installed the “old” (not that old, but…) driver from the CD that was packaged with the card (I never do that, I always try the latest driver from the website first) and it made no difference. Still at 10fps instead of 70fps.

Also of note: even if I disable all materials and use the testing material (you know, the mesh one with only lines), the fps stays STEADY at 10fps no matter where the camera aims. It’s like the card is idling at 10% and never engages a higher performance state or something. I really feel there is a setting in JME3 that I have to activate for the card to “unleash”. Sébastien, could you copy/paste your initialization function contents here so that I can see if I’m missing some settings for my app to fully use the GPU at its maximum performance?

I don’t really want to force the settings in the Catalyst control center, because I fear NOBODY from the public (future testers/gamers) will go there to mess around with settings, you know they’re mostly all on “Use application settings” so that’s why I really doubt that there is something JME3 has to tell LWJGL to speed up the GPU.

Hi @t0neg0d

Chris, I’m only using WaterFilter, HeightBasedTerrain, Lighting.j3md… you know, everything commonly used on these forums. As I said earlier, even if I use the mesh line material (the one that only draws lines instead of filling triangles), it’s still exactly the same very, very slow performance: 10fps.

EDIT: As for your edit, I’m using Bloom, LightScatteringFilter, ShadowFilter, VolumeLightFilter, WaterFilter, SkyControl… nothing really fancy…

Thank you so much guys, keep the ideas coming please! :smiley:

Hi again, I have disabled every single post-processing filter I had listed above and it’s running at 76fps… that’s roughly what I was getting with my old GTX 650 Ti WITH EVERYTHING activated. WTF?

@.Ben. said: Hi again, I have disabled every single post processing filters I had listed above and it's running at 76fps... that's roughly what I was getting with my old GTX 650 Ti WITH EVERYTHING activated. WTF?

Does the JME log say that it is actually using the new card? Maybe you have a built-in motherboard GPU that it is using?

I don’t know, man. Seems weird to me.

It’s not a “JME setup issue”, though.

Just because I am not sure of your machine’s config, can you give us a little info on this? i.e. memory, GPU memory, etc…

You are using some fairly process-intensive controls and filters… Not saying that this is the issue, but the first thing I would try is removing all post filters and then adding them back one at a time (not collectively: add one, test… remove that one, add another, test) until you track down which of these is killing the ATI card’s ability to render.

There are so many variables currently, that there is no way to narrow down where the problem is. All I know is that ATI cards are finicky and likely the problem is in one of these filters.

EDIT: This should include the water filter.
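To make that one-at-a-time pass concrete, here is a rough, untested JME3 sketch of the elimination loop. The filter classes are the stock `com.jme3.post` ones; variable names like `fpp` are just placeholders, and the snippet assumes it sits inside `simpleInitApp()` of a `SimpleApplication`, where `assetManager` and `viewPort` are inherited fields:

```java
import com.jme3.post.FilterPostProcessor;
import com.jme3.post.filters.BloomFilter;
import com.jme3.post.filters.LightScatteringFilter;

// Untested sketch, inside simpleInitApp():
FilterPostProcessor fpp = new FilterPostProcessor(assetManager);

// Keep a reference to each filter so it can be toggled individually.
BloomFilter bloom = new BloomFilter();
LightScatteringFilter scattering = new LightScatteringFilter();
fpp.addFilter(bloom);
fpp.addFilter(scattering);
// ...add the shadow, volume-light and water filters the same way...

viewPort.addProcessor(fpp);

// Every Filter has setEnabled(), so one run per filter is enough:
// enable exactly one, note the fps, then swap and run again.
bloom.setEnabled(true);
scattering.setEnabled(false);
```

Toggling with `setEnabled()` avoids rebuilding the whole processor chain between passes.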

Hi Paul,

This gaming motherboard has no onboard video output, so it’s impossible that it’s causing this issue. The log states my card alright:

Running on jMonkeyEngine 3.x
Extraction Directory: C:\Users\Ben\AppData\Roaming.jmonkeyplatform\nightly
Lwjgl 2.9.0 context running on thread LWJGL Renderer Thread
Adapter: aticfx64
Driver Version: 8.17.10.1242
Vendor: ATI Technologies Inc.
OpenGL Version: 4.3.12458 Compatibility Profile Context 13.200.0.0
Renderer: AMD Radeon R9 200 Series
GLSL Ver: 4.30
Audio Device: OpenAL Soft
Audio Vendor: OpenAL Community
Audio Renderer: OpenAL Soft
Audio Version: 1.1 ALSOFT 1.15.1
AudioRenderer supports 64 channels
Audio effect extension version: 1.0
Audio max auxilary sends: 4
Returning hash code of content
Checking page id 1 386 958 573 vs stored id 1 386 958 573

The log should show the settings you are running, too… that would have been useful for us also. Windowed or full screen?

This new rig scores 7.9 out of 8 in all categories in Windows Performance Score (lol… not that I ever cared for that spec) :wink:

OK so here goes:

Motherboard = 990DXA-UD3
CPU = FX-8350 8-core 4GHz
RAM = 16GB 1866MHz dual channel
GPU = Gigabyte Radeon R9 270X OC edition 1100MHz core / 5500MHz RAM / 4GB (wtf seriously…)
SSD = 128GB ADATA SATA III 6Gbps
HDD = 1TB SATA III 6Gbps (lol like it made any difference)
Windows 7 SP1, all updated

All parts are new, from 1 week ago. I also found out something VERY strange: in the Catalyst Control Center, under GPU performance, I can see real-time fan speed, temp, etc… and this is what I can read in Windows:

NOTICE how VERY low the values are compared to the 1100MHz core / 5500MHz RAM the website and the box state. It’s like it’s many times slower than it’s supposed to be… Obviously that’s for saving energy, so I checked again while pushing a million triangles in OpenGL, just to make sure it was bursting to its full potential, only to discover that it’s stuck at those very low values!.. why?!

inb4: I get that people say Intel/nVidia combos are generally faster at everything, but this is a budget gaming rig… If it weren’t for such POOR GPU performance, I’d be totally happy with my $800 rig.

@pspeed said: The log should show the settings you are running, too... that would have been useful for us also. Windowed or full screen?

What I posted is the full log contents. Is there a setting in the SDK that I have to enable somewhere to get the full log? I switched it to FINE, restarted the SDK and it’s still only showing what I copy/pasted in the log. Where can I get the information you need?
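One plain-JDK detail that can explain seeing nothing new: java.util.logging filters records twice, once at the Logger and once at each Handler, and the root ConsoleHandler stays at INFO by default, so switching only the logger to FINE changes nothing. A minimal standalone sketch (no JME3 involved; the class name is made up):

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class VerboseLogging {
    public static void main(String[] args) {
        Logger root = Logger.getLogger("");
        root.setLevel(Level.FINE);

        // Handlers filter independently of the logger; the default
        // ConsoleHandler sits at INFO, so raise each one to FINE as well.
        for (Handler h : root.getHandlers()) {
            h.setLevel(Level.FINE);
        }

        root.fine("FINE records now reach the console");
        System.out.println(root.isLoggable(Level.FINE)); // prints "true"
    }
}
```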

What happens if you try to run an OpenGL game or benchmark? Try http://unigine.com/products/heaven/

With my old ATI card (7759 or something like that) I decided to switch from ATI to nVidia after running this benchmark, because with the latest drivers I had like 30fps in OpenGL and 100 in DirectX.
I had to revert to year-and-a-half-old drivers to get similar performance.
ATI’s policy toward OpenGL is just “yeah… just make it work guys… nobody uses OpenGL anyway”.

Anyway… just me ranting about ATI; if netsky says he has decent performance, it must be something else.
Try the benchmark and see what results you get.

Hi Rémy,

Oh yeah, very good call, I forgot about doing this. I’m downloading it right now. I’ll post results back on here in 5 minutes.

// MY own ranting version while the benchmark is downloading lol…

BTW your ranting is what most people are talking about, that AMD is dead, but you know, am I wrong or if it weren’t for OpenGL, Playstations would not exist? And Macs?.. It’s like you’re saying: well, just because they lack money to support development, I should be encouraging the big opponent instead, since it’s all gonna be over soon anyway. Haven’t we seen that before? Beta vs VHS… Microsoft vs Apple… OpenGL vs DirectX… XBOX vs Playstation… Nintendo vs Sega… I mean, isn’t it sad that we all know it’s gonna be over soon even though those smaller companies had better ideas, but less business vision? Isn’t that what jMonkey is all about? Open development for the poorer people out there? Are you implying it’s not worth anything to develop on your open platform versus purchasing a licence from UDK and giving them 25% of all sales afterwards? Why bother with JME when UDK offers so much more? Most of the time, I like to hang around with smaller companies. I feel like there is hope for a better future and it’s certainly not worth encouraging those greedy bastards.

Am I crazy or what?

OK, so the benchmarks are done. I can say that both ran very smoothly, but DirectX was 15fps faster overall. It’s nowhere near what I get in JME3, that’s for sure. Here are the side-by-side benchmarks on HIGH quality:

Keep the ideas coming guys please :smiley:

@.Ben. said: Oh yeah, very good call, I forgot about doing this. I’m downloading it right now. I’ll post results back on here in 5 minutes. […]

We can rage about ATI’s poor OpenGL support but there is little we can do about it. It was a total joke in the 2000s, so at least it’s improving. That ATI seems to care little about OpenGL support is just a fact of life, though. So we unfortunately have to work around it all the time and then recommend a different vendor when asked our opinion.

It is a shame, though. I have to wonder if the PS4 would have been OpenGL based if AMD already had decent OpenGL drivers. Perhaps that was at least part of Sony’s reasons for rolling their own graphics API for it.

It seems pretty clear to ME that OpenGL is the way forward… but when most of the game engine dollars go to prebuilt game engines like UDK, etc. then I think it’s reasonable for a GPU maker to assume that for most use-cases, those toolkit vendors will take care of the special wiring like they always have.

@.Ben. said: What I posted is the full log contents. Is there a setting in the SDK that I have to enable somewhere to get the full log? I switched it to FINE, restarted the SDK and it's still only showing what I copy/pasted in the log. Where can I get the information you need?

I don’t know, the SDK used to dump info about the AppSettings but maybe it doesn’t anymore. At least for the app I just tried it didn’t.

Maybe you can tell us what settings you are running with. (The ones in the settings dialog.)

I don’t get it though. My friend has an almost identical AMD gaming rig (in terms of AMD stuff vs Intel or nVidia); as a matter of fact, it’s LESS powerful: instead of the FX-8350 4GHz he has the FX-8150 3.6GHz, instead of 16GB 1866MHz he has 8GB 1600MHz and, most importantly, instead of the R9 270X 4GB, his graphics card is an old HD 6850, on about the same featured motherboard. I mean, there is kind of a big gap between our 2 rigs, but they’re both fully AMD rigs. What f****** throws me off is he gets about 40fps on my JME3 app. It makes absolutely NO SENSE that I’m only getting 10fps on this brand new gaming rig. There has to be something that explains what’s happening, I just can’t understand this.

@pspeed Paul, here is my initialization function so that you can see EXACTLY how I launch my app (most of this is useless, but it’s included in full):

[java]
public static void main(String[] args) throws IOException {
    // Detect screen resolution
    GraphicsDevice device = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();

    // Configure app
    AppSettings settings = new AppSettings(true);
    settings.setTitle("Test 7g"); // Uses 1.5GB - 2.6GB

    // Icon
    BufferedImage[] icons = new BufferedImage[] {
        ImageIO.read( Main.class.getResource("/128.png") ),
        ImageIO.read( Main.class.getResource("/32.png") ),
        ImageIO.read( Main.class.getResource("/16.png") )
    };
    settings.setIcons(icons);

    // Mute INFO log messages
    Logger.getLogger("").setLevel(Level.SEVERE);

    Main app = new Main();
    app.setShowSettings(false);
    app.setDisplayStatView(false);
    app.setSettings(settings);
    app.start();

    Display.setLocation(0, 0);
}
[/java]

Have you tried not getting the drivers from the ATI website and instead using the drivers that Windows distributes? Windows Update will distribute a slightly older version, but I tend to find that they work better.

Usually the drivers you get on the ATI website are the “bleeding edge uber beta” drivers that they put out to “compete” with nVidia, who does the same thing. Generally I find they aren’t so reliable.

Additionally, the drivers you get from ATI or nVidia directly come bundled with their crapware, which could potentially be running with settings like power-saver mode or customized quality settings and other stuff, which I think is all kind of annoying.

Maybe dump the settings to the console just to be sure the defaults aren’t crazy.

And/or let the settings dialog show up so that you can more easily play with different settings to see if they matter.
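On dumping the settings: AppSettings extends HashMap<String, Object>, so printing it is just a loop over the entries. A standalone sketch (the keys and values below are made-up stand-ins; in the real app you would pass the actual AppSettings instance instead of the demo map):

```java
import java.util.Map;
import java.util.TreeMap;

public class SettingsDump {
    // AppSettings extends HashMap<String, Object>, so any Map<String, Object>
    // can be printed the same way; sorting the keys makes runs comparable.
    static String dump(Map<String, Object> settings) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : new TreeMap<>(settings).entrySet()) {
            sb.append(e.getKey()).append(" = ").append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Made-up stand-ins for a few typical AppSettings keys.
        Map<String, Object> settings = new TreeMap<>();
        settings.put("Width", 640);
        settings.put("Height", 480);
        settings.put("VSync", false);
        settings.put("Samples", 0);
        System.out.print(dump(settings));
    }
}
```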

@.Ben. said: My friend has an AMD gaming rig almost identical […] It makes absolutely NO SENSE that I'm only getting 10fps on this brand new gaming rig.

If you are used to nVidia where every driver is based on the same core then it may be a surprise to know that ATI drivers tend to be different for every card. And certainly the older cards use a completely different driver model.

…but if someone else’s machine is getting better performance then maybe there is just something screwy with your config.

My dislike of ATI doesn’t come lightly. It’s based on 20 years of repeatedly being shown what a pain they are… from my first All-in-Wonder card to today’s “Shader failed to link, not going to tell you why” errors.