Any recommended new GPUs that work well with JME3?

Interesting, thanks for the tips. Damn drivers always giving us problems :P.

I thought I read somewhere that at some point they are going to allow crossover SLI/Crossfire mixes with GPUs. I thought that was mentioned for DX12, but Idk…

But if cards from the same company give issues without even being SLI'd, then that's messed up lol… I guess they're not in the same driver family or…?

Weird it worked in Linux… No OpenGL 4.0, but they allow crossover support :stuck_out_tongue: .

Would it be worth getting 2 GPUs for testing purposes, regardless of whether I use them together?

Could I install both AMD and Nvidia drivers if I only use 1 card at a time, or again driver conflicts/issues…??

Would it just not be worth it if I had to reinstall drivers all the time…?

Thanks!

Linux works because the drivers are not by the manufacturers, and usually stability is preferred over performance. They'd rather leave stuff out than half-ass it for marketing :stuck_out_tongue: (3.5GB, I'm just saying)

3.5 ftw!

Apparently the 960 2GB card runs at 1.75GB… Someone showed 1.78 in their tests lol >(.

Yeah, the OS community is great most of the time!

I’m still curious whether people would recommend getting one of each type of card for dev purposes? Would the drivers mess with each other if the cards weren’t slotted together? Would each card work when it was the only one installed, or…? I would assume there has to be something that works, because there are “bench rigs” which allow for easy component swaps, so I would assume they don’t stick to a single company for GPUs… But Idk…

I think it’s a good idea. Most likely the 960 and an R9 390 will be good. Or maybe a 380? It seems the 390 is vastly better though…

One link I saw on “GamersNexus” did show the 960 4GB doing better than the 2GB, but people seem to say to stick with the 2GB?

Any thoughts? Also I don’t think you mentioned what Radeon card you use?

Thanks :slight_smile:

I mean, I didn’t look, but when I see 4gb and 2gb, I think RAM. In other words, how much can you keep right on the card without having to swap it out.

If that’s what this is, since OpenGL basically handles that automatically, 4GB could give you better performance than 2GB if you try to load lots of junk up to the GPU all the time. Less swapping.

Yeah, that’s the VRAM (2 and 4 GB).

I was looking at this link http://www.gamersnexus.net/guides/1888-evga-supersc-4gb-960-benchmark-vs-2gb/Page-2

and some of the benchmarks show a noticeable difference, but in some the 2GB was better, probably because the game only used 2GB of VRAM or less…

Idk what to doooooooooooooo lol :frowning:

Benchmarks can be pretty misleading anyway. Especially if they are not OpenGL based.

I know at least “back in the day”, on DirectX you managed all of your memory yourself.

Yeah… true story, I’ve been saying that this entire time as well :stuck_out_tongue:

This tidbit was interesting

Assassin’s Creed Unity is the poster-child of memory capacity advantages. The game regularly capped-out our available memory on the 4GB card and fully saturated the 2GB card. This saturation results in memory swapping between system RAM and the GPU’s memory, causing the massive spikes reflected by the 1% low and 0.1% low numbers.

In this scenario, Assassin’s Creed Unity has a massive performance differential between the 4GB and 2GB options, to the point that 4GB of VRAM will actually see full utilization and benefit to the user’s gameplay. Despite similar average FPS numbers, ACU exhibited jarring, sudden framerate drops with the 2GB card as memory cycled, effectively making the game unplayable on ultra settings at 1080p. The 4GB card had an effective minimum of 30FPS and an average of 39FPS, making for a generally playable experience. Settings could be moderately tweaked for greater performance.

A 4GB card has direct, noticeable impact on the gaming experience with ACU.

So basically it comes down to… are we going to be a memory-hungry game like Assassin’s Creed or… :stuck_out_tongue:

So I’m not sure how much VRAM jMonkey really uses, and what parts of the application use what amounts, since I’m sure we all use different sets of features… I’m assuming physics and shizz like that would probably be heavy usage of something.

I don’t see using more than 2 though, but Idk…?

What are your thoughts? :stuck_out_tongue:

Also, do you think I should get 2 cards? I think I will have to figure out if AMD cards work… I wonder if integrated graphics would work… >(

Stuff that goes into VRAM:

- textures
- meshes (vertex attributes, etc.)

Physics has nothing to do with it.
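
For a concrete picture, here's a minimal jME3 sketch of the two kinds of data that land in VRAM once they get rendered. It assumes the usual SimpleApplication fields (assetManager, rootNode), and the texture path is just a placeholder:

    // The texture's pixel data is uploaded to VRAM the first time it is rendered
    Texture tex = assetManager.loadTexture("Textures/stone.png");
    Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
    mat.setTexture("ColorMap", tex);

    // The Box mesh's vertex and index buffers get uploaded to VRAM as well
    Geometry geo = new Geometry("Box", new Box(1, 1, 1));
    geo.setMaterial(mat);
    rootNode.attachChild(geo);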

With a game like Mythruna, some platforms definitely feel VRAM limits as they look around and get lag spikes as mesh data transfers in/out of VRAM. I have lots of mesh data.

Gotcha, so just graphical stuff :stuck_out_tongue: I wasn’t sure if physics went through that, or if it would be more on the CPU?

Mythruna is a JME game, right? I’m assuming it’s a very large game at that, and wouldn’t be an “average” use case?

Thanks.

Yes, Mythruna is a JME game. Any game that generates its own geometry will likely hit memory swap problems at some level. Also, more VRAM can mean more hi-res textures.

Don’t most, if not all, games generate geometry though…? It would be interesting to monitor VRAM usage with games, I wonder if there are monitors for that…

Will we hit the 2GB mark in JME, I wonder though, with a smaller game? A medium one?

No. Most load their geometry as assets. Mythruna generates almost all of its content at runtime.

oh :open_mouth: , this is interesting info…

So it gets user supplied data, or grabs files from somewhere???

In my JME Project I use user supplied worlds/levels, so I have minimal assets.

I’m assuming that I fall under the Mythruna case then… >(

Personally I don’t get why it makes a difference whether you are loading a mesh or generating it, but it’s true that voxel games have quite a bit more mesh data than, say, Doom 3, because voxel games are optimized to efficiently hide the meshes, while the latter has maps optimized to not even include many meshes.

VRAM is pretty simple: think of it as texture storage. If you (like ACU, which was mentioned) have 150 textures, each 4096x4096 (plus multiple mipmaps!!), you will reach VRAM limits.
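
To put rough numbers on that (assuming uncompressed RGBA at 4 bytes per pixel; real games use compressed formats, so treat this as an upper bound):

    long base     = 4096L * 4096L * 4;   // one 4096x4096 RGBA texture ~ 64 MB
    long withMips = base * 4 / 3;        // a full mipmap chain adds about 1/3 ~ 85 MB
    long total    = withMips * 150;      // 150 of those ~ 12.5 GB, way past a 2-4 GB card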

If you don’t use many textures, or you use low-res ones or even texture atlases, memory won’t be that much of a problem.
Hint: simply supply multiple resolutions of each texture, or maybe “forbid” the highest mipmap level, based on a user-specified setting.
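
A rough sketch of that hint, using the usual assetManager field; the asset names and the integer `quality` setting are made up for illustration:

    // Pick a texture variant based on a user-chosen quality level
    String texName;
    if (quality >= 2)      texName = "Textures/rock_2k.png";
    else if (quality == 1) texName = "Textures/rock_1k.png";
    else                   texName = "Textures/rock_512.png";
    Texture rock = assetManager.loadTexture(texName);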

Note: the textures stay in RAM as well, so if you have less RAM than VRAM, chances are high that you’re wasting money.

Because when you load a mesh, chances are that you reuse it a bunch of times. When you are generating a mesh, it’s a one-off just for that part.

Batching would do similar.
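
(For reference, jME3 ships a helper for that kind of batching; a rough sketch, where someNode is whichever node holds the repeated geometry:)

    // Merges the child geometries of someNode into as few big meshes as possible
    jme3tools.optimize.GeometryBatchFactory.optimize(someNode);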


No. You have assets. You are just loading them from the user.

In Mythruna, the only assets are textures (and the avatar models). All of the other content is generated at runtime, i.e.: create a new Mesh() and fill it with raw vertex data.

It’s a block world. So every bit of almost everything you see (including objects) is generated from arrays of cells.
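
For anyone following along, a minimal sketch of that “new Mesh() plus raw vertex data” approach in jME3 (the quad data here is just a toy example):

    import com.jme3.math.Vector3f;
    import com.jme3.scene.Mesh;
    import com.jme3.scene.VertexBuffer.Type;
    import com.jme3.util.BufferUtils;

    // Four vertices forming one quad, built by hand instead of loaded from a file
    Vector3f[] verts = {
        new Vector3f(0, 0, 0), new Vector3f(1, 0, 0),
        new Vector3f(0, 1, 0), new Vector3f(1, 1, 0)
    };
    int[] indexes = { 0, 1, 2,  2, 1, 3 };   // two triangles

    Mesh mesh = new Mesh();
    mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(verts));
    mesh.setBuffer(Type.Index, 3, BufferUtils.createIntBuffer(indexes));
    mesh.updateBound();   // so culling has a correct bounding volume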

I have an R9 270 I think, but I guess slightly better cards are in the same price range now.
Anyway, together with my old trusty i5 2500 it’s enough for Battlefield 4 and GTA 5 on nearly ultra settings.

I just saw a ton of ads at the top of this post, under my OP… Guess heavy traffic topics get ads and apparently info…

…LOL we doing work boys… and girls…

Well some of them are meshes. I have to do some fixing, but I had some issues since I needed cubes with a different texture on each side, and I couldn’t find any code that would let me change each side of a cube’s texture, so I made them from scratch. Not sure if there is something out there to do this easily. I’ve heard of texture atlases and such, but every time I looked into it, it seemed confusing. People would always say something like “8x8 texture, with some buffer thing in the middle”… yeah… >(???
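
In case it helps, the atlas math itself is just arithmetic; a rough sketch assuming an 8x8 grid of equally sized tiles, where col/row (made up names here) are which tile you want:

    int tilesPerRow = 8;
    float tile = 1f / tilesPerRow;            // each tile covers 1/8 of the 0..1 UV range
    float u0 = col * tile, v0 = row * tile;   // bottom-left UV of tile (col, row)
    float u1 = u0 + tile,  v1 = v0 + tile;    // top-right UV
    // The "buffer"/padding people mention is a few pixels of margin around each tile
    // so neighbouring tiles don't bleed into each other under filtering/mipmapping.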

I should be able to get away with creating a few squares though and loading them like I have been, like so:

    Texture2D tex1 = new Texture2D();
    AWTLoader loader1 = new AWTLoader();
    Image load1 = loader1.load(i1, true);   // i1 is the BufferedImage being loaded
    tex1.setImage(load1);                   // without this the texture stays empty

Does loading this in make it an asset or…?

Thanks for the info.

Nice :slight_smile: . I have an i5 2400, but I think I’m going to upgrade instead of trying to use it.

So it really comes down to whether I should get an R9 280/380 or a 290/390… Seems like the x80 is a better bet for the money. One video I saw mentioned that the 3xx cards are just rebrands of the 2xx with a higher core clock…

Idk… >(

So I found this interesting article when searching about using Radeon with GeForce cards.

http://www.tomshardware.com/forum/337050-33-nvidia-cards-system-reduced-performance-both

People were saying that apparently you can use a Radeon card for rendering and then add a GeForce alongside it to handle PhysX, for games that require it or do better with it…

The other thread basically said what @pspeed said, which was “as long as the drivers work together…”

I would assume it should be okay, but we will see…

I would assume I couldn’t do something like fry my cards… could I…???

Also… how would I select which GFX card I’m using in JME? And how would you specify dedicated vs. integrated?

Holy crap…

I found this thread

which talks about settings, and I never realized that JME might be selecting my Onboard graphics (but doubtful).

According to Momoko, it could create the context on either card, and you would have to change the settings. I see mine is set to “auto select.”

When I check out the JME application info I get this:

Adapter: igdumd64
Driver Version: 8.15.10.2321
Vendor: NVIDIA Corporation
OpenGL Version: 4.3.0
Renderer: GeForce GT 550M/PCIe/SSE2
GLSL Ver: 4.30 NVIDIA via Cg compiler

What’s the adapter???

It seems it’s using my GPU to render, but is the Adapter correct???

Thanks :slight_smile:

EDIT: Holy crap I haven’t changed any of these settings!!!

For Anti-aliasing it has a billion options wtf… 16xq CSAA…? 32WTFBBQ? …???

I like this “Application-Controlled” business; basically JME makes the card its bitch.

I can’t believe I didn’t change these settings since my last reformat…

Damn, I’m such a noob… I don’t deserve to be here :cry:

I had two ATI Cards in my PC for quite a long time:

  • Radeon HD4960 and my current Radeon HD 6870

My experience was: everything used the card I plugged my monitor into. I used the other one for bitcoin mining (or sometimes both), so basically the only problem should be “having the right driver selected.”
This was on Windows, though. Could be a bit of config fun on Linux.

You could always have only one plugged in and have a dormant driver installed in your OS which simply isn’t called (just like you might have a FireWire driver without a FireWire socket).

The trick is to make JME choose the right one (currently it’s the GeForce GT 550M).
The only way to fry your card is by physically connecting something wrong (like an SLI bridge onto a Crossfire card, or, well, I can’t really imagine much else).

I fried one by having the fan set to a constant 50% whilst gaming. After replacing the thermal paste and letting it cool down, there were no errors anymore :stuck_out_tongue:

Currently I’m not sure which one I’d buy, an R9 or a GTX; it really depends on your game, and in my case the bottleneck is really the CPU.

Btw: “Adapter” seems to be the driver/system thingy, as in connection/adapter :stuck_out_tongue:
And don’t use integrated graphics for gaming, just don’t. I even know people buying Xeon CPUs because they don’t come with integrated GPUs.

@Edit: It’s easy. I don’t use anti-aliasing either during development. It simply smooths edges (remember in Photoshop or something, when your line isn’t exactly straight, you see it stepping up or down one pixel?).
If you do that calculation 16 times it looks smoother but costs roughly 16 times the GPU work :stuck_out_tongue:
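
For what it’s worth, with the driver set to “Application-Controlled”, jME3 requests MSAA itself through AppSettings; a minimal sketch (4 is just an example sample count, and `app` stands in for your SimpleApplication):

    AppSettings settings = new AppSettings(true);   // com.jme3.system.AppSettings, loads defaults
    settings.setSamples(4);       // MSAA samples; 0 turns AA off, which is fine during development
    app.setSettings(settings);    // must be set before start()
    app.start();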