I don’t know if this fits into any forum category, and if so, which one.
This is a similar topic created here about two years ago:
other post of zzuegg with top answers by normen, jmaasing, toolforger
And then I just saw some people discussing DX12:
other post discussing the new DirectX version
And there are early demos for Vulkan in this post:
other post showing early Vulkan demo video
I want to share a few worries about those (recently hyped) APIs, which started with AMD Mantle and then forced the others to quickly come up with something similar (DirectX 12, Vulkan, Metal).
The main idea is to give coders more direct access to the ‘metal’, i.e. the specific graphics acceleration hardware present on the machine you are tailor-making your 3D software for.
About a year ago I heard that Mantle really gave those big-company coders and their AAA games some 20 percent or more fps in the first few prototypes - and that makes a real difference for PR and some customers. Half a year ago an AMD executive said that people should use DirectX 12 (or Vulkan), which was inspired by Mantle and follows a similar concept.
I don’t know much about this topic, but here is what worries me:
- What about the hardware that exists in 10 years - will people still be able to play a Mantle/Vulkan game, and will it even be possible to write a fast emulator for your old down-to-the-metal code? If Mantle and Vulkan are dead in 10 years, who would write a driver for a dead branch of technology? Mantle already seems dead now (less than 2 years after rocking the PR landscape).
- What about hardware that is 5 years old - it will probably not support the new APIs, only OpenGL or older flavors of DirectX (by the way: are the MS techies breaking their interface’s backwards compatibility with the new DirectX again? It has happened before…).
- What about the increasing diversity - not only in APIs (6 instead of 2), but also in hardware (different PCs, notebooks, consoles, mobile devices)? Is this even manageable, and will Vulkan (or OpenGL, for that matter) even survive when the pressure escalates?
- What about the manpower you need before the market fracturing / momentary diversity resolves itself under that pressure? You either have to lock in to one of the 6+X varieties or have a team of 4 graphics experts instead of 1 or 2 OpenGL-experienced junior devs - and only big companies can afford such teams.
- What about the concept of high-level coding? Okay, I’m a fan of high-level coding and high-level ideas, and other people aren’t, because they like to use C, asm and SIMD-based access in their shiny C++ engines for a known reason. But GLSL and Java are based on the idea of being as independent as possible from the underlying metal - an idea that has great benefits and sometimes sacrifices a good portion of the fps as a necessary side effect (which you cannot communicate via PR when PR only consists of “33 percent more more more fps fps fps!”).
- What about those three books on OpenGL 4.3 and 4.0 that I bought - should I even read those?
- What about the quality of the OpenGL drivers? They aren’t the best drivers already (most notably because AMD already had to shift human resources to DirectX and Mantle in the past, and probably wants to discontinue OpenGL altogether) - and when there is no good driver, even the few CAD people who keep pushing OpenGL might switch to other APIs; many might have switched already.
- Will I be able to use OpenGL 4 or an ES variant on as many devices as possible, or will many devices become de facto inaccessible to me and jME fans in the near future?
- And there was one more point … maybe I remember it later…
- Ah … here it is: I asked myself whether the few percent of fps you gain by coding close to the metal are worth the whole thing. As the hardware becomes more powerful over the years, you might not need that highly efficient code in the near future, so high-level coding is not such a performance problem after all.
- Even today it seems like you can render almost unlimited amounts of vertices and the performance seems okay, so what could computers in 5 years add to this experience?
I really hope that this is just a short hype and that most people will return to OpenGL in the future. Not only would I be able to keep my books and continue learning OpenGL, but I would also get all the advantages I can see right now (and avoid the disadvantages that become clear when you read the list of worries I puzzled together with my tiny little mind).
Well, this was not supposed to become a rant, and I hope you will see it as someone having worries about the current development and being afraid of losing his beloved 3D render API and render engine in the next few years (what would I do - become one of those Unity people?) :chimpanzee_nogood:
About the other concerns: technology moves on. It has always been like this. However, it’s getting better: we get emulators for older games/environments (see DOSBox), so you can have your nostalgia fix just in case…
Well, let’s start, corresponding to your unnumbered numbered points in order:
First of all, Vulkan comes from the Khronos group, like OpenGL. I would see it as a heavily API-breaking OpenGL 6 in that regard.
Well, you can usually play very old games pretty fine today, like DirectX 7 or OpenGL 1.2 ones. The former is due to Microsoft doing a lot of work for backwards compatibility; the latter is because OpenGL is open and there are even full software implementations of the older levels (the Linux Mesa drivers). So there is a good chance of emulating/running them. Mantle is pretty dead, but if you compare the APIs of Mantle and Vulkan, they mostly changed the prefixes in the API files.
Vulkan is like 80% Mantle, so if you are actively developing, that change is manageable.
Older hardware does not support current OpenGL levels either, so the result is the same whether you give the new API a new version number or a new name. Btw, even for Glide there is a Glide-to-OpenGL wrapper, so old stuff using Glide can still be played.
I kinda expect Metal to die pretty fast / become a niche. For all multiplatform games/applications it is just more work. Kinda similar to the PS3’s Cell CPU: sure, it was great, but it was rarely used by multiplatform games. Reducing the physics accuracy is way simpler and cheaper.
Vulkan is as open as OpenGL, so depending on your release’s minimum hardware you can be pretty safe with just OpenGL or Vulkan.
Actually, current OpenGL has way too many low-level concepts as well, just materialized into different assumed functionality - aka the OpenGL versions. For example, my GPU can run a maximum level of 1.2, 2.0 or 4.0 depending on the installed driver, with the same! hardware. Having the engines define more stuff themselves in a GPU-friendly language could actually greatly improve the longevity of a game, as fewer commands to implement in the GPU driver mean fewer bugs and fewer problems. Kinda like how the JVM has very few valid bytecode instructions at the API level.
If you implement only these <200 instructions, you can write your own JVM: Java bytecode instruction listings - Wikipedia
Compare this with this “small” overview of OpenGL.
Vulkan is actually pretty near the idea of a JVM; it even has a bytecode and a Vulkan runtime.
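To make the “small instruction set → easy to implement a runtime” point concrete, here is a toy stack-machine interpreter. The four opcodes are made up for illustration (they are not real JVM or SPIR-V instructions); the idea is just that a VM over a tiny, well-defined instruction set fits in a screenful of code, which is why JVMs exist everywhere and why a bytecode-based Vulkan runtime is plausible to port.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stack machine with a hypothetical 4-instruction bytecode.
public class ToyVm {
    // Hypothetical opcodes - not real JVM or SPIR-V instructions.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH: stack.push(code[pc++]); break;               // push literal
                case ADD:  stack.push(stack.pop() + stack.pop()); break; // pop two, push sum
                case MUL:  stack.push(stack.pop() * stack.pop()); break; // pop two, push product
                case HALT: return stack.pop();                           // result is top of stack
                default:   throw new IllegalStateException("bad opcode");
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

Every new opcode a “driver” (interpreter) must support is a new place for bugs - the same argument the post makes about slimming down what GPU drivers have to implement.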
Remember, OpenGL was created when GPUs usually consisted of a few KB of RAM for the framebuffer and no processing of their own, not to mention that only single-core CPUs existed. Many of the concepts that influenced the API design are by now about as far away from the hardware as they can get. Vulkan is just nearer, though not at a C-like level.
Sure, the basic concepts still apply, and much of the logic is the same; just the API changes. Apart from that, OpenGL will be a viable choice for many more years.
Vulkan is hopefully easier to support, for the same reason that JVMs exist for almost any platform.
If you get the official Vulkan runtime to work against your GPU, you are mostly done. (I bet Nvidia will develop their own with a few % more speed and not share it back.)
Many of the devices will later support Vulkan, and on mobile devices and on desktop it has the same API, which simplifies things a lot - only one unified Vulkan renderer is necessary. Since they all want to earn money, OpenGL, especially on Android, will still be there as long as games are made with it in sufficient numbers. Also, there is a good chance jME will support Vulkan when the time comes.
7.1) It is way more than a few, plus a lot of other useful stuff apart from fps - like being able to truly load textures in the background using a second command buffer. (Yeah, I know about shared OpenGL contexts, but they are way more overhead work for the same result, and a constant fps penalty due to multithreading locks.)
7.2) I find today’s hardware still very limiting; I could easily use way more performance. Every minute spent on creating LOD handling is a minute not spent on my game’s logic.
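For readers who haven’t fought this: the OpenGL-era workaround alluded to in 7.1 is to decode texture data on a worker thread and hand the finished bytes to the render thread, which does the actual GPU upload, because GL calls are tied to one thread. A minimal sketch of that producer/consumer pattern follows - all names (`TextureJob`, `uploadToGpu`, `rock.png`) are hypothetical stand-ins, and the real decode/upload bodies are stubbed out. Vulkan’s selling point is that command buffers can be recorded on any thread, making this hand-off unnecessary.

```java
import java.util.concurrent.*;

// Sketch: decode textures off-thread, upload on the render thread.
public class BackgroundLoader {
    // Hypothetical container for decoded pixel data.
    static final class TextureJob {
        final String name; final byte[] pixels;
        TextureJob(String name, byte[] pixels) { this.name = name; this.pixels = pixels; }
    }

    final BlockingQueue<TextureJob> ready = new LinkedBlockingQueue<>();
    final ExecutorService decoder = Executors.newSingleThreadExecutor();

    // Called from anywhere: the slow part (disk + decompress) runs off the render thread.
    void loadAsync(String name) {
        decoder.submit(() -> ready.add(new TextureJob(name, decode(name))));
    }

    // Called once per frame on the render thread: only cheap uploads happen here.
    int drainAndUpload() {
        int uploaded = 0;
        TextureJob job;
        while ((job = ready.poll()) != null) {
            uploadToGpu(job); // must run on the GL thread
            uploaded++;
        }
        return uploaded;
    }

    private byte[] decode(String name) { return new byte[16]; } // stand-in for real decoding
    private void uploadToGpu(TextureJob job) { /* glTexImage2D etc. would go here */ }

    public static void main(String[] args) throws Exception {
        BackgroundLoader loader = new BackgroundLoader();
        loader.loadAsync("rock.png"); // hypothetical asset name
        loader.decoder.shutdown();
        loader.decoder.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(loader.drainAndUpload()); // prints 1
    }
}
```

The per-frame drain is exactly the kind of synchronization point (and fps hiccup source) that a second Vulkan command buffer avoids.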
I really hope that this will not be killed by people like you, and that most people will continue to push forward in the future. Not only did I rarely use books to learn, I can continue to create better games. But I will also get all the advantages I can see right now (and avoid being tied to a greatly fucked-up API that has been crumbling for years).
Plus, if the core is solid then higher level APIs will naturally come on top of it.
Thanks for your profound remarks on that matter, Empire_Phoenix. I had hoped that someone with a bit more knowledge about the topic could put most of my worries into perspective.
I’m not fully convinced yet about what worries me most (that I will have to handle a zoo of APIs in the future, even worse than coding branches for OpenGL 2, 3 and 4). At the moment I’m not yet at the stage where I need to implement shaders but I hope to return to graphics coding in a month or so - so it’s not an urgent matter yet.
Well, anyway, the future will come one way or the other and so it’s probably “que sera sera” or “he who does not move with the times, will vanish over time”…
pspeed is probably right too. Maybe the first versions will behave like the old OpenGL shader language, which was similar to assembler, and after a while a GLSL-like system will follow which creates the byte code that Empire_Phoenix mentioned via a compiler - and maybe we can even automatically transform GLSL to “Vulkan SL”, or “Vulkan SL” to GLSL, via some cool translation software (hopefully free software).
And Empire_Phoenix - you are right - that .pdf chart looks monstrous at first glance. It’s a nice link though which I will probably use during graphics coding.
I don’t know why I like OpenGL and GLSL so much and will miss them when they’re gone. Maybe it’s the old memories (so yes, some kind of nostalgia). When I was coding some OpenGL (which was 8 years ago now) I easily found everything I needed using only a red book and some official .pdf manuals. Sadly I had to deal with the assembler-like shaders that were still around in those days and that another coder had left me as a heritage. I was very happy to have a high-level shading language back then and only encountered a few oddities (some things were different on ATI / NVIDIA - yes, they were still called ATI in those days).
So, maybe I will embrace Vulkan once it’s reached a mature form. And I hope that there will be a driver for older hardware so that my shiny Vulkan code will run on 5 or 10 year old hardware - or the GLSL-to-VulkanSL translation software mentioned earlier. Otherwise I will spend much time talking to two different systems, or will just be one of those old grandpas who still use OpenGL. At least I’m sure that I will want to support older and weaker hardware somehow.
The GLSL-to-Vulkan compiler is the first supported front end for generating the bytecode; others are on a might-follow basis.
Look at page 13.
Or in short: GLSL will stay for quite a while.
Oh yeah, it also has the ability to dynamically work similar to OpenCL instead of OpenGL if the task is more suited to that. So kinda like the old Java promise of the Java-to-OpenCL compiler that never made it.
Also take a look at page 17; it explains roughly why OpenGL is inferior.