How does jME3 achieve real-time rendering?

I’m new to both jME3 and Blender, and just learned how to bake materials into textures inside Blender.

I’m wondering if jME3 supports and encourages baked materials, or if the engine requires/prefers to render its own materials from simple texture files.

There’s also something I can’t shake: in Blender, a single render (even using the more advanced Cycles renderer) can take up to 45 seconds on my machine. But in a jME3 game (or any game, really), you can have amazing graphics, so rendering is clearly happening continuously, many times per second, in real time. So I’m also wondering what the disconnect is between how “slow” Blender’s renders seem to be and how jME3 achieves real-time (or near real-time) rendering.

Well, the simple reason is that Blender uses a far higher-quality approach to the entire way lighting is computed.
Games just use simple approximations for most things, ones that look good enough. Blender is also used for movies, e.g. space scenes, with far more moving and animated objects than any real-time engine can handle.

So they have different approaches:
make the best possible image for each frame, versus make the quickest-to-render image whose quality is still acceptable.

You will sometimes see this with transparent objects: hardly any game engine can properly handle multiple overlapping transparent objects, as that would require resolving the draw order of the different transparent layers for each individual pixel.
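
To see what the usual compromise looks like in jME3, here is a minimal sketch (the texture path is a placeholder): transparent geometries go into the Transparent render bucket, which sorts whole objects back to front instead of sorting per pixel, which is exactly why overlapping transparent surfaces can still render in the wrong order:

```java
// Minimal sketch, assumed to run inside a SimpleApplication's simpleInitApp().
// "Textures/glass.png" is a placeholder asset path.
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", assetManager.loadTexture("Textures/glass.png"));
mat.getAdditionalRenderState().setBlendMode(BlendMode.Alpha); // enable alpha blending

Geometry glass = new Geometry("Glass", new Quad(1, 1));
glass.setMaterial(mat);
// The Transparent bucket is sorted back-to-front per OBJECT, not per pixel,
// so several overlapping transparent objects can still sort incorrectly.
glass.setQueueBucket(Bucket.Transparent);
rootNode.attachChild(glass);
```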
Another huge point is lighting. Blender can actually simulate light transport, with shadows falling out of that calculation. In game engines, your lights ignore any objects, and shadows are added later with whatever technique fits the frame's time budget (hence the jagged edges, flickering etc. that are common in games).

This leads to your question: jME3 can use any texture no matter the source, so if you can save the baked texture, you can use it. This gives you much higher-quality lighting, and many older games used similar approaches extensively. (The Source engine does something similar, spending hours preprocessing a whole map.) The downside is that dynamic lighting, e.g. a day/night cycle, is very difficult to merge with the precalculated values.
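
To make that concrete, here is a minimal jME3 sketch (the model and texture paths are placeholders): since all the lighting is already baked into the texture, an unshaded material is enough and no scene lights are needed at all:

```java
// Minimal sketch, assumed to run inside simpleInitApp().
// "Models/room.j3o" and "Textures/room_baked.png" are placeholder asset paths.
Spatial room = assetManager.loadModel("Models/room.j3o");

// Unshaded material: the baked texture already contains all lighting,
// so the engine does zero lighting work at runtime.
Material baked = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
baked.setTexture("ColorMap", assetManager.loadTexture("Textures/room_baked.png"));
room.setMaterial(baked);
rootNode.attachChild(room);
```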
The Source engine solved this by dynamically adding extra light on top of the static map, and also by generating multiple versions of a static lightmap to easily allow things like flickering lights; in that case it would generate two baked textures for all parts affected by the light. This approach is simple but scales badly. It works fine for small levels and non-dynamic lighting, but if you have, say, a GTA5-style city where some idiot could run over any lantern at any time, you need a dynamic approach.
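
jME3's stock lighting material actually supports this hybrid idea out of the box: you can feed it a baked light map and still layer dynamic lights on top. A minimal sketch, reusing the room spatial from the previous snippet (the texture paths, and the assumption that the light map was baked to a second UV channel in Blender, are mine):

```java
// Hybrid sketch: baked light map plus one dynamic light on top.
// "Textures/room_diffuse.png" and "Textures/room_lightmap.png" are placeholders.
Material lit = new Material(assetManager, "Common/MatDefs/Light/Lighting.j3md");
lit.setTexture("DiffuseMap", assetManager.loadTexture("Textures/room_diffuse.png"));
lit.setTexture("LightMap", assetManager.loadTexture("Textures/room_lightmap.png"));
// Assumes the light map was baked to a second UV set in Blender:
lit.setBoolean("SeparateTexCoord", true);
room.setMaterial(lit);

// A dynamic light is simply added on top of the baked result.
PointLight lamp = new PointLight();
lamp.setPosition(new Vector3f(0, 2, 0));
lamp.setRadius(10f);
rootNode.addLight(lamp);
```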

So in short: you can use it, and it is a good choice if your lighting rarely changes. For mobile development it might even be the only sensible choice to get lighting with acceptable performance in a game.

Real-time rendering is quick because it sacrifices a lot of visual fidelity: for example, it does not compute shadows in a realistic way, it does not follow the physical laws of light (like energy conservation), and so on. Basically, there is a lot of smoke and mirrors.
That said, the difference grows smaller every year, and even if it didn’t, as long as it looks good enough the consumer’s brain fills in the details.

Edit: What Empire Phoenix said.