Said it before: if you cannot implement some simple task without deep changes in the engine, it's not really "lightweight", it's "missing features". Or outright missing features, e.g. no networking in Web builds. I don't mean Urho cannot be used. It totally can, and the Urho community proves it. But it would be a lie to say that Urho isn't missing anything.
Also, only a person who has never looked at Urho's renderer code can call it "lightweight". It's complicated as hell. Ever looked at View.cpp/Batch.cpp?
Some things, like the logic for ping-ponging postprocess buffers, are definitely not clear or lightweight; instead they're very hardcoded, with an element of "magic" to them, which is not nice to have.
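To illustrate what "less magic" could look like, here's a rough sketch of an explicit ping-pong pair for postprocess passes. All names are made up for illustration, not actual Urho API:

```cpp
#include <cassert>

// Hypothetical sketch: an explicit ping-pong pair for postprocess passes,
// instead of hardcoded index juggling inside the view rendering code.
// Texture is a stand-in for whatever render-target handle the engine uses.
struct Texture { int id; };

class PingPongTargets
{
public:
    PingPongTargets(Texture a, Texture b) : targets_{a, b}, readIndex_(0) {}

    // The current pass samples from the "read" target...
    Texture& Source() { return targets_[readIndex_]; }
    // ...and writes into the other one.
    Texture& Destination() { return targets_[1 - readIndex_]; }

    // After each fullscreen pass, the roles swap.
    void Swap() { readIndex_ = 1 - readIndex_; }

private:
    Texture targets_[2];
    int readIndex_;
};
```

With something like this, the pass loop just calls `Source()`, `Destination()` and `Swap()`, and the reader can see the buffer flow instead of decoding hardcoded indices.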
One thing I pondered as a simpler replacement was having high-level helper operations (i.e. collect visible objects, collect light interactions, render a collection of objects…) in the Renderer, which could be used to build your custom render process in code, somewhat like Unity's Scriptable Render Pipelines are today. Back when I worked on it, Turso3D went a bit in this direction, but didn't get very far.
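A minimal sketch of what such helpers could look like, so the user composes their own pipeline out of them. Everything here is hypothetical; none of these names exist in Urho or Turso3D:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical sketch of a "scriptable" render flow built from high-level
// helpers, rather than a fixed render path.
struct Drawable { unsigned viewMask; bool lit; };
struct Camera { unsigned viewMask; };

// Collect visible objects for a camera (real code would frustum-test an
// octree; here a view-mask check stands in for visibility).
std::vector<Drawable*> CollectVisible(Camera& camera, std::vector<Drawable>& scene)
{
    std::vector<Drawable*> result;
    for (Drawable& d : scene)
        if (d.viewMask & camera.viewMask)
            result.push_back(&d);
    return result;
}

// Render a collection with a caller-supplied per-object operation; the user
// composes these calls into their own render process, SRP-style.
void RenderCollection(const std::vector<Drawable*>& drawables,
                      const std::function<void(Drawable&)>& drawOp)
{
    for (Drawable* d : drawables)
        drawOp(*d);
}
```

The point is that the engine provides the expensive primitives (culling, light collection, batch submission) and the user's code decides the order and combination, instead of a monolithic View::Render doing everything.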
Since you are here, maybe you can share some thoughts?
There are a lot of issues (or so I think) in the renderer.
Going from top to bottom:
Render paths are hard to extend and configure; you need a separate instance for each permutation. E.g. if I want to choose between sRGB and RGB with just a flag, or I want an optional G-buffer layer for deferred, I need to clone the whole render path. We already have this duplication with e.g. Forward/ForwardDepth.
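What I'd want instead is roughly this: one render path definition parameterized by flags, so each permutation is a branch rather than a clone. A hypothetical sketch, not actual Urho API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: parameterizing one render path with flags instead of
// cloning the whole definition per permutation (sRGB on/off, extra G-buffer
// layer, depth prepass...). Pass names are illustrative.
struct RenderPathConfig
{
    bool srgb = false;
    bool extraGBufferLayer = false;
    bool depthPrepass = false;
};

// One builder produces the pass list; every flag combination that today needs
// its own cloned render path definition becomes a branch here.
std::vector<std::string> BuildPasses(const RenderPathConfig& cfg)
{
    std::vector<std::string> passes;
    if (cfg.depthPrepass)
        passes.push_back("depth");
    passes.push_back(cfg.extraGBufferLayer ? "gbuffer_extended" : "gbuffer");
    passes.push_back("light");
    passes.push_back(cfg.srgb ? "tonemap_srgb" : "tonemap");
    return passes;
}
```

Forward vs. ForwardDepth would then just be `depthPrepass = true` on the same path instead of two near-identical definitions.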
There is no per-view-per-drawable state, essentially making things like smooth LOD transitions very hard to implement in multi-view use cases.
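For clarity, here's the kind of storage I mean: state keyed by (view, drawable) pair, so e.g. a LOD cross-fade factor can differ per view. Names are illustrative, not Urho API:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Hypothetical sketch: per-(view, drawable) state owned by the renderer, so a
// drawable can carry a different LOD cross-fade in each view it appears in.
struct View { int id; };
struct Drawable { int id; };

struct PerViewDrawableState
{
    float lodFade = 1.0f;   // 0..1 cross-fade between LOD levels
    int currentLod = 0;
};

class ViewDrawableStates
{
public:
    PerViewDrawableState& Get(const View& v, const Drawable& d)
    {
        return states_[{v.id, d.id}];  // default-constructed on first access
    }

private:
    std::map<std::pair<int, int>, PerViewDrawableState> states_;
};
```

A real implementation would need eviction and probably a flat container for speed, but the key point is that the state lives outside the Drawable, so two simultaneous views don't stomp on each other.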
Drawables store a lot of temporary View-specific data used only in rendering.
Vertex lights fall back to pixel lights (lolwat) instead of spherical harmonics (can't really blame Urho because there was no SH support back then, but it's still an issue).
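In case SH is unfamiliar: the idea is to fold weak lights into a handful of coefficients evaluated per vertex, instead of promoting them to full per-pixel lights. A simplified single-channel sketch of linear (L1) SH; a real implementation stores RGB coefficients and convolves with the cosine lobe for irradiance:

```cpp
#include <cassert>

// Hypothetical sketch: accumulating a directional light into linear (L1)
// spherical harmonics, then evaluating it per vertex normal.
// One color channel shown; coefficient order: constant, y, z, x basis.
struct SH1 { float c[4] = {0, 0, 0, 0}; };

// Accumulate a directional light of given intensity coming from the unit
// direction (x, y, z) into the SH coefficients.
void AddDirectionalLight(SH1& sh, float intensity, float x, float y, float z)
{
    const float k0 = 0.282095f;   // Y00 basis constant
    const float k1 = 0.488603f;   // Y1m basis constant
    sh.c[0] += intensity * k0;
    sh.c[1] += intensity * k1 * y;
    sh.c[2] += intensity * k1 * z;
    sh.c[3] += intensity * k1 * x;
}

// Evaluate the SH in direction (x, y, z) -- this is the cheap part a vertex
// shader would do, regardless of how many lights were accumulated.
float EvaluateSH(const SH1& sh, float x, float y, float z)
{
    const float k0 = 0.282095f;
    const float k1 = 0.488603f;
    return sh.c[0] * k0 + sh.c[1] * k1 * y + sh.c[2] * k1 * z + sh.c[3] * k1 * x;
}
```

The win is that N vertex lights collapse into one fixed-size set of coefficients, so the shader cost doesn't grow with light count, which is exactly what falling back to pixel lights ruins.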
Hardcoded vertex and pixel shader permutations, missing automatic shader defines. E.g. I want the shader to automatically get defines like PASS_LITBASE, TEXTURE_DIFFUSE or INPUT_NORMAL. If we had that, there would be no need to have 100 techniques, one for each permutation.
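Concretely, the engine could derive defines from the pass and material state instead of having techniques spell them out. A sketch; `PASS_LITBASE` etc. are the example names from above, and the structures are hypothetical:

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Hypothetical sketch: deriving shader defines automatically from pass name
// and material state, instead of per-technique hardcoded permutations.
struct MaterialDesc
{
    bool hasDiffuseTexture = false;
    bool hasNormals = false;
};

std::vector<std::string> CollectDefines(const std::string& passName,
                                        const MaterialDesc& material)
{
    std::vector<std::string> defines;
    std::string pass = "PASS_";
    for (char ch : passName)
        pass += static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
    defines.push_back(pass);                     // e.g. PASS_LITBASE
    if (material.hasDiffuseTexture)
        defines.push_back("TEXTURE_DIFFUSE");    // texture slot is bound
    if (material.hasNormals)
        defines.push_back("INPUT_NORMAL");       // vertex stream has normals
    return defines;
}
```

The resulting define list would then feed straight into shader compilation, and one technique could cover every texture/vertex-stream combination.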
Almost zero caching of the computations done in View/Batch.cpp, even when the result is the same across consecutive frames.
Uniform buffers are used in a way opposite to how they are supposed to be used. No wonder they are slower on OpenGL and disabled there.
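For context, the intended pattern is usually: pack all per-object constants for the frame into one large buffer at aligned offsets, upload once, then bind sub-ranges per draw, rather than re-uploading a small buffer before every draw. A CPU-side sketch of the packing (no GL calls; the alignment value and names are assumptions, real code queries `GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT`):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of batched uniform buffer usage: lay out every object's
// constants in one shared buffer at aligned offsets, to be uploaded in a
// single call and bound per draw with offset/size (glBindBufferRange).
struct ObjectConstants { float worldMatrix[16]; };

// A typical offset alignment; queried from the driver in real code.
constexpr std::size_t kUboAlignment = 256;

constexpr std::size_t AlignUp(std::size_t value, std::size_t alignment)
{
    return (value + alignment - 1) & ~(alignment - 1);
}

// Returns the byte offset of each object's constants within the shared buffer.
std::vector<std::size_t> PackFrameConstants(std::size_t objectCount,
                                            std::vector<unsigned char>& buffer)
{
    const std::size_t stride = AlignUp(sizeof(ObjectConstants), kUboAlignment);
    buffer.resize(stride * objectCount);
    std::vector<std::size_t> offsets;
    for (std::size_t i = 0; i < objectCount; ++i)
        offsets.push_back(i * stride);  // later: bind range [offsets[i], stride)
    return offsets;
}
```

One upload plus cheap per-draw range binds is where uniform buffers actually win over loose uniforms; updating a buffer per draw call just adds driver synchronization on top.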
I don't really have other ideas, but I thought the default shaders (i.e. LitSolid) are quite monstrous, and it would have been nicer to have a shader per purpose, like DiffuseTextured or NormalMappedTextured, and possibly get rid of Techniques altogether at the same time. Shaders could then declare (either automatically or manually) what requirements they have for textures / vertex input streams.
How would you avoid copy-paste in such shaders?
Even if you extract everything into functions… Well, we already extracted pretty much everything into functions, and LitSolid is still quite huge. If we split it, there would be a lot of repeated lines across all the variations.
It's also not clear which configurations deserve a separate shader and which should just be a define. How many variations should spawn from LitSolid alone?
However, I see how shaders are split more as a content choice; the engine shouldn't need to mandate it either way, just be able to configure whatever it needs from the shader (like the light type or shadowing on/off). Ideally I always saw whatever shaders Urho includes just as examples, with the user writing their real shaders themselves.
Having deferred / forward embedded in the same material or shader is also potentially cumbersome, no matter how you look at it.