Improving and discussing Urho3D's PBR

EDIT (removed the request to move the discussion from another topic here, so this won’t get lost):
Discussion about PBR:

Agreed. Nope, I cannot, I’m just a regular user. Modanung, can you please clean up the project shots topic a bit? It’s really getting off-topic. Thanks, Wei Tjong.

Too bad you can’t really use PBR for real-time aside from just simple renders.

What do you mean? In Unity, PBR is literally the default shader, and thousands of games use it.
Almost any AAA 3D game from the last 5+ years has used PBR in one form or another.

@Eugene I meant Urho3D, not other engines.

@GodMan And why is that? Why isn’t it usable in real time?

From everything I have read on the forums, if you tried to make a game using PBR it was too slow.

Is it possible to discuss this on topics other than ‘Random project shots’?

Edit by weitjong: Moved here as per your request.

@GodMan Regarding PBR: on IBL cube textures, turn off anisotropy (Urho’s default is 4) and set the filter mode to linear.
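As a sketch of how that might be set per-texture: Urho3D can read filter parameters from an accompanying texture XML file. Hedged heavily — whether the `<anisotropy>` element is supported depends on your Urho3D version, Urho’s filter modes are named nearest/bilinear/trilinear/anisotropic rather than a plain “linear”, and the cube-face definitions are omitted here:

```xml
<!-- Texture parameter sketch for an IBL cube map; element support
     varies by Urho3D version, and face definitions are omitted -->
<cubemap>
    <filter mode="trilinear" />  <!-- "linear" filtering, no anisotropic mode -->
    <anisotropy value="1" />     <!-- override the engine default of 4 -->
</cubemap>
```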

Other costs include:

  • It doesn’t use a split-sum LUT texture, which on PC is cheaper than the analytic approximation currently used (an approximation meant for mobile, where texture reads hurt much more than ALU does). It doesn’t use one because the BRDF isn’t set in stone (see the final word below).
  • Redundant calculations (roughness * roughness computed over and over; once you settle, pack your data into a struct once and pass that around instead), depending on which BRDF parts you’re using.
  • Poor format support in Urho3D (RGBA16F should be used for IBL cubes, or RGBM8 as an alternative); hardware sRGB has spotty performance from GPU to GPU (even within the same vendor).
  • The shader IBL defaults expect an unreasonably large texture (if you’re using more than 256x256x6 for an IBL cube, you’re probably a moron).
  • Legacy lighting systems incur significant costs from repeated setup math compared to tiled or clustered shading; there’s no debate over whether one nDotV or 64 nDotV’s is cheaper.
  • That Urho3D default of 4x anisotropic filtering hurts (mostly because of all of the above); seriously consider DDS and pre-generating your mips for your usage instead.
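The “pack your data once into a struct” point can be sketched like this (plain C++ standing in for shader code; every name here is illustrative, not Urho3D’s):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative only: compute the derived BRDF terms once per pixel, then
// pass the struct to every lighting function instead of recomputing per call.
struct SurfaceData
{
    float roughness;   // artist-facing roughness
    float alpha;       // roughness^2, used by GGX-style distributions
    float alpha2;      // roughness^4, another commonly reused term
    float nDotV;       // view-angle term, computed once per pixel
};

SurfaceData PackSurface(float roughness, float nDotV)
{
    SurfaceData s;
    s.roughness = roughness;
    s.alpha = roughness * roughness;
    s.alpha2 = s.alpha * s.alpha;
    s.nDotV = std::max(nDotV, 1e-4f);  // avoid divide-by-zero at grazing angles
    return s;
}
```

Each per-light call then reads `s.alpha` instead of recomputing roughness * roughness.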

PBR is a base for an environment where no one can agree on even the tiniest thing about how it should be done, so tweak it to be what you need it to be. Lock down how you want to do it, then tweak, and it should be comparable to everything else. Sure, you could have it fast out of the box, but then, with your luck, you’d be the guy who can’t stand having to deal with DDS/KTX and the delay while your new textures are being Toksvig’ed.

@JSandusky So basically it is implemented poorly.

I’m pretty sure RGBA16F textures are supported in Urho3D. Am I mistaken, or am I misunderstanding what you mean?

@JSandusky
After looking up some stuff, I think I figured out what you meant. You mean that Urho can’t save to any file format that stores that sort of precision, correct?

As far as I remember, Urho cannot load from any floating-point format either.
This is so annoying. I cannot even properly store baked lightmaps; I have to use RGBA8.
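As a stopgap for storing HDR data such as lightmaps in RGBA8, RGBM encoding (mentioned above as an IBL option) packs a shared multiplier into the alpha channel. A minimal sketch, with an assumed range constant of 6.0 (a common but arbitrary choice) and names of my own choosing:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// RGBM: store linear HDR colour as (rgb / (M * range), M), with M in alpha.
// The range constant caps representable intensity at kRgbmRange.
constexpr float kRgbmRange = 6.0f;

std::array<float, 4> EncodeRGBM(float r, float g, float b)
{
    float m = std::max({r, g, b}) / kRgbmRange;
    m = std::clamp(m, 1.0f / 255.0f, 1.0f);
    m = std::ceil(m * 255.0f) / 255.0f;        // quantise multiplier to 8 bits
    const float scale = 1.0f / (m * kRgbmRange);
    return {r * scale, g * scale, b * scale, m};
}

std::array<float, 3> DecodeRGBM(const std::array<float, 4>& rgbm)
{
    const float scale = rgbm[3] * kRgbmRange;
    return {rgbm[0] * scale, rgbm[1] * scale, rgbm[2] * scale};
}
```

The four encoded components all land in [0, 1], so they survive an RGBA8 round-trip with only quantisation loss.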

Would adding OpenEXR support alleviate the problem? I haven’t looked in detail, but it looks like it would be pretty straightforward, and I believe the license would be compatible.

Not in the slightest, sadly.
The format is not the issue; DDS is perfectly capable of handling basically any uncompressed (and some compressed) image format.

The issue is that Urho handles all images via the Image class, which is hardcoded to an RGB(A)8 layout.
The exception is compressed formats, but they are read-only.

The only realistic option I see (i.e. one that doesn’t involve breaking the interface of Image) is to add artificial “compressed” formats for RGBA16 and so on, and to teach SaveDDS how to save compressed formats.
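On the container side this is indeed tractable; here is a hedged sketch of a minimal DX10-extended DDS header for an uncompressed RGBA16F image, with the field layout following Microsoft’s public DDS documentation. This is not Urho3D’s actual SaveDDS code:

```cpp
#include <cstdint>
#include <vector>

// Build the 148-byte prefix of a DDS file holding one RGBA16F 2D image:
// 4-byte magic + 124-byte DDS_HEADER + 20-byte DDS_HEADER_DXT10.
std::vector<std::uint8_t> MakeDDSHeaderRGBA16F(std::uint32_t width, std::uint32_t height)
{
    std::vector<std::uint8_t> out;
    auto put32 = [&out](std::uint32_t v) {
        for (int i = 0; i < 4; ++i)                       // little-endian
            out.push_back(static_cast<std::uint8_t>((v >> (8 * i)) & 0xFF));
    };

    out.insert(out.end(), {'D', 'D', 'S', ' '});          // magic
    put32(124);                                           // dwSize
    put32(0x1 | 0x2 | 0x4 | 0x8 | 0x1000);                // CAPS|HEIGHT|WIDTH|PITCH|PIXELFORMAT
    put32(height);
    put32(width);
    put32(width * 8);                                     // pitch: 4 channels * 2 bytes
    put32(0);                                             // depth
    put32(1);                                             // mip count
    for (int i = 0; i < 11; ++i) put32(0);                // dwReserved1
    put32(32);                                            // ddspf.dwSize
    put32(0x4);                                           // DDPF_FOURCC
    out.insert(out.end(), {'D', 'X', '1', '0'});          // fourCC -> DX10 extension
    for (int i = 0; i < 5; ++i) put32(0);                 // bit count + RGBA masks unused
    put32(0x1000);                                        // dwCaps: DDSCAPS_TEXTURE
    for (int i = 0; i < 4; ++i) put32(0);                 // caps2..reserved2
    // DDS_HEADER_DXT10
    put32(10);                                            // DXGI_FORMAT_R16G16B16A16_FLOAT
    put32(3);                                             // D3D10_RESOURCE_DIMENSION_TEXTURE2D
    put32(0);                                             // miscFlag
    put32(1);                                             // arraySize
    put32(0);                                             // miscFlags2
    return out;
}
```

The raw half-float pixel data would simply follow the header, so the hard part remains the Image class, not the file format.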

Also, BMP should be able to save any uncompressed format too. So the issue is entirely with the Image class? Perhaps one could overhaul the Image class? That probably entails way more than it initially sounds like. Why is the Image class locked to 8-bit precision anyway? Is this a consequence of the engine being as old as it is?

Frankly, I think the best option would be to overhaul the Image class. Weird workarounds are great as a short-term solution, but as an open-source project we should probably think about longevity. Using a workaround instead of fixing the problem itself just doesn’t sound like the best option in the long run.

Compressed formats are good because Image already offers limited support for them.
For “normal” images, Image offers a lot of utility functions that would have to be refactored or extended.

These will require a separate implementation for each image component type:

  • Image::SetPixelInt
  • Image::Resize
  • Image::ClearInt
  • Image::GetPixel
  • Image::GetPixelInt
  • Image::GetNextLevel
  • Image::ConvertToRGBA

And I’m not really sure how to make some of these functions work for a generic pixel type.
I mean, how is GetPixelInt even supposed to work for a float texture? We probably want an error message in those cases.

Not having looked at where GetPixelInt and similar are used, my intuition is that it should return the same thing as Color::ToUInt(), even for a floating-point texture, just as GetPixel returns a floating-point representation of the color.
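A hedged sketch of that behaviour (a hypothetical helper, not Urho3D’s actual code; it assumes Color::ToUInt()’s packing order with R in the lowest byte and A in the highest):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical: clamp each float component to [0, 1], quantise to 8 bits,
// and pack A-B-G-R so GetPixelInt could keep its contract on float images.
std::uint32_t PackFloatPixel(float r, float g, float b, float a)
{
    auto to8 = [](float v) -> std::uint32_t {
        return static_cast<std::uint32_t>(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
    };
    return (to8(a) << 24) | (to8(b) << 16) | (to8(g) << 8) | to8(r);
}
```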

We would also need to add the necessary formats to the Graphics class. I think my preference would be to add an argument to the functions:

unsigned GetRGBAFormat();

would become

enum GRAPHICS_NUMBER_FORMAT {
    GNF_INT8=0,
    GNF_FLOAT16,
    GNF_FLOAT32
};

...

unsigned GetRGBAFormat(GRAPHICS_NUMBER_FORMAT fmt = GNF_INT8);

so that the switch statements on the number of components can be updated just by adding an image->GetNumberFormat() or the like to the calls. Though that switch statement to get the graphics format for the image could also be moved from the four texture-kind implementations into the Texture, Image, or Graphics class.
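A sketch of what the extended dispatch could look like; the return values here are descriptive strings purely to show the shape of the switch, where the real function would return graphics-API format enums (e.g. DXGI or GL constants):

```cpp
#include <string>

// Illustrative only: the proposed number-format argument driving format
// selection. Names mirror the enum suggested above; values are stand-ins.
enum GRAPHICS_NUMBER_FORMAT { GNF_INT8 = 0, GNF_FLOAT16, GNF_FLOAT32 };

std::string GetRGBAFormatName(GRAPHICS_NUMBER_FORMAT fmt = GNF_INT8)
{
    switch (fmt)
    {
    case GNF_FLOAT16: return "RGBA16F"; // e.g. DXGI_FORMAT_R16G16B16A16_FLOAT / GL_RGBA16F
    case GNF_FLOAT32: return "RGBA32F"; // e.g. DXGI_FORMAT_R32G32B32A32_FLOAT / GL_RGBA32F
    case GNF_INT8:
    default:          return "RGBA8";   // e.g. DXGI_FORMAT_R8G8B8A8_UNORM / GL_RGBA8
    }
}
```

With the default argument, existing call sites such as GetRGBAFormat() would keep compiling unchanged.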
