What is BitFormat for GLSL PixelFormat Luminance and LuminanceAlpha?

I need to send a 16-bit value to my GLSL pixel shader, and I expect it to show up as a normalized value between 0 and 1. Then I should be able to multiply it by 65535 to convert it back to the integer I want.
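In shader terms, this is the usage I'm after. A quick sketch (uDataTex and vUV are just placeholder names here):

precision highp float; // highp so all 16 bits survive the math
uniform sampler2D uDataTex; // the texture holding the 16-bit value
varying vec2 vUV;

void main()
{
    float n = texture2D(uDataTex, vUV).r;   // normalized, 0.0 to 1.0
    float value = floor(n * 65535.0 + 0.5); // back to 0 .. 65535
    gl_FragColor = vec4(vec3(value / 65535.0), 1.0); // dummy output for the sketch
}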

I’m having bad luck with this. I’m not sure what the Pixel Buffer format should be for this Texture.

I’ve tried setting the pixel format to 6406 (GL_ALPHA), 6410 (GL_LUMINANCE_ALPHA), and 6409 (GL_LUMINANCE) to try to get something to work.

But nothing seems to be working or even making sense.

For HLSL, I use the R16G16 format, and it works as expected. The texture’s “r” and “g” values are each 16-bit channels that show up in the shader as normalized floats. I write my values to it as ushorts (16-bit ints), and in the shader I can convert them back to ints by multiplying by 65535. It works like a charm.

But for OpenGL, I can’t seem to get these other formats to make any sense. If I can’t get this working, I’ll have to resort to the RGBA format and combine two channels like this:

int value = int(color.r * 65280.0 + color.g * 255.0 + 0.5); // +0.5 rounds away 8-bit quantization error

For one shader we only need ONE 16-bit value, and we’d prefer the texture to be 16 bpp, not 32 bpp, to save on RAM.

OpenGL seems to be our problem child.

I think I may have figured out PART of my own problem here. I think “LuminanceAlpha” is a 16-bpp texture with two components: luminance and alpha, where the r, g, and b channels are all derived from the luminance. But there are no 16-bit channels here; every channel is 8-bit.

Is there any better way to send a 16-bit value to a GLSL shader via a texture, aside from awkwardly combining the two channels with something like:

int value = int(color.r * 65280.0 + color.a * 255.0 + 0.5);


I don’t think GL ES 2 supports a single-channel 16-bit format like that.


OK, so the best I can do, probably, is to select “LuminanceAlpha” as the pixel format, which gives a 16 bpp texture with two channels, “rgb” and “a” (where r, g, and b are always the same value).

And I’ll have to use the contorted math above to combine those two values into the 16-bit int I need.
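Concretely, assuming the high byte goes in luminance and the low byte in alpha, the decode ends up as a GL ES 2 fragment shader like this (a sketch; uDataTex and vUV are made-up names):

precision highp float; // mediump floats can't hold all 16 bits exactly
uniform sampler2D uDataTex; // LUMINANCE_ALPHA: rgb = high byte, a = low byte
varying vec2 vUV;

void main()
{
    vec4 color = texture2D(uDataTex, vUV);
    // high byte * 256 + low byte; the +0.5 rounds away 8-bit quantization error
    float value = floor(color.r * 65280.0 + color.a * 255.0 + 0.5); // 0 .. 65535
    gl_FragColor = vec4(vec3(value / 65535.0), 1.0); // dummy output for the sketch
}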

OpenGL (at least the ES flavor we target) appears handicapped compared to DirectX, which is a shame. (Its GLSL also doesn’t support bitwise operations, which also sucks, because those make it easier to pack information into various bits.)

Well, the GL ES standard was created to be compatible with the broadest range of hardware, so it is kept as simple as possible. You could look for vendor extensions, but those are seldom used in Urho, and only for the most necessary things (e.g. depth texture support).


I’m not certain, but I think regular (desktop) OpenGL should support a GL_R16 format, or a GL_R16UI format if you want the integers themselves: https://www.khronos.org/opengl/wiki/Image_Format
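With GL_R16UI you’d sample through a usampler2D and get the integer back directly; something like this should work on desktop GL (an untested sketch, uDataTex is a made-up name):

#version 330
uniform usampler2D uDataTex; // bound to a GL_R16UI texture
out vec4 fragColor;

void main()
{
    // integer textures can't be filtered, so fetch the texel directly
    uint value = texelFetch(uDataTex, ivec2(gl_FragCoord.xy), 0).r; // 0u .. 65535u
    fragColor = vec4(vec3(float(value) / 65535.0), 1.0);
}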

It looks like GLSL also supports bitwise operations; it just requires integers for them: https://community.khronos.org/t/bitwise-operators-in-glsl/70532
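They do need #version 130 or later on desktop (or GLSL ES 3.00 on mobile). For example, unpacking two bytes from an int uniform (a sketch; uPacked is a made-up name):

#version 130
uniform int uPacked; // two bytes packed into one int
out vec4 fragColor;

void main()
{
    int hi = (uPacked >> 8) & 0xFF; // high byte
    int lo = uPacked & 0xFF;        // low byte
    fragColor = vec4(float(hi) / 255.0, float(lo) / 255.0, 0.0, 1.0);
}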


SirNate0, thanks! I may give bitwise ops another try in the future. My first attempt failed miserably, even though I was using integers inside the shader; I believe it gave me compiler errors.

We only use OpenGL for Android/iOS, never desktop; on the desktop, we’re using HLSL. HOWEVER, if we could make UrhoSharp use OpenGL on UWP (Universal Windows Platform), we’d prefer that, so we don’t have to keep writing two versions of each shader.