I am not an OpenGL guru and I'm sure someone who is will protest loudly and rudely in the comments below about something that's wrong here, but... I effectively wrote an OpenGL ES 2.0 driver for Chrome. During that time I learned a bunch of trivia about OpenGL that I think is probably not common knowledge.
Until OpenGL 3.1 you didn't need to call glGenBuffers, glGenTextures, glGenRenderbuffers, glGenFramebuffers
You still don't need to call them if you're using the compatibility profile.
The spec effectively said that all the glGenXXX functions do is manage numbers for you, but it was perfectly fine to make up your own numbers.
const GLuint id = 123;
glBindBuffer(GL_ARRAY_BUFFER, id);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
I found this out when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.
Note: I am not suggesting you should not call glGenXXX! I'm just pointing out the trivia that they don't/didn't need to be called.
Texture 0 is the default texture.
You can set it the same as any other texture
glBindTexture(GL_TEXTURE_2D, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);
Now if you happen to use the default texture it will be red.
I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it. It was also a little bit of a disappointment to me that WebGL didn't ship with this feature. I brought it up with the committee when I discovered it but I think people just wanted to ship rather than go back and revisit the spec to make it compatible with OpenGL and OpenGL ES. Especially since this trivia seems not well known and therefore rarely used.
Compiling a shader is not required to fail even if there are errors in your shader.
The spec, at least the ES spec, says that glCompileShader can always return success. The spec only requires that glLinkProgram fail if the shaders are bad.
I found this out as well when running the OpenGL ES 2.0 conformance tests against the implementation in Chrome as they test for it.
This trivia is unlikely to ever matter to you unless you're on some low memory embedded device.
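Because of this, the reliable place to check for shader errors is after linking, not after compiling. A minimal sketch of that pattern (assumes a current GL context and already-created shader objects vs and fs; not runnable on its own):

```c
/* Compile; note GL_COMPILE_STATUS is allowed to report success even
   for a bad shader, so don't rely on it alone. */
glCompileShader(vs);
glCompileShader(fs);

GLuint prg = glCreateProgram();
glAttachShader(prg, vs);
glAttachShader(prg, fs);
glLinkProgram(prg);

/* The link status is the one the spec requires to be accurate. */
GLint linked = 0;
glGetProgramiv(prg, GL_LINK_STATUS, &linked);
if (!linked) {
  char log[1024];
  glGetProgramInfoLog(prg, sizeof(log), NULL, log);
  fprintf(stderr, "link failed: %s\n", log);
}
```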
There were no OpenGL conformance tests until 2012-ish
I don't know the actual date but when I was using the OpenGL ES 2.0 conformance tests they were being back ported to OpenGL because there had never been an official set of tests. This is one reason there are so many issues with various OpenGL implementations or at least were in the past. Tests now exist but of course any edge case they miss is almost guaranteed to show inconsistencies across implementations.
This is also a lesson I learned. If you don't have comprehensive conformance tests for your standards, implementations will diverge. Making them comprehensive is hard, but if you don't want your standard to devolve into lots of non-standard edge cases then you need to invest the time to make comprehensive conformance tests and do your best to make them easily usable with implementations other than your own. Not just for APIs; file formats are another place comprehensive conformance tests would likely help keep non-standard variations to a minimum.
Here are the WebGL2 tests as examples and here are the OpenGL tests. The OpenGL ones were not made public until 2017, 25 years after OpenGL shipped.
Whether or not fragments get clipped by the viewport is implementation specific
This may or may not be fixed in the spec but it is not fixed in actual implementations.
Originally the viewport setting set by glViewport only clipped vertices (and/or the triangles they create). But, for example, if you draw a 32x32 pixel POINTS point whose center is 2 pixels off the edge of the viewport, should the 14 pixels still inside the viewport be drawn? NVidia says yes, AMD says no. The OpenGL ES spec says yes, the OpenGL spec says no.
Arguably the answer should be yes; otherwise POINTS are entirely useless for any size other than 1.0.
POINTS have a max size. That size can be 1.0.
I don't think it's trivia really, but it might be. Plenty of projects might use POINTS for particles, expanding the size based on the distance from the camera, but it turns out they may never expand, or they might be limited to some size like 64x64.
I find it very strange that there is a limit. I can imagine there is/was dedicated hardware to draw points in the past. It's relatively trivial to implement them yourself using instanced drawing and some trivial math in the vertex shader, with no size limit, so I'm surprised GPUs don't just use that method and skip the size limit.
But whatever, it's how it is. Basically you should not use POINTS if you want consistent behavior.
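You can at least query the limit at runtime. A small sketch (assumes a current context; on desktop GL, shader-written point sizes also require GL_PROGRAM_POINT_SIZE to be enabled):

```c
/* range[1] is the max point size; implementations are allowed to report 1.0. */
GLfloat range[2] = { 0.0f, 0.0f };
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, range);
printf("point size: min %f, max %f\n", range[0], range[1]);
```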
LINES have a max thickness of 1.0 in core OpenGL
Older OpenGL, and therefore the compatibility profile of OpenGL, supports lines of various thicknesses, although like points above the max thickness was driver/GPU dependent and allowed to be just 1.0. But in the core spec, as of OpenGL 3.0, only 1.0 is allowed, period.
The funny thing is the spec still explains how glLineWidth works. It's only buried in the appendix that it doesn't actually work.
E.2.1 Deprecated But Still Supported Features
The following features are deprecated, but still present in the core profile. They may be removed from a future version of OpenGL, and are removed in a forward compatible context implementing the core profile.
- Wide lines - LineWidth values greater than 1.0 will generate an INVALID_VALUE error.
The point is, except for maybe debugging, you probably don't want to use LINES; instead you need to rasterize lines yourself using triangles.
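You can observe the restriction directly with glGetError. A sketch (assumes a current context; whether the error fires depends on the profile and context flags, per the appendix quoted above):

```c
glLineWidth(4.0f);
/* In a forward compatible core context this reports GL_INVALID_VALUE;
   in a compatibility context it's accepted, up to the driver's max width. */
GLenum err = glGetError();
if (err == GL_INVALID_VALUE) {
  printf("wide lines are not supported in this context\n");
}
```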
You don't need to setup any attributes or buffers to render.
This comes up from needing to make the smallest repros, either to post on Stack Overflow or to file a bug. Let's assume you're using core OpenGL or OpenGL ES 2.0+ so that you're required to write shaders. Here's the simplest code to test a texture
const GLchar* vsrc = R"(#version 300 es
void main() {
  gl_Position = vec4(0, 0, 0, 1);
  gl_PointSize = 100.0;
})";
const GLchar* fsrc = R"(#version 300 es
precision highp float;
uniform sampler2D tex;
out vec4 color;
void main() {
  color = texture(tex, gl_PointCoord);
})";

GLuint prg = someUtilToCompileShadersAndLinkToProgram(vsrc, fsrc);
glUseProgram(prg);

// this block only needed in GL, not GL ES
{
  glEnable(GL_PROGRAM_POINT_SIZE);
  GLuint vertex_array;
  glGenVertexArrays(1, &vertex_array);
  glBindVertexArray(vertex_array);
}

const GLubyte oneRedPixel[] = { 0xFF, 0x00, 0x00, 0xFF };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, oneRedPixel);
glDrawArrays(GL_POINTS, 0, 1);
Note: no attributes, no buffers, and I can test things about textures. If I wanted to try multiple things I can just change the vertex shader to
const GLchar* vsrc = R"(#version 300 es
layout(location = 0) in vec4 position;
void main() {
  gl_Position = position;
  gl_PointSize = 100.0;
})";
And then use glVertexAttrib to change the position. Example:
glVertexAttrib2f(0, -0.5, 0);  // draw on left
glDrawArrays(GL_POINTS, 0, 1);
...
glVertexAttrib2f(0, 0.5, 0);   // draw on right
glDrawArrays(GL_POINTS, 0, 1);
Note that even if we used this second shader and didn't call glVertexAttrib, we'd get a point in the center of the viewport. See the next item.
PS: This may only work in the core profile.
The default attribute value is 0, 0, 0, 1
I see this all the time. Someone declares a position attribute as vec3 and then manually sets w to 1.
in vec3 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * vec4(position, 1);
}
The thing is, for attributes w defaults to 1.0, so this will work just as well
in vec4 position;
uniform mat4 matrix;
void main() {
  gl_Position = matrix * position;
}
It doesn't matter that you're only supplying x, y, and z from your attributes; w defaults to 1.
Framebuffers are cheap and you should create more of them rather than modify them.
I'm not sure if this is well known or not. It partly falls out from understanding the API.
A framebuffer is a tiny thing that just consists of a collection of references to textures and renderbuffers. Therefore don't be afraid to make more.
Let's say you're doing some multipass post processing where you swap inputs and outputs.
texture A as uniform input => pass 1 shader => texture B attached to framebuffer
texture B as uniform input => pass 2 shader => texture A attached to framebuffer
texture A as uniform input => pass 3 shader => texture B attached to framebuffer
texture B as uniform input => pass 4 shader => texture A attached to framebuffer
...
You can implement this in 2 ways:

1. Make one framebuffer; call gl.framebufferTexture2D to set which texture to render to between passes.
2. Make 2 framebuffers; attach texture A to one and texture B to the other. Bind the other framebuffer between passes.
Method 2 is better. Every time you change the settings inside a framebuffer the driver potentially has to check a bunch of stuff at render time. Don't change anything and nothing has to be checked again.
This arguably includes glDrawBuffers, which is also framebuffer state. If you need multiple settings for glDrawBuffers, make a different framebuffer with the same attachments but different glDrawBuffers settings.
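The two-framebuffer approach can be sketched like this (a hedged sketch; texA, texB, numPasses, and drawPass are hypothetical placeholders, and a current context plus complete attachment setup is assumed):

```c
/* One-time setup: two framebuffers whose attachments never change. */
GLuint fbs[2];
glGenFramebuffers(2, fbs);
glBindFramebuffer(GL_FRAMEBUFFER, fbs[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texB, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbs[1]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texA, 0);

/* Per pass: only bind calls; no framebuffer state is ever modified,
   so the driver never has to re-validate the attachments. */
GLuint srcTex[2] = { texA, texB };
for (int pass = 0; pass < numPasses; ++pass) {
  glBindFramebuffer(GL_FRAMEBUFFER, fbs[pass % 2]);  /* writes B, A, B, A... */
  glBindTexture(GL_TEXTURE_2D, srcTex[pass % 2]);    /* reads  A, B, A, B... */
  drawPass();  /* hypothetical: issues the full-screen draw for this pass */
}
```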
Arguably this is likely a trivial optimization. The more important point is framebuffers themselves are cheap.
The TexImage2D API leads to interesting complications
Not too many people seem to be aware of the implications of TexImage2D. Consider that in order to function on the GPU your texture must be set up with the correct number of mip levels. You can set how many. It could be 1 mip. It could be a bunch, but they each have to be the correct size and format. Let's say you have an 8x8 texture and you want to do the standard thing (not setting any other texture or sampler parameters). You'll also need a 4x4 mip, a 2x2 mip, and a 1x1 mip. You can get those automatically by uploading the level 0 8x8 mip and calling glGenerateMipmap.
Those 4 mip levels need to be copied to the GPU, ideally without wasting a lot of memory. But look at the API. There's nothing in it that says I can't do this
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 8, 8, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData8x8);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA, 20, 40, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData20x40);
glTexImage2D(GL_TEXTURE_2D, 2, GL_RGBA, 10, 20, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData10x20);
glTexImage2D(GL_TEXTURE_2D, 3, GL_RGBA, 5, 10, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData5x10);
glTexImage2D(GL_TEXTURE_2D, 4, GL_RGBA, 2, 5, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData2x5);
glTexImage2D(GL_TEXTURE_2D, 5, GL_RGBA, 1, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x2);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);
If it's not clear what that code does a normal mipmap looks like this
but the mip chain above looks like this
Now, the texture above will not render, but the code is valid, no errors, and I can fix it by adding this line at the bottom
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 40, 80, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData40x80);
I can even do this
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1x1);
or this
glTexImage2D(GL_TEXTURE_2D, 6, GL_RGBA, 1000, 1000, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData1000x1000);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 5);
Do you see the issue? The API can't actually know anything about what you're trying to do until you draw. All the data you send to each mip just has to sit around, because there's no way for the API to know beforehand what state all the mips will be in until you finally decide to draw with that texture. Maybe you supply the last mip first. Maybe you supply different internal formats to every mip and then fix them all later.
Ideally you'd specify the level 0 mip and then it would be an error to specify any other mip that does not match. Same internal format, correct size for the current level 0. That still might not be perfect because on changing level 0 all the mips might be the wrong size or format but it could be that changing level 0 to a different size invalidates all the other mip levels.
This is specifically why TexStorage2D was added. But TexImage2D is pretty much ingrained at this point.
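For reference, a sketch of the TexStorage2D style (assumes a current GL 4.2+ or ES 3.0 context; pixelData8x8 is a hypothetical placeholder):

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Allocate the entire immutable mip chain up front: 8x8, 4x4, 2x2, 1x1.
   The size and format of every level are now known and fixed. */
glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 8, 8);

/* From here on, only sub-image uploads are allowed. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 8, 8, GL_RGBA, GL_UNSIGNED_BYTE, pixelData8x8);
glGenerateMipmap(GL_TEXTURE_2D);

/* Re-specifying with glTexImage2D would now be GL_INVALID_OPERATION. */
```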