WebGL Image Processing

Image processing is easy in WebGL. How easy? Read below.

This is a continuation from WebGL Fundamentals. If you haven't read that, I'd suggest going there first.

To draw images in WebGL we need to use textures. Similarly to the way WebGL expects clipspace coordinates when rendering instead of pixels, WebGL expects texture coordinates when reading a texture. Texture coordinates go from 0.0 to 1.0 no matter the dimensions of the texture.
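For example, converting a rectangle given in pixels into texture coordinates is just a divide by the image's dimensions. Here's a quick sketch (this helper and its name are mine for illustration, not part of the sample code):

```javascript
// Hypothetical helper: convert a rectangle given in pixels into the
// 0.0 to 1.0 texture coordinates WebGL expects, whatever the texture's size.
function pixelsToTexCoords(x, y, width, height, imageWidth, imageHeight) {
  return {
    left:   x / imageWidth,
    right:  (x + width) / imageWidth,
    top:    y / imageHeight,
    bottom: (y + height) / imageHeight,
  };
}
```

So for a 640×480 image, the pixel rectangle at (0, 0) sized 320×240 covers texture coordinates 0.0 to 0.5 in both directions.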

Since we are only drawing a single rectangle (well, 2 triangles) we need to tell WebGL which place in the texture each point in the rectangle corresponds to. We’ll pass this information from the vertex shader to the fragment shader using a special kind of variable called a ‘varying’. It’s called a varying because it varies. WebGL will interpolate the values we provide in the vertex shader as it draws each pixel using the fragment shader.

Using the vertex shader from the end of the previous post, we need to add an attribute to pass in texture coordinates and then pass those on to the fragment shader.

attribute vec2 a_texCoord;
...
varying vec2 v_texCoord;

void main() {
   ...
   // pass the texCoord to the fragment shader
   // The GPU will interpolate this value between points
   v_texCoord = a_texCoord;
}

Then we supply a fragment shader to look up colors from the texture.

<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;

// our texture
uniform sampler2D u_image;

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main() {
   // Look up a color from the texture.
   gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>

Finally we need to load an image, create a texture, and copy the image into the texture. Because we are in a browser, images load asynchronously, so we have to re-arrange our code a little to wait for the image to load. Once it loads we'll draw it.

function main() {
  var image = new Image();
  image.src = "http://someimage/on/our/server";  // MUST BE SAME DOMAIN!!!
  image.onload = function() {
    render(image);
  }
}

function render(image) {
  ...
  // all the code we had before.
  ...
  // look up where the texture coordinates need to go.
  var texCoordLocation = gl.getAttribLocation(program, "a_texCoord");

  // provide texture coordinates for the rectangle.
  var texCoordBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
      0.0,  0.0,
      1.0,  0.0,
      0.0,  1.0,
      0.0,  1.0,
      1.0,  0.0,
      1.0,  1.0]), gl.STATIC_DRAW);
  gl.enableVertexAttribArray(texCoordLocation);
  gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, 0, 0);

  // Create a texture.
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);

  // Set the parameters so we can render any size image.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  // Upload the image into the texture.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  ...
}

And here’s the image rendered in WebGL.



Not too exciting so let’s manipulate that image. How about just swapping red and blue?

   ...
   gl_FragColor = texture2D(u_image, v_texCoord).bgra;
   ...

And now red and blue are swapped.



What if we want to do image processing that actually looks at other pixels? Since WebGL references textures using texture coordinates that go from 0.0 to 1.0, we can calculate how far to move for 1 pixel with the simple math onePixel = 1.0 / textureSize.

Here’s a fragment shader that averages the left and right pixels of each pixel in the texture.

<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;

// our texture
uniform sampler2D u_image;
uniform vec2 u_textureSize;

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main() {
   // compute 1 pixel in texture coordinates.
   vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;

   // average the left, middle, and right pixels.
   gl_FragColor = (
       texture2D(u_image, v_texCoord) +
       texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
       texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
}
</script>

We then need to pass in the size of the texture from JavaScript.

  ...
  var textureSizeLocation = gl.getUniformLocation(program, "u_textureSize");
  ...
  // set the size of the image
  gl.uniform2f(textureSizeLocation, image.width, image.height);
  ...

Compare to the un-blurred image above.



Now that we know how to reference other pixels, let's use a convolution kernel to do a bunch of common image processing. In this case we'll use a 3×3 kernel. A convolution kernel is just a 3×3 matrix where each entry represents how much to multiply the corresponding pixel in the 3×3 block of pixels centered on the pixel we are rendering. We then divide the result by the weight of the kernel (the sum of all values in the kernel) or 1.0, whichever is greater. Here's a pretty good article on it. And here's another article showing some actual code if you were to write this by hand in C++.
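If you were doing this on the CPU instead, the same idea looks roughly like the sketch below (illustrative only; `pixels` is assumed to be a flat RGBA array like the one `getImageData` returns, and clamping at the edges stands in for CLAMP_TO_EDGE):

```javascript
// Illustrative sketch: apply a 3x3 convolution kernel to one pixel of a
// flat RGBA array (as returned by getImageData). Edge pixels are clamped,
// mimicking gl.CLAMP_TO_EDGE.
function convolvePixel(pixels, width, height, x, y, kernel) {
  // Divide by the kernel's weight (sum of its entries) or 1.0, whichever is greater.
  var weight = Math.max(1.0, kernel.reduce(function(a, b) { return a + b; }, 0));
  var result = [0, 0, 0, 0];
  for (var ky = -1; ky <= 1; ++ky) {
    for (var kx = -1; kx <= 1; ++kx) {
      // Clamp the neighbor coordinates to the image.
      var px = Math.min(width - 1, Math.max(0, x + kx));
      var py = Math.min(height - 1, Math.max(0, y + ky));
      var k = kernel[(ky + 1) * 3 + (kx + 1)];
      var offset = (py * width + px) * 4;
      for (var c = 0; c < 4; ++c) {
        result[c] += pixels[offset + c] * k;
      }
    }
  }
  return result.map(function(v) { return v / weight; });
}
```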

In our case we’re going to do that work in the shader so here’s the new fragment shader.

<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;

// our texture
uniform sampler2D u_image;
uniform vec2 u_textureSize;
uniform float u_kernel[9];

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main() {
   vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
   vec4 colorSum =
     texture2D(u_image, v_texCoord + onePixel * vec2(-1, -1)) * u_kernel[0] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0, -1)) * u_kernel[1] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1, -1)) * u_kernel[2] +
     texture2D(u_image, v_texCoord + onePixel * vec2(-1,  0)) * u_kernel[3] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0,  0)) * u_kernel[4] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1,  0)) * u_kernel[5] +
     texture2D(u_image, v_texCoord + onePixel * vec2(-1,  1)) * u_kernel[6] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0,  1)) * u_kernel[7] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1,  1)) * u_kernel[8] ;
   float kernelWeight =
     u_kernel[0] +
     u_kernel[1] +
     u_kernel[2] +
     u_kernel[3] +
     u_kernel[4] +
     u_kernel[5] +
     u_kernel[6] +
     u_kernel[7] +
     u_kernel[8] ;

   if (kernelWeight <= 0.0) {
     kernelWeight = 1.0;
   }

   // Divide the sum by the weight but just use rgb
   // we'll set alpha to 1.0
   gl_FragColor = vec4((colorSum / kernelWeight).rgb, 1.0);
}
</script>

In JavaScript we need to supply a convolution kernel.

  ...
  var kernelLocation = gl.getUniformLocation(program, "u_kernel[0]");
  ...
  var edgeDetectKernel = [
      -1, -1, -1,
      -1,  8, -1,
      -1, -1, -1
  ];
  gl.uniform1fv(kernelLocation, edgeDetectKernel);
  ...
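A few other classic kernels (standard image-processing values, not something defined by this article's code) can be supplied the same way:

```javascript
// A few common 3x3 kernels to try. The shader divides by the sum of the
// entries (or by 1.0 when the sum is <= 0), so none of these need to be
// pre-normalized before being passed to gl.uniform1fv.
var kernels = {
  // Leaves the image unchanged.
  identity: [
     0, 0, 0,
     0, 1, 0,
     0, 0, 0
  ],
  // Averages the 3x3 neighborhood (the shader divides by the weight, 9).
  boxBlur: [
     1, 1, 1,
     1, 1, 1,
     1, 1, 1
  ],
  // Emphasizes the center pixel relative to its neighbors.
  sharpen: [
     0, -1,  0,
    -1,  5, -1,
     0, -1,  0
  ],
};
```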

And voilà... Use the drop-down list to select different kernels.



I hope this article has convinced you image processing in WebGL is pretty simple. Next up I'll go over how to apply more than one effect to the image.

'u_image' is never set. How does that work?

Uniforms default to 0, so u_image defaults to using texture unit 0. Texture unit 0 is also the default active texture, so calling gl.bindTexture will bind the texture to texture unit 0.

WebGL has an array of texture units. Which texture unit each sampler uniform references is set by looking up the location of that sampler uniform and then setting the index of the texture unit you want it to reference.

For example:

var textureUnitIndex = 6; // use texture unit 6.
var u_imageLoc = gl.getUniformLocation(
    program, "u_image");
gl.uniform1i(u_imageLoc, textureUnitIndex);

To set textures on different units you call gl.activeTexture and then bind the texture you want on that unit. For example:

// Bind someTexture to texture unit 6.
gl.activeTexture(gl.TEXTURE6);
gl.bindTexture(gl.TEXTURE_2D, someTexture);

This works too:

var textureUnitIndex = 6; // use texture unit 6.
// Bind someTexture to texture unit 6.
gl.activeTexture(gl.TEXTURE0 + textureUnitIndex);
gl.bindTexture(gl.TEXTURE_2D, someTexture);

What's with the a_, u_, and v_ prefixes in front of variables in GLSL?

That's just a naming convention: a_ for attributes, which are the data provided by buffers; u_ for uniforms, which are inputs to the shaders; and v_ for varyings, which are values passed from a vertex shader to a fragment shader and interpolated (or varied) between the vertices for each pixel drawn.

  • Pieru

    Thanks for taking the time to make these tutorials. As I am fairly new to OpenGL and WebGL, this is helping me a lot with progressing with my project. In this respect, I am encountering an issue with reading the pixels of an image loaded on a canvas, and I am not understanding why I am getting these values. So, for the experiment, I have an image (PNG) which I am loading on a canvas with an HTML5 2D context. I use getImageData to retrieve the pixels and I notice that their values are off by 1 with respect to the real values of the pixels (as viewed in Photoshop). So, I thought to run the same experiment in a WebGL canvas based on your above implementation, and when I try to read the pixels with getImageData the pixel values are off by a lot. For instance, the red component is 44 when in reality it should be 132. Would you know where I am going wrong, and more importantly, is it possible to extract the correct pixel values via a WebGL context?

    Thanks in advance

  • http://greggman.com greggman

    Canvas and most browsers use pre-multiplied alpha values. PNG files correctly use un-premultiplied alpha but when the browser loads the image it pre-multiplies the alpha.

    In other words, if the value in the PNG file is 255,255,255,128, that 128 represents an alpha of 0.5, and most browsers will change that to 127,127,127,128 (they multiply the RGB part by the alpha).

    You can tell the browser not to do this in WebGL by calling

    gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);

    Browsers also sometimes apply color-space conversions. In WebGL you can turn that off with

    gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, false);

    You need to call both of those functions before you upload your texture.

    After that you might get the values you're looking for... but I'd try calling gl.readPixels to read the values.

  • Pieru

    Hi Greg,

    Thanks for the tip; however, this does not work. So, I set the pixelStorei flags just before the '// setup GLSL program' comment in your code, and then I used the readPixels method as you recommended. Here is the function that I call:

    
    function GetPixelValue() {
      var x = parseInt(document.getElementById("X").value, 10);
      var y = parseInt(document.getElementById("Y").value, 10);
      var rgba = new Array();
      var pixelValues = new Uint8Array(4);
      gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixelValues);
      rgba[0] = pixelValues[0];
      rgba[1] = pixelValues[1];
      rgba[2] = pixelValues[2];
      rgba[3] = pixelValues[3];
      alert("(" + x + "," + y + ") - [" + rgba + "]");
    }
    
    

    When I run GetPixelValue() I get, for instance, (56, 85, 99, 255) for a certain pixel from the canvas, while in Photoshop that same pixel is equal to (188, 205, 236, 255).

    Would you have an idea as to why this is? (I am using FF 17.0.1 for this project.) Thanks in advance.

  • http://greggman.com greggman

    Try putting the calls to pixelStorei at the beginning of your program just after you call getContext.

  • Pieru

    Hi Greg,

    Sadly, configuring the pixelStorei flags right after getting the context does not help either. The values are exactly the same as the ones above. I modified the create3DContext function in your webgl-utils.js like this:

    var create3DContext = function(canvas, opt_attribs) {
      var names = ["webgl", "experimental-webgl"];
      var context = null;
      for (var ii = 0; ii < names.length; ++ii) {
        try {
          context = canvas.getContext(names[ii], opt_attribs);
        } catch(e) {}
        if (context) {
          context.pixelStorei(context.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
          context.pixelStorei(context.UNPACK_COLORSPACE_CONVERSION_WEBGL, false);
          break;
        }
      }
      return context;
    }

    Would you have any other ideas? Otherwise, can you comment if it is really possible to get the proper pixel values?

    Thanks in advance.

  • http://greggman.com greggman

    Yes, you should be able to get the correct pixel values. You can see the official WebGL tests for some of these issues here https://www.khronos.org/registry/webgl/sdk/tests/conformance/textures/gl-teximage.html

  • Pieru

    Hi Greg,

    I have good news. When looking for pixels at coordinate (0, 0) (i.e. in the canvas coordinate system), one must actually "flip" along the y axis again (for the same reason you do it in the vertex shader, as the "clipspace" coordinate origin is at the bottom left, unlike in the canvas coordinate system).

    So, if my image is 640×480 and I am interested in pixel (0, 0), then I must query for (0, 480). Likewise, if I am interested in the pixel (640, 480), then I should query for pixel (640, 0).

    Thanks again for your help. i really appreciate.
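The flip described above can be written as a tiny helper (the name is illustrative). Note that for a height-h canvas the last readable row is h - 1, so canvas row y maps to WebGL row h - 1 - y:

```javascript
// Convert a canvas-style row (origin at the top-left) into the row
// gl.readPixels expects (origin at the bottom-left).
function canvasYToGLY(canvasY, canvasHeight) {
  return canvasHeight - 1 - canvasY;
}
```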

  • http://twitter.com/dahnielson Anders Dahnielson

    precision float mediump;

    …should be…

    precision mediump float;

  • http://greggman.com greggman

    thank you

  • Winchestro

    I think I’m missing something fundamental here. How does uniform sampler2D u_image; manage to get into the fragment shader? This is the only place where something called “u_image” is mentioned in your entire code.

  • Winchestro

    Oh, never mind, I figured it out. It's a shader; of course it HAS to be unintuitive.^^ Anyway, great tutorial, sir. You are ungodly smart.

  • http://greggman.com greggman

    No you’re right. It’s not explained. Sorry

    There’s a bunch of defaults. Uniforms default to ‘0’ so u_image defaults to using texture unit 0. Texture unit 0 is also the default when starting WebGL so there’s no reason to call `gl.activeTexture()` to select a texture unit. So I skipped both of those steps. I should add a comment about that.

  • Winchestro

    thanks for taking the time to explain it!

  • Obi Wan Kenobi

    Hello, thanks for the tutorial. I'm new to WebGL and it helped me a lot! I was wondering, what should I do if I want the texture to have a stretch-to-fill behaviour? Thanks :)

  • http://greggman.com greggman

    In the sample above you’d just change the line

    setRectangle(gl, 0, 0, image.width, image.height);

    To whatever dimensions you want.

  • Obi Wan Kenobi

    Thank you, I realized soon after asking the question that it was fairly obvious, sorry for taking your time. Guess I should stop programming at 3 a.m. !

  • Diego

    Thanks for this tutorial; at last I understand WebGL.
    It worked OK with colors but now with textures I get:
    *** Error compiling shader '[object WebGLShader]': ERROR: 0:21: 'v_texCoord' : undeclared identifier
    ERROR: 0:21: 'a_texCoord' : undeclared identifier
    webgl-utils.js (line 54)
    Please help, I don't understand this error…

  • http://greggman.com greggman

    It sounds like you’re missing the lines at the top of your shaders that declare those variables. The shaders should look like this

    
    
    attribute vec2 a_position;
    attribute vec2 a_texCoord;
    uniform vec2 u_resolution;
    varying vec2 v_texCoord;
    
    void main() {
       // convert the rectangle from pixels to 0.0 to 1.0
      vec2 zeroToOne = a_position / u_resolution;
    
      // convert from 0->1 to 0->2
      vec2 zeroToTwo = zeroToOne * 2.0;
    
      // convert from 0->2 to -1->+1 (clipspace)
      vec2 clipSpace = zeroToTwo - 1.0;
      gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
    
      // pass the texCoord to the fragment shader
      // The GPU will interpolate this value between points.
      v_texCoord = a_texCoord;
    }
    
    
    
    precision mediump float;
    
    // our texture
    uniform sampler2D u_image;
    
    // the texCoords passed in from the vertex shader.
    varying vec2 v_texCoord;
    
    void main() {
      gl_FragColor = texture2D(u_image, v_texCoord);
    }
    
    
    
  • Diego

    With this fragment shader we can draw pictures but no longer plain-color rectangles, can't we?

  • http://greggman.com greggman

    In general yes. It only draws pictures. You can draw plain colors by making single pixel textures though.
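A sketch of that single-pixel-texture idea (the function name is mine; it assumes an existing WebGL context gl):

```javascript
// Sketch: a 1x1 texture of a solid color lets the same textured shader
// draw plain-color rectangles.
function makeSolidColorTexture(gl, r, g, b, a) {
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // A 1x1 RGBA texture; sampling anywhere in it returns this one color.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
                new Uint8Array([r, g, b, a]));
  return texture;
}
```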

  • Ippe

    Thanks for a great post. It’s a bit of a vague question, but could you give some tips about how to allow for zooming with mouse scrolling? I have a large image (~50Mpx), which I want to display at lower res, but be able to navigate the image, zooming in arbitrarily close using the mouse. Any help much appreciated! Thanks again.

  • http://greggman.com greggman

    Sounds like you’d need to break it into smaller images and different resolution like Google maps does?