2012-09-04

Build With Chrome: GLego

I was driving too fast on the highway when the phone rang and I was asked by North Kingdom if I could help with a very specific task: to render as many Lego bricks as possible, using WebGL. I had no clue how to do it, but didn’t hesitate one second – of course, I’m in. Later that day, an idea landed and started to grow. After a lot of twists and turns, it ended up as Build With Chrome and a render technique I’ll try to explain here. But first…

GLOW as Boilerplate

The GLego framework uses GLOW as boilerplate (that's why this is posted on this blog) and it worked out pretty well. If you're new to GLOW, it is a low level WebGL wrapper that does pretty much nothing but wrap the (sometimes) daunting WebGL API into something more readable.

There were some kinks that needed ironing and some missing features, which were added during development. It's all in the GitHub repo now if you'd like to use it. There might still be a feature or two missing, but GLOW is pretty much "done" and ready to use in production.

Unity as Level Editor

Even though our geometry is very simple, we needed a good middle man between whatever 3D package the artists used and WebGL. We turned to the fantastic J3D-Unity exporter by Bartek Drozdz, modified it to our needs and were up and running. I highly recommend using this path, especially if you're doing complex levels and stuff. You should be able to use the free version of Unity, without any practical limitations.

The Build Renderer

We needed two renderers – one for the build mode and one for the browse mode. The build mode renderer is just an ordinary renderer, using geometry and lights like most renderers do. To avoid WebGL state switching and keep performance up, the drawing order is optimized so that objects sharing geometry type, color and shader are rendered in sequence. GLOW does the heavy lifting here, keeping track of all (or most) WebGL state. The render loop pretty much looks like…

// renderCue is a two dimensional array, indexed [ type ][ color ],
// holding the drawable objects
var type = renderCue.length,
  numColors = renderCue[ 0 ].length,
  color, objects, numObjects, c;

// draw all objects in type-color-order
while( type-- ) {
  color = numColors;
  while( color-- ) {
    objects = renderCue[ type ][ color ];
    numObjects = objects.length;
    while( numObjects-- ) {
      if( objects[ numObjects ].visible ) {
        objects[ numObjects ].draw();
      }
    }
  }
}

// draw all custom objects, using blend
glowContext.enableBlend( true );
c = customRenderCue.length;
while( c-- ) {
  if( customRenderCue[ c ].visible ) {
    customRenderCue[ c ].draw();
  }
}
glowContext.enableBlend( false );

// done!

There are no post-effects or anything, just plain simple… rendering. One of the real benefits of using GLOW is that your render loop becomes very readable.

The Browse Renderer – Data

The idea that landed that first day spawned from how Lego bricks are built around a very simple (genius) pattern. A brick has eight sides (only seven visible in Build with Chrome), a position, a size and a number of pegs, all varying in discrete steps – we call these steps Lego Units (LU). A brick can only be rotated in steps of 90 degrees, which we don't do in Build with Chrome – we simply switch width and depth. To describe a brick you simply need…

  • Position: X, Y, Z in LU
  • Size: Width, height, depth in LU
  • Color

Each building is placed on a baseplate, so the position of a single brick can be described in local baseplate coordinates, and because no baseplate is bigger than 256 LU (it's actually 32×32), each component of the position fits in an unsigned byte. The same goes for the size. Even better, there is a limited number of brick types, so width, height and depth can all be described with a single type index, which fits in a byte. Finally, because there is a limited number of Lego colors, the color can be described as an index that fits in a byte, too. In the end, you only need 5 bytes to describe one brick:

Position X, Y, Z in LU, Type and Color.
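
As a minimal sketch, packing and unpacking could look something like the code below. The typeTable and colorTable lookups are my own stand-ins, not the production code:

// pack one brick into the 5-byte format described above
function packBrick( brick ) {
  return new Uint8Array( [
    brick.x,     // 0..31 LU, local baseplate coordinates
    brick.y,
    brick.z,
    brick.type,  // index into a table of known width/height/depth combos
    brick.color  // index into the Lego color palette
  ] );
}

// unpack 5 bytes back into a brick description
function unpackBrick( bytes, typeTable, colorTable ) {
  return {
    x: bytes[ 0 ], y: bytes[ 1 ], z: bytes[ 2 ],
    size: typeTable[ bytes[ 3 ] ],   // { width, height, depth } in LU
    color: colorTable[ bytes[ 4 ] ]
  };
}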

There are two huge upsides to this extremely compressed format. First, you can package and compress the data as PNG images for fast data transfer (I think most buildings fit within a 50×50 pixel image).
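
Reading the bytes back out of a PNG can be done with a canvas. This is my own sketch, not the production loader, and it glosses over alpha premultiplication, which can distort channel values when the alpha byte isn't 255:

// decode raw brick bytes from a PNG by drawing it to a canvas and
// reading the pixels back (each RGBA pixel holds 4 of the bytes)
function decodeBrickBytes( image ) {
  var canvas = document.createElement( 'canvas' );
  canvas.width = image.width;
  canvas.height = image.height;
  var context = canvas.getContext( '2d' );
  context.drawImage( image, 0, 0 );
  // .data is a flat Uint8ClampedArray of RGBA values
  return context.getImageData( 0, 0, image.width, image.height ).data;
}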

Second, you can (with some effort, admittedly) convert this into a single WebGL vec4 attribute and generate the geometry inside a vertex shader, making it possible to render thousands of bricks in a single draw call.
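
A rough sketch of the idea, assuming a packing where type and color share the attribute's w component and a typeSizes uniform array holds the brick dimensions (both are my assumptions, not the actual shader):

// every vertex carries its template corner plus the per-brick vec4
var brickVertexShader = [
  "attribute vec3 corner;          // unit-cube corner of the brick template",
  "attribute vec4 brick;           // x, y, z in LU; w = type * 256.0 + color",
  "uniform mat4 viewProjection;",
  "uniform vec3 typeSizes[ 64 ];   // width/height/depth in LU, by type index",
  "varying float colorIndex;",
  "void main() {",
  "  float type = floor( brick.w / 256.0 );",
  "  colorIndex = brick.w - type * 256.0;",
  "  vec3 size = typeSizes[ int( type ) ];",
  "  gl_Position = viewProjection * vec4( brick.xyz + corner * size, 1.0 );",
  "}"
].join( "\n" );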

Browse Renderer – Deferred Rendering, Part 1

To add even more complexity, the initial designs included some depth of field (DoF) and vignetting, and there was talk about using screen space ambient occlusion (SSAO) to get some kind of shadowing. SSAO didn't make it in, due to lack of time and depth precision (more about that soon).

Early on we had to make a decision: to go with a deferred approach or not. The cons were quite few – the obvious one is the lack of anti-aliasing, possibly multiple render passes and, as it would turn out, some data problems. The pros include fewer lighting calculations and simplified shaders, meaning more speed, which was a high priority. We went deferred.

We managed to find the holy grail of deferred rendering in WebGL – how to make do with a single rendering pass without multiple render targets (MRT). The shaders simply output…

R = depth
G = (diffuse) color index
B = screen space normal X
A = screen space normal Y
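
A hedged sketch of a fragment shader with this layout. The varyings and the linear depth mapping are my assumptions, not the production code:

var deferredFragmentShader = [
  "precision mediump float;",
  "uniform float near, far;",
  "varying float viewDepth;    // -viewPosition.z, set by the vertex shader",
  "varying float colorIndex;   // 0..255 palette index",
  "varying vec3 viewNormal;",
  "void main() {",
  "  float depth = ( viewDepth - near ) / ( far - near );  // 0..1",
  "  gl_FragColor = vec4( depth,",
  "                       colorIndex / 255.0,",
  "                       viewNormal.x * 0.5 + 0.5,",
  "                       viewNormal.y * 0.5 + 0.5 );",
  "}"
].join( "\n" );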

This all works incredibly well with floating point textures (we could have gotten SSAO to work with this, had we gotten the production time) and works quite OK with normal unsigned byte textures. 8 bit precision on the depth doesn't work for SSAO but is good enough for DoF. The live version of Build with Chrome uses unsigned byte textures. This could be updated so machines with support for floating point textures get SSAO, for example.
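
Detecting that support at startup is straightforward; OES_texture_float is the relevant WebGL extension. A minimal sketch:

// pick float render targets when available, otherwise fall back to
// unsigned bytes (as the live version does)
var gl = canvas.getContext( 'webgl' ) ||
         canvas.getContext( 'experimental-webgl' );
var hasFloatTextures = gl.getExtension( 'OES_texture_float' ) !== null;
var bufferType = hasFloatTextures ? gl.FLOAT : gl.UNSIGNED_BYTE;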

The data problem mentioned above comes from having to deal with color as an index. All textures need to be converted to index textures, and all use the same palette. In the expand shaders (more about those soon) we simply use the index as a UV coordinate and sample the actual color from the palette texture. The palette texture is one pixel high and 256 pixels wide.
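
The core of that lookup, as a sketch (the buffer and uniform names are made up):

var colorExpandShader = [
  "precision mediump float;",
  "uniform sampler2D deferredBuffer;",
  "uniform sampler2D palette;   // 256x1 pixels, one Lego color each",
  "varying vec2 uv;",
  "void main() {",
  "  // the G channel stores the index as i / 255.0",
  "  float index = texture2D( deferredBuffer, uv ).g;",
  "  // re-center on the texel before sampling the 256 pixel wide palette",
  "  float u = ( index * 255.0 + 0.5 ) / 256.0;",
  "  gl_FragColor = texture2D( palette, vec2( u, 0.5 ) );",
  "}"
].join( "\n" );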

Browse Renderer – Deferred Rendering, Part 2

Because there was DoF in the design, the post shader, which is responsible for producing the final picture, has to sample a lot of other fragments for each fragment it puts out (the same goes for SSAO and FXAA, the anti-aliasing technique we use).

Because the deferred shaders put out such limited information, and this data needs processing before you can actually use it in a meaningful way, we invented something we call expand shaders. There are three expand shaders, each very simple in itself, that convert the deferred buffer into three buffers containing:

  • Camera relative position of a pixel
  • Normal of a pixel
  • Color of a pixel

Or at least in theory. In practice we didn't need all of this information and removed some of it to optimize. Again, this works amazingly well with floating point textures and OK with unsigned byte textures (the position buffer is very approximate, to say the least). These three buffers are then sent to the post shader, which does all the compositing and lighting.
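
As an illustration, the position expand shader could reconstruct a camera relative position from the stored depth roughly like this. The frustum uniform and the linear depth convention follow my earlier sketch, not the production code:

var positionExpandShader = [
  "precision mediump float;",
  "uniform sampler2D deferredBuffer;",
  "uniform float near, far;",
  "uniform vec2 frustumHalfSize;   // half width/height of the far plane",
  "varying vec2 uv;",
  "void main() {",
  "  float depth = texture2D( deferredBuffer, uv ).r;   // 0..1, near..far",
  "  float viewZ = near + depth * ( far - near );",
  "  // scale the ray through this pixel out to the reconstructed depth",
  "  vec2 xy = ( uv * 2.0 - 1.0 ) * frustumHalfSize * ( viewZ / far );",
  "  gl_FragColor = vec4( xy, -viewZ, 1.0 );",
  "}"
].join( "\n" );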

Browse Renderer – Deferred Rendering, Part 3

This will mainly be about the pegs, but first a little about light. We only use directional lights, and we solved lighting with a separate light pass. It's a simple shader that renders what could be described as a camera projected, lit ball to a texture (there's no geometry, but the result looks like a lit ball plus some). This texture then simply becomes a lookup table for the screen space normals that the deferred shader put out. This means we can have almost as many lights as we like, with very little per-pixel cost.
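
The per-pixel part of that is tiny. A sketch of the lookup inside the post shader (the texture and variable names are mine):

var lightLookupChunk = [
  "  // the BA channels hold the screen space normal in 0..1 range",
  "  vec2 normalXY = texture2D( deferredBuffer, uv ).ba;",
  "  // the pre-rendered lit ball acts as a lookup table for all lights",
  "  vec3 light = texture2D( litBall, normalXY ).rgb;",
  "  gl_FragColor = vec4( albedo * light, 1.0 );"
].join( "\n" );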

Ok, the pegs. From day one we knew that real geometry wasn't an option – it's simply too much to calculate and draw. So we pulled a trick that we came to call "camera mapping". Not sure it's the right terminology. Anyway, what we do is render a single, high resolution peg to a texture, using an orthographic camera. (Side note: we only render the screen space normals (the BA channels). The other two channels are used to create edges that mark the tiny gap between the bricks, but that's another story.)

To get the mapping right, you use a screen projected version of the UV coordinates. In Build with Chrome, this projection is actually done in JavaScript, as there are only four UV coordinates in the entire world.
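
A sketch of that projection, with transformVec4 as a hypothetical mat4-times-vec4 helper:

// project one world space UV corner with the camera's view-projection
// matrix, in JavaScript, once per frame
function projectUVCorner( corner, viewProjection ) {
  var p = transformVec4( viewProjection, [ corner.x, corner.y, corner.z, 1 ] );
  // perspective divide, then map clip space -1..1 to texture space 0..1
  return [ p[ 0 ] / p[ 3 ] * 0.5 + 0.5,
           p[ 1 ] / p[ 3 ] * 0.5 + 0.5 ];
}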

Now, that might sound simple, but when a brick has more than one peg (most of them do), you need to get the UV wrapping to work for you, as the top side is just two triangles no matter how many pegs it has. This took some time to figure out, but in the middle of the night it all came to me. What you need to do is "unwrap" it…

The resulting buffer looks really weird, but because the process is reversed on screen it all looks dandy in the end.

There are some real limitations to the camera mapping technique. First, you can't look towards the horizon, as the pegs have no height. Second, pegs near the edges of the screen become slightly distorted. This explains the tight perspective of the browse mode. I think we can push it and make a freer browse camera, but we decided to stay on the safe side for now. Hopefully we'll get the chance to improve this later on.

Wrap Up

I found two interesting/annoying bugs during the development:

1. If you have an FBO that uses a depth buffer, you also need to have a stencil buffer attached, or the depth buffer will fail on certain Macs (see the sketch after this list). There's probably no performance loss if stencil writes are disabled.
2. On some other Macs, using ints in shaders fails. You have to use floor( theFloatValue ) instead.
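
The workaround for the first bug, in raw WebGL calls, looks something like this:

// attach a combined depth+stencil renderbuffer instead of a depth-only one
var renderbuffer = gl.createRenderbuffer();
gl.bindRenderbuffer( gl.RENDERBUFFER, renderbuffer );
gl.renderbufferStorage( gl.RENDERBUFFER, gl.DEPTH_STENCIL, width, height );
gl.framebufferRenderbuffer( gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT,
                            gl.RENDERBUFFER, renderbuffer );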

This was (and hopefully continues to be) one of the most awesome projects I've been part of. Thanks to all the wonderful people who were involved!

For you geeks who are really interested in WebGL and would like to know more about the details, please don't hesitate to contact me on Twitter.