Blog

WebGL Camp

I gave a talk at WebGL Camp in Switzerland; it was really nice to meet other people doing WebGL stuff. I saw amazing projects there, like Quake 4 in the browser and the Nokia demo, really interesting work. I talked about how shaders are generated on sketchfab.com, and you can find my slides here

For the list of all the talks, have a look at the website

I hope to meet WebGL campers again at a session in Europe.

Nouvelle Vague – behind the scene

Nouvelle Vague is a project I worked on for ultranoir. It offers a poetic and interactive real-time 3D experience based on Twitter. In a minimalist and surrealist world, tweets are carried by different flying objects from the borders of the scene to the center, where the ultranoir black statue stands (tweets are retrieved from your selected hashtag).

The flying objects are hot air balloons, biplanes, UFOs, zeppelins and balloons. Each has its own speed and specific paths. The user can select any of these ships to enjoy the pilot's view and explore the scene. In this post I will explain how we made it.

Click here for the real-time version (after the video intro). If your browser does not support WebGL, you can watch a video version on YouTube.

Scene

We wanted vehicles to come from the mountains/sky to the statue and leave a tweet there. We did not want to manage vehicle collisions, so after a while we decided to organize the scene and vehicle animations like the picture below.

The idea is to prevent vehicles from penetrating each other. For this we constrained each vehicle to a 'row' in which its animation is played (of course the animation must be set up to fit in that virtual row). Doing this we minimized collisions between vehicles, but we knew it would not be 100% perfect and sometimes you could see artifacts near the statue.

Vehicle animations were played in loop mode. To add a bit of randomness, I added a random delay at the beginning of each new loop so that vehicles do not become synchronized (eyes are very good at detecting those patterns).
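
For illustration, a minimal sketch of that restart-with-random-delay logic; the animation callbacks are made up, not actual osgjs calls:

// Replay a vehicle animation in a loop, waiting a random delay between
// iterations so vehicles don't stay synchronized.
function scheduleLoop(vehicle) {
    vehicle.animation.onFinish(function() {      // hypothetical callback
        var delaySeconds = Math.random() * 3.0;  // tune per vehicle
        setTimeout(function() {
            vehicle.animation.play();
        }, delaySeconds * 1000.0);
    });
}
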
Animations were made using Blender, and to make them work with osgjs I had to update it to support the keyframe containers from osg; it also means that the osgjs plugin for osg is now able to export osgAnimation data. Why use this workflow? In all my projects I use OpenSceneGraph as a Swiss Army knife, then I export data from osg to osgjs.

Drawing Text

The text and logo on the ground were displayed using a distance map. When you have a vector shape with closed contours, like text or a logo, it's more efficient to use a distance map instead of classical texture mapping. The advantage is that you can use a smaller texture and still get a better result than with a plain bitmap. To do this you convert your original texture into a new one (the distance map), then use a 'special' shader to display it in real time. Below you can see pictures from the Valve paper; both images are at the same resolution.

(Pictures from the Valve paper)

The only problem I had was with the big text/logo in the center of the scene: even with a distance map I had aliasing, because of the static 'edge size' in the shader.

float start = 0.5 - edgeSize;
float end = 0.5 + edgeSize;
float a = smoothstep(start, end, color);

To fix this, I adapted the 'edgeSize' depending on the camera position; it's more a hack to hide the aliasing than a real fix.
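
A rough sketch of the hack, with hypothetical accessors and made-up constants (not the actual project code):

// Widen the smoothstep window as the camera moves away, so the edge
// stays roughly a pixel wide and does not alias.
function updateEdgeSize(cameraPosition, logoCenter, edgeSizeUniform) {
    var dx = cameraPosition[0] - logoCenter[0];
    var dy = cameraPosition[1] - logoCenter[1];
    var dz = cameraPosition[2] - logoCenter[2];
    var dist = Math.sqrt(dx*dx + dy*dy + dz*dz);
    var edgeSize = Math.min(0.25, Math.max(0.01, dist * 0.001));
    edgeSizeUniform.set([edgeSize]);
}
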

Camera

We implemented two cameras: one at the center of the scene to watch the statue and tweets, and one in each vehicle. The camera in the vehicles was trickier because the vehicles come and go with the tweets. In the beginning I had a simple lookat camera that was located in the vehicle but looked at the statue. It worked but was not really interesting; we wanted to be inside the plane and see the loopings. To do that I changed the camera to an FPS one. To tune the cameras and let the artists configure them, I added offsets connected to HTML sliders.
Good, but vehicles came in and went back after bringing their tweets, so the problem was that we were seeing an empty screen (the mountains) when a vehicle returned to its original position. We resolved this by switching from the 'in vehicle' camera to the 'look at the statue from the vehicle' camera.
Finally, for the automatic mode, we improved the camera logic to select the best camera available. That meant checking the animation time of each vehicle and selecting a vehicle whose time is in a 'good' range; a sketch of this selection follows the list below. Of course we had to tune the range for each vehicle animation. You can see below the different events in the timeline of a vehicle.

Random delay: random time before the vehicle animation plays.
Leave tweet: time when the tweet box leaves the vehicle and the transition animation plays.
Camera cut: when the vehicle starts to go back, we cut to watching the statue and tweets.
Camera invalid: the vehicle can't be selected when the camera switches to a new one.
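
A minimal sketch of that selection logic (the accessor names and ranges are illustrative, not the production code):

// Pick the first vehicle whose animation time falls inside its 'good'
// camera window; return null to fall back to the statue camera.
function pickVehicleCamera(vehicles) {
    for (var i = 0; i < vehicles.length; i++) {
        var v = vehicles[i];
        var t = v.getAnimationTime();            // hypothetical accessor
        if (t >= v.cameraRange[0] && t <= v.cameraRange[1]) {
            return v.camera;
        }
    }
    return null;
}
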

Shadow

The ground was a plane, so I took advantage of this and used a flat textured quad that followed the position of the vehicle but stayed at z = 0. Shadow textures were generated by the artist, then converted to distance maps and finally used on the quads. We used distance maps on those textures because they gave more control, for example over the blur of the edges. It worked for most vehicles, except the plane, because of its animation (the shadow would not follow the plane's rotation). To fix this I used a matrix that projects the shape of the plane onto the ground. Using this method meant no soft edges for the shadow, plus some artifacts due to blending. The deadline made us postpone fixing that.
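
To illustrate the projection, here is a sketch of the textbook matrix that flattens geometry onto the z = 0 ground along a light direction (lx, ly, lz); it shows the idea, not the demo's exact matrix:

// Each vertex is pushed along the light direction until it reaches z = 0:
// x' = x - z*lx/lz, y' = y - z*ly/lz, z' = 0.
function groundShadowMatrix(lx, ly, lz) {
    return [ 1,      0,      0, 0,
             0,      1,      0, 0,
             -lx/lz, -ly/lz, 0, 0,
             0,      0,      0, 1 ]; // column-major; transpose if your convention differs
}
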

There is a shadow example in osgjs that demonstrates different shadow techniques.

Transitions

To move the tweets from the vehicles to the statue, we had to make a transition. I wanted to try something like dissolving the tweet box into a lot of smaller cubes and then moving them to the statue as if they were carried by the wind.
I first set up the effect on this page, then improved it with a fake wind like in demojs-fff. To finish, I added a simple fade out when the cubes are near the statue, and voilà.
The effect was not optimized: I used one 3D model per cube. It would be better to use pseudo-instanced cubes, or to pack all the cubes into one model and pass the transformations to the shader with attributes or uniforms. Again, time...

Clouds

I wanted to try volumetric clouds on this project, so I tried different methods:

  • Mega particles (YouTube video). Because of the shower door effect, I dropped this method after a few tries.
  • 3D volume textures. I started, but I needed more time to implement it. The idea was to generate a 3D texture based on a noise function, then draw slices in real time to represent the volume. I will try to release an example later.
  • Particle based. You draw different textured sprites with transparency; in this case you have to sort the sprites by distance from the camera and render with blending enabled (a sketch follows below). I used this method because of time constraints, and it worked well enough. In the screenshots below you can see some tests while tuning the cloud parameters.
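
Since blending only composites correctly back to front, the sprites must be re-sorted as the camera moves; a minimal sketch (the data layout is an assumption):

// Sort cloud sprites by squared distance to the camera, farthest first,
// so alpha blending composites correctly.
function sortSprites(sprites, eye) {
    function dist2(p) {
        var dx = p[0] - eye[0], dy = p[1] - eye[1], dz = p[2] - eye[2];
        return dx*dx + dy*dy + dz*dz;
    }
    sprites.sort(function(a, b) {
        return dist2(b.position) - dist2(a.position);
    });
}
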

Tools

One of the most important aspects of working on this project was how we set up and tuned effects. I wrote scripts to export 3D models and generate distance maps as usual, but the new tool I wrote for this project was automatic slider generation from shader parameters. To do this, I wrote functions that detect the variables and their types in the shaders and, from this information, create HTML slider elements that communicate directly with the shaders. To let artists tune effects and focus only on the desired result, without worrying about losing their work, I saved the values with localStorage. When the artists were happy with the result, they sent me the values by mail and I added them as 'default' values. This process could be improved in the future with undo and saved parameter sets, but even without that it was really convenient to let the artists work this way. You can see on the screenshot below the sliders used to fine-tune the rendering effects.
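
A rough sketch of the slider generator; the regex and the uniform setter are illustrative, and only float uniforms are handled here:

// Scan the shader source for float uniforms and build one slider per
// uniform; values are persisted in localStorage so artists keep their work.
function buildSliders(shaderSource, program, container) {
    var re = /uniform\s+float\s+(\w+)\s*;/g;
    var match;
    while ((match = re.exec(shaderSource)) !== null) {
        var name = match[1];
        var slider = document.createElement('input');
        slider.type = 'range';
        slider.min = 0; slider.max = 1; slider.step = 0.01;
        slider.value = localStorage.getItem(name) || 0.5;
        slider.oninput = (function(uniformName, el) {
            return function() {
                localStorage.setItem(uniformName, el.value);
                program.setFloat(uniformName, parseFloat(el.value)); // hypothetical setter
            };
        })(name, slider);
        container.appendChild(slider);
    }
}
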

You can try the developer version and play with the sliders

DemoJS

We had a meeting a few months ago, before the DemoJS event in Paris, to organize it. I worked on the intro announcing the event, starting 10 days before the deadline. Four of us made this intro: Guillaume Lecollinet, who helped with design and CSS; Ulrick for the music and Mestaty for the 3D models, both from the FRequency demo group; and I worked on the code. If you are interested in particles you really need to read this blog. This guy does awesome things.

Particles again

At the beginning I did not really know what I wanted to create. I wanted to work on particles, but with more complexity than my previous toy. In the end I made an intro using only particles; the consequence is that the entire intro uses a single shader. I will describe the techniques I used in the intro:

  • Verlet physics integration
  • Spawning particles
  • Distance map
  • Velocity field
  • Morphing of 3D models

Verlet Integration

Verlet integration, in a nutshell, is a numerical method used to integrate Newton's equations of motion. There is a good blog with examples of how to use it. In WebGL we can't render to floating-point textures. Actually there is an extension for that, but I wanted the intro to work in most browsers with WebGL, so I did not use it. The consequence is that particle coordinates have to be encoded in a specific format in RGBA pixels.
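
As a quick reference, the position Verlet step in its simplest form, per component:

// nextX = 2*currentX - previousX + accel*dt*dt
// (velocity is implicit in the difference between current and previous)
function verletStep(current, previous, accel, dt) {
    return 2.0 * current - previous + accel * dt * dt;
}
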

In my previous particle toy I used 16-bit fixed point to encode coordinates, but this time I wanted to improve it and try 24 bits for more precision; I also encoded more information in the pixels, like signed distance, particle life, or material id (picture above left). In WebGL there are no multiple render targets, so I had to draw the scene 3 times to compute the particle positions, once each for x, y and z, selecting the dimension with a uniform.
Finally, computing the 'next' frame (3 textures) required the 'current' frame (3 textures) and the 'previous' frame (3 textures), so in the end I needed 9 textures just to have the Verlet physics running, without even controlling the motion; for that I used other textures that I will describe later. Texture size: to not hurt my GPU too much, I fixed the texture size at 512×512, meaning 262144 particles.
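
For illustration, one way to pack a [0,1] coordinate into 24 bits spread over three bytes, and back (a CPU-side sketch of the encoding idea):

// Pack a [0,1] value into three bytes; one coordinate per texture.
function encode24(x) {
    var v = Math.min(Math.max(x, 0.0), 1.0) * 16777215.0; // 2^24 - 1
    var r = Math.floor(v / 65536.0);
    var g = Math.floor((v - r * 65536.0) / 256.0);
    var b = Math.floor(v - r * 65536.0 - g * 256.0);
    return [r, g, b];
}
function decode24(rgb) {
    return (rgb[0] * 65536.0 + rgb[1] * 256.0 + rgb[2]) / 16777215.0;
}
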

Spawning particles

To determine the life span and position of new particles, I used UV ranges to distribute the particles in space. It's not really elegant or practical for bigger projects/shapes. For example, the equalizer scene was done by allocating particles on the plane where the equalizers were: there is a range of 0.25 in 'u' per equalizer bar and I limited 'v' to 0.5. So u covers 0.25 × 4 equalizers = 1.0 and v is limited to 0.5, i.e. half of the 512×512 texture, which means 131072 particles allocated for the equalizers, while the other 131072 are used for the 3D models. Next time I would like to try a 'mesh emitter' or something more practical than doing it manually.

Distance Map

What is a distance map? You can read this paper from Valve that explains how it works.
A distance map is a really useful tool to control particles. In the intro I used a texture that encodes the distance map and the gradient (the vector that tells you which direction to take to reach the nearest point on the shape). For this I created a tool (DistanceMapGenerator), then computed the gradient from the distance map. Finally I built a texture that contains both pieces of information. During the position computation I take the signed distance at the particle's position to make the particles fit the shape I want, e.g.:


vec3 getDirection(vec3 pos) {
    vec4 d = texture2D(DistanceMap, vec2(pos.x, pos.z));
    vec2 grad = d.rg;
    vec3 dir = vec3(0.5 - grad[0], 0.125 * (0.5 - pos.y), 0.5 - grad[1]);
    return normalize(dir);
}
float getDistance(vec3 pos) {
    return texture2D(DistanceMap, vec2(pos.x, pos.z)).b;
}
// here I know at which distance my particle is from the nearest border
distance = getDistance(currentPosition) * weightDistanceMap;
// and here I know in which direction the nearest border is
direction = getDirection(currentPosition) * weightDistanceMap * 0.4;
// it's easy after that to use this direction to create a force and make
// the particle go toward the shape

This technique was used for most of the motions/shapes I wanted the particles to fit. I tried to manipulate particles manually, but it was too complex and I was not able to achieve what I wanted; distance maps are much easier.

Velocity field

To add some perturbation, like a 'procedural wind', I used the MathGL/UDAV tool. The idea was to find a nice formula that I could use in the shader and that produces pleasing motion, so I used UDAV to display the vector field of candidate formulas. Once I was happy with the vector field, I added some variation in real time depending on time. This tool was not really convenient, and maybe next time I will write something to help me with this. Once the formula was selected, I used a lookup to get the vector depending on the particle's position. It looks like this:


"vec3 getVelocityField(vec3 pos) {",
" float t = mod(time,15.0); //mod(time, 5.0);",
" float vx = 0.0+cos(0.5+2.0*(pos.x*pos.x*t));",
" float vy = cos(4.0*(pos.y*t+ seed*0.5)) + seed * sin(4.0*pos.x*t*t);",
" float vz = cos(pos.z*2.0*t);",
" vec3 vel = vec3( vx, vy, vz);",
" return normalize(vel);",
"}",

3D models

At the end of the intro I used morphing between different 3D models (the Firefox logo and the abstract model made of cubes). To use those models with particles I first had to convert them into a format suitable for the particle system, meaning textures that encode the models' vertex positions as RGB pixels. The particle system used 262k particles, but the models used at most 131k vertices (remember, 131k particles were allocated to the equalizers). So we had 131k particles to display, morph and animate our 3D models. The morphing between the different shapes is a lerp between positions (finalVertex = model0*t + model1*(1.0-t)). To add some perturbation to the motion, the 'fake wind' is still applied during the animation. If you want to check the tool that builds the vertex-to-texture format used by the particle system, look here; it's a plugin for OpenSceneGraph.

Music

The music was done by Ulrick from FRequency; they used their own tool to export the pattern events into a C++ header. I made a little script to convert the result into JSON, then injected the event data into timeline.js. Timeline.js was great, but I needed to patch it to support callbacks and to use an external time source, the one coming from the music.
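
A minimal sketch of driving a timeline from the music clock; the API names are assumptions, not the actual timeline.js patch:

// Use the audio time as the master clock and fire event callbacks
// as the song passes them.
function syncTimelineToMusic(timeline, audio, events) {
    var next = 0;
    (function tick() {
        var t = audio.currentTime;      // the external time source
        timeline.setTime(t);            // hypothetical patched API
        while (next < events.length && events[next].time <= t) {
            events[next].callback();
            next++;
        }
        requestAnimationFrame(tick);
    })();
}
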

Improvements

There is a lot of stuff I would have liked to do better, but 10 days was too short. So I dropped lighting on particles, shadows, a spawn mesh emitter, post-process effects and smoke simulation with SPH. Maybe the next time I play with particles I will be able to add some of those elements.

Links

Thanks to the people who helped make this WebGL intro, it was really fun. What I liked most was the good ambiance in the team, that was really cool. Thank you guys :)

GlobeTweeter – Experience

Over the last few months, I have been working on a WebGL demo for Firefox. The objective was to create a demo to showcase WebGL technology. I am currently working on a 3D framework called osgjs, so the application uses this JavaScript library. osgjs is a JavaScript implementation of OpenSceneGraph and helps to manage 3D scenes and WebGL state. You can get more information on the website.

We did different experiments before we ended up with GlobeTweeter; I kept some of them for history :)

Jurassic Park

I started off by creating a file system browser similar to the 3D file system seen in Jurassic Park. I had to figure out the type of camera best suited to such a system.

The idea is simple. Let's say a user selects item B. When selected, the camera moves from its current viewpoint to the chosen item. The position of the camera (C) stays in orbit relative to the selected item; basically I used a lookat matrix from the camera position (C) to the target item. To create the camera motion when moving from one item to another, I interpolated the target position (from A to B) and generated a rotation around this interpolated point (X) during the animation. I added some constraints, like the distance from the target point and some limits on the rotation, to keep the camera position in range (as if we were looking at the item in third person). Check out this experiment (use the 'del' key to go to the previous level).
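
A sketch of that motion, assuming an osg.Matrix-style lookat helper (the constraint handling is omitted):

// Interpolate the target from item A to item B, keep the camera on an
// orbit of radius r around the interpolated point X.
function cameraMatrix(t, A, B, r, angle) {
    var target = [ A[0] + (B[0] - A[0]) * t,
                   A[1] + (B[1] - A[1]) * t,
                   A[2] + (B[2] - A[2]) * t ];
    var eye = [ target[0] + r * Math.cos(angle),
                target[1] + r * Math.sin(angle),
                target[2] + r * 0.3 ];          // small elevation
    return osg.Matrix.makeLookAt(eye, target, [0, 0, 1]); // up is +z
}
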

Twitter

This idea ended up being too geeky, so we tried something more popular and with more hype around it. And thus was born the idea of displaying tweets in 3D.
A first try was to iterate on something like TweetDeck, but we wanted something more responsive and with eye-candy features... The first ugly experiment was to render tweets in a canvas and use them as a texture in 3D. We ended up dropping this idea and instead decided to show tweets geolocated on the earth.

The last idea we had is the current incarnation of GlobeTweeter. To make the globe I used 3 data files from Natural Earth Data.

As you can see, this data is flat and needs to be projected onto a sphere. Before projecting it, however, I had to tessellate the triangles in order to have enough vertices to project a clean shape onto the sphere. For this I created a tool called 'grid': it tessellates the input shape with a grid, a kind of boolean union operation.

Above on the left, you can see the white model, which is the original '110m admin 0 countries.shp'. The black model is the same model but tessellated a bit more to fit more closely on the sphere. On the right is the model (grid) used to tessellate the original '110m admin 0 countries.shp'. The idea is to add subdivisions on the height section of the model.

Once the data was subdivided enough, I created a tool to project each vertex onto a sphere using the standard WGS84 projection. You can see a WebGL version of the projected model by clicking on the picture.
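
For reference, the standard conversion from latitude/longitude to WGS84 cartesian coordinates looks like this; it mirrors what the tool does, not its exact code (the radius constants are the WGS84 ellipsoid values):

var WGS_84_RADIUS_EQUATOR = 6378137.0;
var WGS_84_RADIUS_POLAR = 6356752.3142;

// Convert latitude/longitude in degrees (plus height in meters)
// to x, y, z on the WGS84 ellipsoid.
function lltoXYZ(latDeg, lonDeg, height) {
    var lat = latDeg * Math.PI / 180.0;
    var lon = lonDeg * Math.PI / 180.0;
    var f = (WGS_84_RADIUS_EQUATOR - WGS_84_RADIUS_POLAR) / WGS_84_RADIUS_EQUATOR;
    var e2 = 2.0*f - f*f; // eccentricity squared
    var n = WGS_84_RADIUS_EQUATOR / Math.sqrt(1.0 - e2 * Math.sin(lat) * Math.sin(lat));
    return [ (n + height) * Math.cos(lat) * Math.cos(lon),
             (n + height) * Math.cos(lat) * Math.sin(lon),
             (n * (1.0 - e2) + height) * Math.sin(lat) ];
}
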

Once the data was ready, we selected a nice color for each model. In the demo I drew the globe in two passes: the first pass drew the back faces of '110m admin 0 countries' with a 'back color', and the second drew the front faces with a 'front color'. This was necessary to get the transparency of the globe right with the 'One Minus Src Alpha' blending mode.

The final result looks like this:

SceneGraph representation

Wave

To add some detail about Twitter activity, I set up a simple wave physics simulation that produces waves where tweets appear. The algorithm to produce the waves is explained here. To accomplish this I used two small hidden canvases with a size of 128×64; I used small canvases because the computation is done in JavaScript and can be expensive. The waves were updated every 1/30 of a second. The update function did the following operations (a sketch of the physics step follows the list):

  • Convert tweet locations into wave sources in the canvas.
  • Do the physics computation and store the result into the current canvas.
  • Upload the current canvas as a texture to use in the vertex shader.

The vertex shader uses this texture as a heightmap. To understand better how the heightmap works, you can see the original model without waves here.
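
The physics step is the familiar two-buffer height-field scheme; a minimal sketch of that kind of step (not necessarily the exact algorithm linked above), with the buffers as Float32Arrays:

// Each cell becomes the average of its four neighbors times two, minus
// its previous value, then damped; swap 'current' and 'previous' after.
function waveStep(current, previous, width, height, damping) {
    for (var y = 1; y < height - 1; y++) {
        for (var x = 1; x < width - 1; x++) {
            var i = y * width + x;
            var v = (current[i-1] + current[i+1] +
                     current[i-width] + current[i+width]) / 2.0 - previous[i];
            previous[i] = v * damping;
        }
    }
}
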

Vertex Shader


#ifdef GL_ES
precision highp float;
#endif
attribute vec3 Vertex;
attribute vec3 TexCoord0;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 NormalMatrix;
uniform float scale;
uniform sampler2D Texture0;
varying float height;
float maxHeight = 1400000.0;
void main(void) {
    vec4 color = texture2D( Texture0, TexCoord0.xy);
    height = color[0];
    vec3 normal = normalize(Vertex);
    vec3 normalTransformed = vec3(NormalMatrix * vec4(normal,0.0));
    float dotComputed = dot(normalTransformed, vec3(0,0,1));
    height *= max(0.0, dotComputed);
    vec4 vrt = vec4(Vertex +  normal * ( height * maxHeight * scale),1.0);
    gl_Position = ProjectionMatrix * ModelViewMatrix * vrt;
    height *= 5.0 * scale;
}

Fragment Shader


#ifdef GL_ES
precision highp float;
#endif
uniform vec4 fragColor;
varying float height;
void main(void) {
    gl_FragColor = fragColor * height;
}

This shader is applied to a regular grid model projected onto the sphere as explained before, but this time the grid has a higher resolution. Some of you will not see the relief of the waves because some WebGL implementations do not expose texture units in the vertex shader. As a workaround, I made another shader that does not move the vertices in the vertex shader; instead it only changes their color. You can read more about this issue on the ANGLE project.

Vertex Shader


#ifdef GL_ES
precision highp float;
#endif
attribute vec3 Vertex;
attribute vec3 TexCoord0;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 NormalMatrix;
varying float dotComputed;
varying vec2 TexCoordFragment;
void main(void) {
    TexCoordFragment = TexCoord0.xy;
    vec3 normal = normalize(Vertex);
    vec3 normalTransformed = vec3(NormalMatrix * vec4(normal,0.0));
    dotComputed = max(0.0, dot(normalTransformed, vec3(0,0,1)));
    if (dotComputed > 0.001) {
        dotComputed = 1.0;
    }
    gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(Vertex, 1);
}

Fragment Shader


#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D Texture0;
uniform vec4 fragColor;
uniform float scale;
varying float dotComputed;
varying vec2 TexCoordFragment;
void main(void) {
    vec4 color = texture2D( Texture0, TexCoordFragment.xy);
    gl_FragColor = fragColor * min(2.0*dotComputed * color.x, 0.999999);
}

Yes, it's a bit sad; I only discovered this issue recently... :( In conclusion, this effect works well, but it takes too much CPU in JavaScript/canvas; I should have tried a different, less CPU-intensive effect. Have a look at this video if you can't see the waves' relief.

Tweets

Tweets are displayed as the avatar image on a simple quad, oriented and positioned on the sphere from latitude/longitude. To add a nice border around the image, I used a blending operation in the canvas with the following image.
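
A sketch of that kind of canvas compositing; the exact composite operation used in the demo may differ ('destination-in' keeps the avatar pixels where the mask image is opaque):

// Draw the avatar, then keep only the pixels covered by the border mask.
var ctx = maskCanvas.getContext('2d');
ctx.drawImage(avatarImage, 0, 0, maskCanvas.width, maskCanvas.height);
ctx.globalCompositeOperation = 'destination-in';
ctx.drawImage(borderImage, 0, 0, maskCanvas.width, maskCanvas.height);
ctx.globalCompositeOperation = 'source-over'; // restore the default
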

Finally, to have a nice animation when a tweet appears and disappears, I used an EaseInQuad function for the color and an EaseOutElastic for the scale component.


EaseInQuad = function(t) { return t*t; };
EaseOutElastic = function(t) {
    return Math.pow(2.0, -10.0*t) *
           Math.sin((t - 0.3/4.0) * (2.0*Math.PI) / 0.3) + 1.0;
};

Zooming in to the earth made tweets really huge relative to the screen. To prevent this effect I introduced a scale factor that depends on the camera altitude. The full code to update a tweet looks like this:


update: function(node, nv) {
    var ratio = 0;
    var currentTime = nv.getFrameStamp().getSimulationTime();
    if (node.startTime === undefined) {
        node.startTime = currentTime;
        if (node.duration === undefined) {
            node.duration = 5.0;
        }
    }

    var dt = currentTime - node.startTime;
    if (dt > node.duration) {
        node.setNodeMask(0);
        return;
    }
    ratio = dt/node.duration;
    if (node.originalMatrix) {
        var scale;
        if (dt > 1.0) {
            scale = 1.0;
        } else {
            scale = osgAnimation.EaseOutElastic(dt);
        }

        scale *= (this.manipulator.height/this.WGS_84_RADIUS_EQUATOR);
        if (this.manipulator.height > this.limit) {
           var limitConst = 0.8/(2.5*this.WGS_84_RADIUS_EQUATOR-this.limit);
           var rr = 1.0 - (this.manipulator.height-this.limit) * limitConst;
           scale *= rr;
        }
        var scaleMatrix = osg.Matrix.makeScale(scale, scale, scale);
        node.setMatrix(osg.Matrix.mult(node.originalMatrix, scaleMatrix));
    }

    var value = (1.0 - osgAnimation.EaseInQuad(ratio));
    var uniform = node.uniform;
    var c = [value, value, value, value];
    uniform.set(c);
    node.traverse(nv);
}


NodeJS

The server responsible for sending tweets to the clients is written with Node.js. I used the twitter-node, socket.io and express modules to build it. The code is really short, so you can have a look at the server directly. You can get the server code here and improve it :) A big huggy to proppy, who bootstrapped the Node.js server \o/
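
The core of the idea fits in a few lines; here is a sketch with socket.io, leaving out the twitter-node wiring ('tweetStream' stands in for it):

var http = require('http');
var server = http.createServer();
var io = require('socket.io').listen(server);

// Broadcast every incoming tweet to all connected clients.
tweetStream.on('tweet', function(tweet) {   // hypothetical stream object
    io.sockets.emit('tweet', {
        user: tweet.user.screen_name,
        avatar: tweet.user.profile_image_url,
        geo: tweet.geo
    });
});

server.listen(8080);
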

Stats

The first graph shows the number of connections per day; there is a big spike when the news was broadcast. The second graph shows the number of connections per day at a smaller scale, and the last graph shows the cumulative number of connections.

Links

A big thanks to Paul Rouget from Mozilla, who made this demo possible, and to Guillaume Lecollinet, who designed the demo.

ParisJS – Inside GlobeTweeter

For ParisJS #6 I gave a talk about GlobeTweeter with an overview of the code inside the application. I made the slides with DZSlides from Paul Rouget; thanks for this tool :)