I have seen various implementations of the Voronoi Diagram. Perhaps you’ve seen one without knowing what it was. It almost looks like random stained glass:
In mathematics, a Voronoi diagram is a partitioning of a plane into regions based on distance to points in a specific subset of the plane.
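The idea itself fits in a few lines: every point in the plane belongs to the region of the site it is closest to. Here is a minimal brute-force sketch of that nearest-site test (the demo uses a Voronoi library rather than this, so treat the names here as illustrative):

```javascript
// A few sample sites; each point in the plane belongs to the region
// of whichever site it is closest to. This brute-force check is the
// essence of a Voronoi diagram.
var sites = [{ x: 1, y: 1 }, { x: 8, y: 2 }, { x: 4, y: 9 }];

function nearestSite(x, y, sites) {
  var best = 0, bestDist = Infinity;
  for (var i = 0; i < sites.length; i++) {
    var dx = sites[i].x - x, dy = sites[i].y - y;
    var d = dx * dx + dy * dy; // squared distance is enough for comparison
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return best;
}

var region = nearestSite(2, 2, sites); // the point (2, 2) falls in site 0's cell
```

A real library replaces this per-point test with an algorithm that outputs the cell polygons directly, which is what the demo needs.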
It’s even possible to create a Voronoi diagram by hand, as eLVirus88 has documented.
I wanted to give it a try.
My idea is to chop up a video into fragmented parts (called cells) and put them into 3D space on a slightly different z-axis. Then, by moving the mouse, you would rotate the whole experience so you would see the cells in different depths.
I chose to use the `<canvas>` element, and put each of the cells on a different canvas on a different 3D plane through CSS.
The Voronoi library takes care of computing all the sites to cells and creating objects with the vertices and edges for us to work with.
Cells to Canvases
First we create the canvases to match the number of Voronoi cells. These will be rendered to the DOM. The canvases and their respective contexts will be saved to an array.
```javascript
var canv = document.createElement('canvas');
canv.id = 'mirror-' + i;
canv.width = canvasWidth;
canv.height = canvasHeight;

// Append to DOM
document.getElementById('container-mirrors').appendChild(canv);

// Push to array
canvasArray.push(canv);
contextArray.push(canv.getContext('2d'));
```
All of the canvases are now a copy of the video.
The desired effect is to show one cell per canvas. The Voronoi library provides us with a `compute` function. When we provide the sites along with the bounds, we get back a detailed object from which we can extract each cell's edges. These will be used to cut out each section using `globalCompositeOperation`.
```javascript
// Compute
diagram = voronoi.compute(sites, bounds);

// Find each cell and cut out its shape on its own canvas
for (i = 0; i < diagram.cells.length; i++) {
  var halfedges = diagram.cells[i].halfedges;
  var ctx = contextArray[i];

  // Keep only the pixels inside the cell polygon
  ctx.globalCompositeOperation = 'destination-in';
  ctx.beginPath();

  var start = halfedges[0].getStartpoint();
  ctx.moveTo(start.x, start.y);
  for (var j = 0; j < halfedges.length; j++) {
    var end = halfedges[j].getEndpoint();
    ctx.lineTo(end.x, end.y);
  }

  ctx.closePath();
  ctx.fill();
}
```
Displaying video on a canvas only takes a couple of lines of code. This will be executed on every `requestAnimationFrame`:

```javascript
v = document.getElementById('video');
ctx.drawImage(v, 0, 0, 960, 540);
```
It's also possible to use a video input source (like a webcam), but I didn't like the result as much for this demo. If you would like to know how to draw the webcam to a canvas using the `getUserMedia()` method, you can read about it here.
To optimise video drawing performance, skip a few frames in between `requestAnimationFrame` calls. Videos for the web are usually encoded with a frame rate no higher than 30 fps.
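One simple way to do that frame skipping is a counter in the draw loop; since `requestAnimationFrame` typically fires at around 60 fps, drawing every second frame still keeps up with a 30 fps video. A sketch, with the counter logic pulled into a testable helper (`DRAW_EVERY` and `shouldDraw` are names I've made up, not from the demo):

```javascript
// Draw only every Nth animation frame.
var DRAW_EVERY = 2;
var frameCount = 0;

function shouldDraw(frame, every) {
  return frame % every === 0;
}

// Browser loop (commented out so the sketch stays self-contained):
// function loop() {
//   if (shouldDraw(frameCount, DRAW_EVERY)) {
//     // redraw the video onto every cell canvas here
//   }
//   frameCount++;
//   requestAnimationFrame(loop);
// }
// requestAnimationFrame(loop);
```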
See the Pen Fragmented HTML5 Video - Demo 1 by virgilspruit (@Virgilspruit) on CodePen.
Demos like this are my favorite things to do. Seeing what's out there and adding your own layer of interactivity to it. I'm looking forward to seeing what other people will be doing with this nice visual algorithm.
See the Pen Fragmented HTML5 Video - Demo 2 by virgilspruit (@Virgilspruit) on CodePen.
See the Pen Fragmented HTML5 Video - Demo 3 by virgilspruit (@Virgilspruit) on CodePen.
View Demos GitHub Repo