# Kinematics

Posted: November 26th, 2013 | Author: | Filed under: 3dprinting, clothing, design, jewelry, simulation, software | Tags: | 13 Comments »

Kinematics is a system for 4D printing that creates complex, foldable forms composed of articulated modules. The system provides a way to turn any three-dimensional shape into a flexible structure using 3D printing. Kinematics combines computational geometry techniques with rigid body physics and customization. Practically, Kinematics allows us to take large objects and compress them down for 3D printing through simulation. It also enables the production of intricately patterned wearables that conform flexibly to the body.

Today we are releasing a jewelry collection and an accompanying customization app built upon our Kinematics concept. We’re also releasing a free-to-use app for desktop 3D printers.

Kinematics is a branch of mechanics that describes the motion of objects, often described as the “geometry of motion.” We use the term Kinematics to allude to the core of the project, the use of simulation to model the movement of complex assemblages of jointed parts.

Kinematics produces designs composed of tens to thousands of unique components that interlock to construct dynamic, mechanical structures. Each component is rigid, but in aggregate they behave as a continuous fabric. Though made of many distinct pieces, these designs require no assembly. Instead the hinge mechanisms are 3D printed in-place and work straight out of the machine.

This project evolved out of a collaboration with Motorola’s Advanced Technology and Projects group which challenged us to create in-person customization experiences for low cost 3D printers. The genesis of the project is discussed at length in The Making of Kinematics.

### a tale of two apps

We are releasing two web-based applications: Kinematics and a simplified version called Kinematics @ Home which is completely free to use.

The Kinematics app allows for the creation of necklaces, bracelets and earrings. Users can sculpt the shape of their jewelry and control the density of the pattern. Designs created with Kinematics can be ordered in polished 3D-printed nylon in a variety of colors.

The Kinematics @ Home app is targeted at people who already have access to a 3D printer. It’s our first app that allows users to download an STL file for home printing. Enter your wrist size, style your bracelet and click print to receive a free STL file suitable for printing on a Makerbot or similar desktop printer.

kinematics@home bracelets printed on a makerbot

### Kinematics case study: making a dress

Concurrently with the development of the online applications, we’ve been working on more advanced software with broader practical applications. Kinematics allows us to design a shape and then fold it into a more compressed form for 3D printing. The items we’ve created so far are flexible, but rigid objects could be created by introducing a hinge joint that locks at a preferred angle. Here we present an example of how Kinematics can be used to create a flexible dress that can be printed in one piece.

The process begins with a 3D scan of the client. This produces an accurate 3D model of the body upon which we draw the form of the desired dress. For this example, the top of the dress conforms exactly to the torso, but the skirt has a larger silhouette, allowing for the dress to drape and flow as the wearer moves.

The surface of the sketched dress is then tessellated with a pattern of triangles. The size of the triangles can be customized by the designer to produce different aesthetic effects as well as different qualities of movement in the dress (the smaller the triangles, the more flexible the structure and the more fabric-like it behaves). Next we generate the Kinematics structure from the tessellation. Each triangle becomes a panel connected to its neighbors by hinges. The designer can apply different module styles to these panels to create further aesthetic effects.

Finally, we compress the design via simulation so it fits into a 3D printer. This means that an entire gown, much larger than the printer itself, can be produced in a single assembled piece. The simulation uses rigid body physics to accurately model the folding behavior of the design’s nearly 3,000 unique, interconnected parts and find a configuration that fits inside the volume of the printer.

### the collection

Each jewelry design is a complex assemblage of hinged, triangular parts that behave as a continuous fabric in aggregate. Kinematics jewelry conforms closely to the contours of the human body. This is 21st-century jewelry, designed and manufactured using techniques that did not exist just a few years ago.

Kinematics pieces come in four styles: smooth, angular, polygonal and tetrahedral. Each design takes its name from the module style and number of pieces in the design. For example, Tetra Kinematics 174-n is a tetrahedral style necklace composed of 174 unique modules.

kinematics necklaces with smooth, tetra and polygonal modules

We’ve added eighteen Kinematics designs to our shop, and a limited initial run of each is currently available for purchase. Kinematics jewelry is made of polished 3D printed nylon in a variety of colors. Necklace, earring and bracelet designs are available; the bracelets and necklaces are fastened simply and securely with hidden magnetic clasps. Prices for the collection range from $25 to $350, and most pieces cost less than $100.

# The Making of Kinematics

Posted: November 26th, 2013 | Author: | Filed under: 3dprinting, simulation, software | Tags: | 1 Comment »

Most of our projects start with a natural inspiration, but Kinematics emerged from a very different perspective. This project started with a technical problem: how can we create large objects quickly on a desktop 3D printer?

### Motorola

Last May, Motorola’s Advanced Technology and Projects division invited us to their headquarters in Sunnyvale to discuss a potential collaboration. They wanted us to develop “aesthetic generators” that related to their new phone, the Moto X. The catch was that these apps needed to generate customized objects that could be 3D printed in under an hour on equipment that was being driven around the country in the MAKEwithMOTO van. Despite what you may have been told, 3D printing is not a particularly fast process. In fact, the more three dimensional an object is, the slower it prints. One hour is a very challenging print time to meet for an object of any significant size.

The question we asked ourselves was how we could create something that was nearly flat, but still took advantage of the new possibilities in 3D printing. Our solution was to print a flat design that could be folded into another shape after printing. What we ended up creating was an NFC-enabled bracelet made of a foldable geometric pattern.

### Physical Prototyping

From the beginning, this project was focused on making the most of the limitations of low-cost 3D printers. Unlike most of our work, which occurs almost entirely digitally before we see a real object, this required extensive physical prototyping. We used our MakerBot Replicator (v1, dual extruder) throughout the prototyping period to develop and refine our concept.

Initially, we weren’t sure it was possible to design interlocking components that a desktop 3D printer could accurately reproduce while being small enough to be comfortably wearable. But looking around the 3D printing community site Thingiverse, we found a diverse array of flexible structures all designed to be 3D printed on low-cost machines. Starting from there, we knew that it could be done.

We began by modelling a hinged joint mechanism based on a double-ended cone pin and socket. Cone-based geometry works well because, with the correct angle, it is self supporting, an essential quality for low-cost home printing. We spent a lot of time tweaking tolerances to get the hinge just right: tight enough to not fall apart but loose enough to not fuse together during printing. We kept refining the joint until it was as small as it could be and still print reliably.

With the joint designed, we started out printing simple chains of components. These basic configurations were already fun to play with, but we suspected they could be much more compelling. Taking origami tessellations as inspiration, we started making triangulated, foldable surfaces. Beginning with a regular tiling of equilateral triangles, we modeled the first assemblages entirely by hand. By using hinges to connect together small triangular panels, we were able to create a faceted, fabric-like material.

However, even modelling a simple, repetitive pattern is time consuming and difficult. Before we could continue, we needed to automate the generation of the hinge mechanisms on arbitrarily complex patterns. With that done, we could start to design tools that would let anyone morph and shape a pattern to create their own fabric-like creation. Early experiments also tried different ways we could style the modules or incorporate the multi-material extrusion available on newer desktop printers.

The results were compelling. Not only were the pieces themselves addictive to play with, but they served as a case study in customization. Using the most inexpensive home printers, we could make complex, fully customized products in under an hour. However, as we worked on the project we realized the Kinematics system opened up a lot more possibilities.

### Tessellation to Kinematics

an early version of the tessellation app for Motorola

The original application for Motorola had many restrictions. In addition to the driving constraint of needing to quickly produce objects using a low-cost 3D printer, it also had to be used in-person in a van driving around the country. The geometry had to be limited to small objects that consistently printed well. The experience also had to be limited and highly directed. When someone used the app, their first step was walking up to a strange van filled with 3D printers, probably with no idea what was going on. So the app had to convey a lot of information: what someone was making, why they were making it, how to customize it, and how to get it printed. Because so much had to be presented in a short period of time, it was necessary to make the procedure very linear.

Freed from these constraints, we were able to develop a version of the app that was much more open-ended, both in terms of the geometry and the experience. We designed a new hinge for our 3D printing method of choice, SLS. This allowed us to create larger pieces and modules with more complex shapes. We also completely changed how the pieces were designed. Instead of morphing a fixed tessellation, users can manipulate parametric curves to create various shapes that are tessellated on the fly. They can also dramatically adjust the density of triangles, making the results more varied and freeform.

The most exciting thing about switching from extrusion-based to powder-based printing was that we could now design objects that were not self supporting. Though Kinematics was originally developed to print three-dimensional objects flat, allowing objects to be anywhere in space opened up new possibilities. The fabric-like quality of the designs we were producing got us thinking about making larger three-dimensional wearables like dresses. We realized that Kinematics had broader implications for printing arbitrary objects. We can take any shape and transform it into a flexible structure. These structures can then be digitally folded into more compressed shapes, enabling the construction of objects much larger than the 3D printer’s build volume.

a kinematisized stanford bunny

### Software Development

Kinematics is our first application that is written purely in JavaScript. Our previous web apps evolved from work we had created in Processing and contained large portions facilitated by processing.js to transition to the web. This made for a messy development process. Kinematics was developed from the ground up to be a browser-based WebGL application, and the code does not rely on any frameworks.

The project makes use of two libraries. One is glMatrix, which we use in all our projects for vector and matrix operations. It is a simple and fast library that does one thing and does it well. This is exactly the type of library I love to use: flexible enough to fit in any situation and not bloated with unnecessary functionality.

The other library we use is poly2tri, which produces constrained Delaunay triangulations. Kinematics requires well-shaped triangles inside of a designed boundary, so this is exactly the tool for the job. The JavaScript version is a port of the original C++ code. Unfortunately, I don’t think it is the most performant approach for modern JavaScript, but I’m not going to write my own constrained Delaunay library, and poly2tri does the job.

We’ve also started internally developing modular code components which we can apply to other projects. glShader takes care of loading and processing of GLSL shader programs. It asynchronously loads external shader files and extracts all the attributes and uniforms from the shaders, providing helper functions to simplify working with WebGL.

The project uses a JavaScript NURBS library that we’re developing. This allowed us to design curves for the boundaries of the pieces in Rhino and then import them into Kinematics, where users can interactively change them. We’re also expanding and refining tools we created before for loading and working with meshes in JavaScript.

The simulation portion of the Kinematics project happens outside the browser. We use openFrameworks and BulletPhysics to perform the compression of Kinematics models. BulletPhysics is an open-source physics engine used primarily for rigid body mechanics in games. It is a powerful and fast tool for physics simulation, supporting constraints, collisions, forces, and even soft bodies. There is a browser-based port of Bullet that we are in the process of incorporating as well.

a computationally compressed dress

### acknowledgements

Special thanks to Motorola ATAP for getting us started down this road (especially Daniel, Andrew, and Paul). Thank you Thingiverse for inspiring us and Makerbot for giving us the printer we used for prototyping. And thank you to Artec 3D for providing the 3D scan we used to develop the concept dress.

# Calculating the volume of a mesh

Posted: November 4th, 2013 | Author: | Filed under: computation, geometry, software | 1 Comment »

In order to generate the price of a custom design on the fly, we need to calculate the volume of the piece for 3D printing. By constantly updating the volume, the customer gets instant feedback on how their changes are affecting price. Calculating the volume of a mesh is a relatively simple and well-known problem, and I’ll go over the straightforward case as well as an optimization we’ve incorporated into our latest project.

### The Basics

The idea behind calculating the volume of a mesh is to calculate a volume for each triangle of the mesh and add them up. Now, a triangle itself does not have volume; it is two-dimensional. Instead we calculate the volume of the tetrahedron that goes from the origin (0,0,0) to the triangle.

There is a neat equation for calculating the volume of a tetrahedron. Given a triangle’s points v1, v2, v3, the tetrahedron’s volume is

$V = \frac16 (v_1 \times v_2) \cdot v_3$

Another way to express this is that if we have a 3×3 matrix where each row is one of our vertices, the volume is a sixth of the determinant. The division by six comes from the fact that the determinant is actually the volume of the parallelepiped formed by the three vectors, and you can fit 6 tetrahedra into the parallelepiped.

But wait, if I add up all these tetrahedrons don’t I get a mess of overlapping volumes that go to the origin? Yes, but the key thing is that these volumes are signed, so they can be negative depending on the vertex winding. If my points go one way (v1->v2->v3) I get a positive volume and if they go the other way (v1->v3->v2) I get a negative volume. Faces pointing out add to the total volume and faces pointing in subtract. What is left is only the volume inside my object. To get the total volume of a mesh, we go through each triangle, compute its signed volume, and add them up.

Below is some JavaScript code for computing the volume of a mesh stored in something like a vertex buffer object.
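A sketch of that routine, assuming the mesh is stored as a flat, non-indexed array of triangle vertices ([x1,y1,z1, x2,y2,z2, x3,y3,z3, …]; the function name is just for illustration):

```javascript
// Signed volume of a triangle mesh stored as a flat, non-indexed array
// where every nine floats form one triangle.
function meshVolume(vertices) {
  var volume = 0;
  for (var i = 0; i < vertices.length; i += 9) {
    var x1 = vertices[i],     y1 = vertices[i + 1], z1 = vertices[i + 2];
    var x2 = vertices[i + 3], y2 = vertices[i + 4], z2 = vertices[i + 5];
    var x3 = vertices[i + 6], y3 = vertices[i + 7], z3 = vertices[i + 8];
    // (v1 x v2) . v3: six times the signed volume of the tetrahedron
    // from the origin to this triangle; sign depends on winding
    volume += x1 * y2 * z3 - x2 * y1 * z3 + x3 * y1 * z2
            - x1 * y3 * z2 + x2 * y3 * z1 - x3 * y2 * z1;
  }
  return volume / 6;
}
```

For a watertight mesh with consistent outward winding, the signed contributions cancel outside the object and the sum is exactly the enclosed volume.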

### Volume of repeated elements

Now, onto the good stuff. What happens if you have an object that is made (at least in part) of an aggregate of identical but complex parts? I don’t mean booleaning together primitives; you could imagine something like a buckyball where each face is articulated with some kind of intricate shape. The brute force approach would be to move and rotate the shape to the proper position, then go through each triangle and calculate the volume. This means you have to calculate a transform on each of the points of your shape, and then go through each triangle. If your shape has 1000 triangles and you have 100 shapes, that ends up being a lot of calculation. We can drastically increase the efficiency of this by computing a “general volume” for the shape once, and applying our transforms only to that simplified representation. But what does this general volume look like?

### Rotation

The key idea behind this general volume is the fact that volume is rotation invariant. This is one of the basic results of differential geometry. It is intuitively obvious; no matter how I orient an object in space, its volume does not change. What is less intuitive is that the same thing holds true for the signed volume of open shapes. Mathematically, this can be seen easily by noting that the volume is the determinant of a matrix, and a rotation matrix has a determinant of 1. The determinant of one matrix multiplied by another is the product of their individual determinants. So I can rotate my primitive element however I want, and the volume stays the same. If I were only rotating my shape, I could calculate its volume once and multiply it by the number of shapes I have.
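A quick numerical sanity check of this invariance (the helper names here are illustrative, not code from the project):

```javascript
// Scalar triple product v1 . (v2 x v3), divided by 6: the signed volume
// of the tetrahedron from the origin through one triangle.
function signedVolume(v1, v2, v3) {
  return (v1[0] * (v2[1] * v3[2] - v2[2] * v3[1])
        + v1[1] * (v2[2] * v3[0] - v2[0] * v3[2])
        + v1[2] * (v2[0] * v3[1] - v2[1] * v3[0])) / 6;
}

// rotate a point about the z axis by angle a (radians)
function rotateZ(p, a) {
  var c = Math.cos(a), s = Math.sin(a);
  return [c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]];
}

var v1 = [1, 0, 0], v2 = [0, 1, 0], v3 = [0, 0, 1];
var before = signedVolume(v1, v2, v3);
var after = signedVolume(rotateZ(v1, 0.7), rotateZ(v2, 0.7), rotateZ(v3, 0.7));
// before and after agree to floating-point precision
```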

### Translation

That only leaves translation, or moving my shape around in space. We can look at this intuitively in a simplified 2D example with the area of a single line segment. The area of a triangle is one half base times height. If I translate my line segment in the x direction by some amount, I am adding that amount to my height. So the area of my translated line is:

$A_t = \frac12 b (h+x) = A + \frac12 bx$

I take my original area, and I add on some amount multiplied by my x translation. Though this is a very dumbed down example, we can do essentially the same thing for each axis (x,y,z) in our volume.

### Nitty gritty

To see how this idea applies to our volume calculation, we can look at the expanded equation for the volume of each triangle, where $v_1 = (x_1,y_1,z_1)$

$V = x_1y_2z_3-x_2y_1z_3+x_3y_1z_2-x_1y_3z_2+x_2y_3z_1-x_3y_2z_1$

That may look like a lot of equation, but if we look at each individual term, we notice that the volume is a sum of terms that each look like an x component times a y component times a z component. Isolating an arbitrary term and translating along one axis, we get something similar to our simple 2D example:

$V_t = x_1y_2(z_3+z_t) = V + x_1y_2z_t$

You might say that is only translating in one direction; what happens when you do an arbitrary translation? It turns out that, because the terms are organized in positive and negative pairs, every cross term beyond the ones in the one-direction example cancels out. So our general volume becomes a vector which sums up the terms for each axis. Each axis term is the sum of all the pairs of vertex coordinates that don’t include that axis:

$V_x = y_2z_3-y_1z_3+y_1z_2-y_3z_2+y_3z_1-y_2z_1$
$V_y = x_1z_3-x_2z_3+x_3z_2-x_1z_2+x_2z_1-x_3z_1$
$V_z = x_1y_2-x_2y_1+x_3y_1-x_1y_3+x_2y_3-x_3y_2$

Just like our regular volume, we can just add these up for each triangle in our shape. Our final volume calculation becomes:

$V_t(x_t,y_t,z_t) = V+V_xx_t+V_yy_t+V_zz_t$
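A sketch of this precomputation and the constant-time per-copy evaluation (assuming the same flat, non-indexed triangle array as before; function names are illustrative):

```javascript
// Precompute the "general volume" (V, Vx, Vy, Vz) of one shape so the
// volume of any translated (and rotated) copy can be evaluated in O(1).
function generalVolume(vertices) {
  var V = 0, Vx = 0, Vy = 0, Vz = 0;
  for (var i = 0; i < vertices.length; i += 9) {
    var x1 = vertices[i],     y1 = vertices[i + 1], z1 = vertices[i + 2];
    var x2 = vertices[i + 3], y2 = vertices[i + 4], z2 = vertices[i + 5];
    var x3 = vertices[i + 6], y3 = vertices[i + 7], z3 = vertices[i + 8];
    V  += x1*y2*z3 - x2*y1*z3 + x3*y1*z2 - x1*y3*z2 + x2*y3*z1 - x3*y2*z1;
    Vx += y2*z3 - y1*z3 + y1*z2 - y3*z2 + y3*z1 - y2*z1;
    Vy += x1*z3 - x2*z3 + x3*z2 - x1*z2 + x2*z1 - x3*z1;
    Vz += x1*y2 - x2*y1 + x3*y1 - x1*y3 + x2*y3 - x3*y2;
  }
  // every term carries the same 1/6 factor as the plain volume formula
  return { V: V / 6, Vx: Vx / 6, Vy: Vy / 6, Vz: Vz / 6 };
}

// volume of the shape translated by (xt, yt, zt)
function translatedVolume(g, xt, yt, zt) {
  return g.V + g.Vx * xt + g.Vy * yt + g.Vz * zt;
}
```

For 100 copies of a 1000-triangle shape, this replaces 100,000 per-triangle evaluations with one pass over the shape plus 100 dot products.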

This is great not only because it allows us to calculate the volume without going through each triangle, but also because we don’t even have to know how our shape is oriented! This leads to some strange facts: for example, if I randomly rotate each of my shapes but keep them in the same places, the total volume is unchanged.

# Folium – a new algorithmic jewelry series

Posted: September 30th, 2013 | Author: | Filed under: design, jewelry, nature, simulation, software | 1 Comment »

Folium is a generative jewelry series inspired by the algorithmic structures of plants and algae. Each Folium design is one of a kind, a specimen of a new hypothetical plant species. Free from the constraints of biology and physics, a Folium can exhibit forms and patterns impossible in nature.

Our first generation of Folium pieces is now available for purchase here:
Folium Pendants in stainless steel
Folium Pendants in 24kt gold plated stainless steel
Folium Earrings in stainless steel

This video documents our Folium growth process. (video not showing up? you can watch it here)

### Learning from nature

One of our primary interests at Nervous System is the systematic exploration of how pattern and form emerge in nature. We’re not interested in merely mimicking nature; instead we try to learn from it, co-opting its strategies of growth. The resulting mathematical models define broader principles that describe the dynamics of many systems.

similar patterns are exhibited by street grids (London), leaf veins, cracking patterns, and river deltas (Lena Delta)

Through code and design, we explore the question of how patterns emerge in nature. How can we use these same rules of growth for design? Digital manufacturing frees us from the rigid uniformity of mass production and nature suggests a new approach to manufacturing that produces diverse results.

### Morphological Diversity

the dissected leaf of Malva moschata

the form of Chondrus crispus seaweed (photo by Andrea Ottesen)

Folium is the result of a multistage digital growth process created by Nervous System based on L-systems and spatial colonization algorithms. Our system yields diverse results both in overall shape and texture. The variably branched forms of the generated Folia range from round to tree-like. Some recall the dissected forms of maple leaves while others can be likened more to the dichotomously branched forms of Chondrus crispus seaweed. Complex network patterns populate the interior of each Folium in several distinct styles that suggest leaf venation, city street grids, braided rivers, or other branched, anastomosed reticulations. The exterior boundaries influence the interior networks as they expand to fill the contours of the space available. Each specimen demonstrates a unique and dynamic interplay between its outer and inner growth systems with the result that no two shapes or patterns are alike.

examples of the range of interior network patterns

examples of the range of exterior shapes

### L-systems + space colonization: simulating plant growth

Our system, written in the open-source programming environment Processing, is based on two algorithms developed to model plant forms. The first and oldest is L-systems. L-systems were originally created by botanist Aristid Lindenmayer in 1968 to describe the morphology of various plants and algae. They are descriptive rather than emergent systems, meaning they describe what occurs rather than how it occurs. In general, L-systems are used to model recursive branching structures, like those seen in trees. We use a non-deterministic L-system to define the shape of each Folium. Each growth draws new parameters that vary the detail and shape of a branching skeleton. This skeleton is then skinned with a smooth, organic surface.
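As a rough illustration of the idea (not our actual Folium code), a non-deterministic L-system can be as simple as string rewriting where each symbol has several possible replacements and one is chosen at random:

```javascript
// Toy non-deterministic L-system. 'F' is a branch segment; '[' and ']'
// push and pop a branching point in the usual turtle-graphics reading.
// Symbols with no rule are copied through unchanged.
function lsystem(axiom, rules, iterations, random) {
  var s = axiom;
  for (var n = 0; n < iterations; n++) {
    var next = '';
    for (var i = 0; i < s.length; i++) {
      var c = s.charAt(i);
      var options = rules[c];
      // non-determinism: pick one of the candidate rewrites at random
      next += options ? options[Math.floor(random() * options.length)] : c;
    }
    s = next;
  }
  return s;
}

// two competing rewrites for 'F': branch dichotomously, or just elongate
var grown = lsystem('F', { 'F': ['F[+F][-F]', 'FF'] }, 4, Math.random);
```

Interpreting the resulting string with a turtle (draw on `F`, turn on `+`/`-`, branch on `[`/`]`) yields the familiar dichotomously branched skeletons.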

###### dichotomously branched ferns like this are easily described by l-systems

The interior network pattern is generated with a more modern algorithm now known as space colonization, which was first developed by Adam Runions of the Algorithmic Botany Group in 2005. The system was originally inspired by the auxin flux canalization theory of leaf venation, but has since been expanded to describe other space-filling, hierarchical structures such as trees. This model starts with a set of attraction points distributed throughout space. Growth starts at the root and proceeds toward the attraction points affecting it, with each attraction point’s influence limited to its close neighbors. This process of attraction and growth repeats until all space is evenly filled. Our system explores numerous parameters and modifications of this algorithm to generate varied and distinct, often unnatural results.
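A bare-bones sketch of one attraction-and-growth iteration, in 2D (illustrative only; the parameter names and data layout are assumptions, not our Folium implementation):

```javascript
// One step of space colonization: each attractor pulls on its nearest
// node within `influence`; nodes with attractors grow a new node of
// length `step` toward the average pull; attractors within `kill` of any
// node are removed. Mutates `nodes`, returns the surviving attractors.
function colonize(nodes, attractors, step, influence, kill) {
  var pulls = nodes.map(function () { return [0, 0]; });
  var counts = nodes.map(function () { return 0; });
  attractors.forEach(function (a) {
    var best = -1, bestD = influence;
    nodes.forEach(function (n, i) {
      var d = Math.hypot(a[0] - n[0], a[1] - n[1]);
      if (d < bestD) { bestD = d; best = i; }
    });
    if (best >= 0) {
      pulls[best][0] += (a[0] - nodes[best][0]) / bestD;
      pulls[best][1] += (a[1] - nodes[best][1]) / bestD;
      counts[best]++;
    }
  });
  nodes.forEach(function (n, i) {
    if (counts[i] > 0) {
      var len = Math.hypot(pulls[i][0], pulls[i][1]);
      nodes.push([n[0] + step * pulls[i][0] / len,
                  n[1] + step * pulls[i][1] / len]);
    }
  });
  return attractors.filter(function (a) {
    return nodes.every(function (n) {
      return Math.hypot(a[0] - n[0], a[1] - n[1]) >= kill;
    });
  });
}
```

Iterating this until no attractors remain fills the region with a branched, space-filling network; varying `influence` and `kill` shifts the texture from open and tree-like to dense venation.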

For more information about our work with this algorithm please see this blog post: http://n-e-r-v-o-u-s.com/blog/?p=1218

### About the jewelry

Folia are available as necklaces and earrings. Each piece is photochemically etched from a thin sheet of stainless steel and measures approximately 2 x 2 inches. The necklaces come with 16-18” sterling silver or gold-filled chains, and the earrings hang from hypo-allergenic surgical steel earwires. Since every piece in the collection is one of a kind, each receives its own unique identifying number and is individually photographed.

Folium pendants in 24kt gold plated stainless steel – click here to shop

Folium Earrings in stainless steel – click here to shop

Folium Pendant in stainless steel – click here to shop

# OBJExport library – export color meshes from Processing

Posted: March 13th, 2013 | Author: | Filed under: software | Tags: , | 1 Comment »

We’ve just released a new Processing 2.0 library for exporting OBJ and X3D mesh files. And it supports color meshes! Now you can export color models for 3D printing with the same commands you use to draw. Get started here: OBJExport library page.

The library works like any other PGraphics renderer, such as the PDF library. Simply call

createGraphics(width, height, "nervoussystem.obj.OBJExport", filename)

to get a PGraphics for OBJ export, then use regular Processing drawing commands.

I’ve also started to use GitHub to manage the code for this and other projects. You can find the GitHub page for the library here. This page can be used to peruse the source, fork the project or report bugs.

This library is a re-release of an old library I made years ago. I had sort of forgotten about it until I got an email last week from Michael Zoellner of the Interactive Design program at Hof University. Apparently, people are still using it! He wanted to update the library to be compatible with the new Processing release.

At the same time, we’ve started to do some work with color 3D printing. However, to get printable models we had to go through some tedious processing in MeshLab. So, I took this opportunity to overhaul the library, adding new features like color exporting and fixing some old bugs.

We’re excited to see what the community does with the library. We’re using it to make color 3D prints; what are you going to do? Try it out and send any feedback to jesse@n-e-r-v-o-u-s.com

# Surface meshing in the browser

Posted: January 30th, 2013 | Author: | Filed under: software | 1 Comment »

```glsl
precision mediump float;

varying vec3 vTransformedNormal;
varying vec4 vPosition;

uniform vec3 uMaterialDiffuseColor;

void main(void) {
    vec3 materialDiffuseColor = uMaterialDiffuseColor;
    vec3 ambientLightWeighting = vec3(.2, .2, .2);
    vec3 directionalColor = vec3(.8, .8, .8);
    vec3 normal = normalize(vTransformedNormal);
    vec3 uLightingDirection = vec3(-0.57, -0.57, -0.57);
    float diffuseLightBrightness = abs(dot(normal, uLightingDirection));
    vec3 diffuseLightWeighting = diffuseLightBrightness * directionalColor;
    vec3 r = -reflect(uLightingDirection, normal);
    r = normalize(r);
    vec3 v = -vPosition.xyz;
    v = normalize(v);
    vec4 spec = vec4(.5, .5, .5, 1.0) * pow(max(0.0, dot(r, v)), 16.0);
    gl_FragColor = vec4(ambientLightWeighting * materialDiffuseColor
                      + materialDiffuseColor * diffuseLightWeighting,
                      1.0) + spec;
    // gl_FragColor = vec4(0,0,0,1.0);
}
```

```glsl
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
uniform vec3 uAmbientLightingColor;
uniform vec3 uDirectionalDiffuseColor;

varying vec3 vTransformedNormal;
varying vec4 vPosition;

void main(void) {
    gl_PointSize = 3.0;
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
    vTransformedNormal = uNMatrix * aVertexNormal;
}
```

var canvas;
var mouseX,mouseY,pmouseX,pmouseY,isMouseDown,mouseDragging;
var ctx;
var paused = false;
function init() {
canvas = document.getElementById("test");
depth = canvas.width/8.0;
//ctx = canvas.getContext("2d");
initGL(canvas);
initShaders();
initBuffers();
canvas.addEventListener("mouseup",mouseUpMan,false);
canvas.addEventListener("mousemove",mouseMoveMan,false);
canvas.addEventListener("mousedown",mouseDownMan,false);
canvas.addEventListener("contextmenu",function(event) {event.preventDefault(); event.stopPropagation();},false);
NScamera.distance = 500;
vec3.set(NScamera.center, canvas.width/2, canvas.height/2,20);
canvas.addEventListener("keydown",keyPress,false);
initPts();
initialize();
step();
gl.enable(gl.DEPTH_TEST);
}
var ptGrid;
var density = 10;
var depth = 0;
function initPts() {
pts = new Array();
var weightsX = [Math.random()*80+20,Math.random()*70+15,Math.random()*65+5,Math.random()*60+5,Math.random()*45+5,Math.random()*25+5];
var weightsY = [Math.random()*80+20,Math.random()*70+15,Math.random()*65+5,Math.random()*60+5,Math.random()*45+5,Math.random()*25+5];
var totalWeight = 0;
var len = weightsX.length;
for(var i=0;i< len;++i) {
totalWeight += weightsX[i] + weightsY[i];
}
vec3.set(NScamera.center, canvas.width/2, canvas.height/2,totalWeight/2.0);
var gw = Math.ceil(canvas.width/density);
var gh = Math.ceil(canvas.height/density);
var gd = Math.ceil(totalWeight/density);
var gwh = gw*gh;
ptGrid = new Array(gw*gh*gd);
for(var i=0;i< ptGrid.length ;++i) ptGrid[i] = new Array();
var tries = 0;
var x = 0, y = 0;
var p;
var dx = vec3.create();
var dy = vec3.create();
while(tries < 2000) {
x = Math.random()*canvas.width;
y = Math.random()*canvas.width;
if((x-canvas.width/2.0)*(x-canvas.width/2.0)+(y-canvas.height/2.0)*(y-canvas.height/2.0) < 250*250) {
z = 0;
for(var i=0;i< len ;++i) {
z += (Math.cos(x/canvas.width*Math.PI*(2+i*1.5))+1)*0.5*weightsX[i];
z += (Math.sin(y/canvas.height*Math.PI*(2+i*1.5))+1)*0.5*weightsY[i];
}
var xi=Math.floor(x/density);
var yi=Math.floor(y/density);
var zi=Math.floor(z/density);
var hit = false;
for(var i=Math.max(0,xi-1);i<=Math.min(gw-1,xi+1);++i) {
for(var j=Math.max(0,yi-1);j<=Math.min(gh-1,yi+1);++j) {
for(var k=Math.max(0,zi-1);k<=Math.min(gd-1,zi+1);++k) {
for(var q=0;q < ptGrid[k*gwh+j*gw+i].length;++q) {
p = ptGrid[k*gwh+j*gw+i][q];
if((p.pt[0]-x)*(p.pt[0]-x)+(p.pt[1]-y)*(p.pt[1]-y) < density*density) hit = true;
}
}
}
}
if(!hit) {
tries = 0;
var pt = new Vertex();
vec3.set(pt.pt,x,y,z);//(x*x+y*y)/10000.0
ptPos[pts.length*3] = pt.pt[0];
ptPos[pts.length*3+1] = pt.pt[1];
ptPos[pts.length*3+2] = pt.pt[2];
ptPosBuff.numItems++;
pts.push(pt);
ptGrid[zi*gwh+yi*gw+xi].push(pt);
}
tries++;
}
}
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, ptPos, gl.STATIC_DRAW);
}
var pivotEdge = null;
function keyPress() {
reset();
}
function reset() {
ptPosBuff.numItems = 0;
initPts();
initialize();
vertexPosBuff.numItems = 0;
}
var normalMatrix = mat3.create();
function draw() {
NScamera.step();
NScamera.velocityY = (.005+NScamera.velocityY*3)/4.0;
NScamera.velocityX = (.003+NScamera.velocityX*3)/4.0;
gl.useProgram(shaderProgram);
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.perspective(pMatrix,45, gl.viewportWidth / gl.viewportHeight, 0.1, 10000.0);
mat4.identity(mvMatrix);
NScamera.feed(mvMatrix);
mat4.toInverseMat3(mvMatrix,normalMatrix);
mat3.transpose(normalMatrix,normalMatrix);
gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, false, normalMatrix);
gl.uniform3f(shaderProgram.materialColorUniform,0.0,0.0,0.0);
gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
gl.enableVertexAttribArray(shaderProgram.vertexNormalAttribute);
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, ptPosBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, vertexNormBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.POINTS,0,ptPosBuff.numItems);
var norm = vec3.create();
for(var i=vertexPosBuff.numItems/3;i < triangles.length;++i) {
planeNormal(norm,triangles[i][0].pt,triangles[i][1].pt,triangles[i][2].pt);
vec3.normalize(norm,norm);
for(var j=0;j < 3;++j) {
vertexPos[vertexPosBuff.numItems*3] = triangles[i][j].pt[0];
vertexPos[vertexPosBuff.numItems*3+1] = triangles[i][j].pt[1];
vertexPos[vertexPosBuff.numItems*3+2] = triangles[i][j].pt[2];
vertexNorm[vertexPosBuff.numItems*3] = norm[0];
vertexNorm[vertexPosBuff.numItems*3+1] = norm[1];
vertexNorm[vertexPosBuff.numItems*3+2] = norm[2];
vertexPosBuff.numItems++;
}
}
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexPos, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, vertexPosBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexNorm, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, vertexNormBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.uniform3f(shaderProgram.materialColorUniform,0.2,1.0,1.0);
gl.drawArrays(gl.TRIANGLES,0,vertexPosBuff.numItems);
}
(function() {
var requestAnimationFrame = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame;
window.requestAnimationFrame = requestAnimationFrame;
}) ();
function step() {
window.requestAnimationFrame(step);
//just one boundary for now
if(!paused) {
for(var i=0;i < boundaries.length;++i) {
var bound = boundaries[i].e;
//walk the boundary until we find an active edge or wrap back around to the start
var curr = bound.next;
while(curr.active == 0 && curr != bound) {
curr = curr.next;
}
if(curr.active == 1) {
pivot(curr);
pivotEdge = curr;
}
}
}
draw();
}
var vertexPos;
var vertexPosLen = 0;
var vertexNorm;
var ptPos;
var ptPosBuff;
var vertexPosBuff;
var vertexNormBuff;
function initBuffers() {
ptPos = new Float32Array(3*10000);
ptPosBuff = gl.createBuffer();
vertexPos = new Float32Array(3*3*10000);
vertexPosBuff = gl.createBuffer();
vertexNorm = new Float32Array(3*3*10000);
vertexNormBuff = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexPos, gl.DYNAMIC_DRAW);
vertexPosBuff.itemSize = 3;
vertexPosBuff.numItems = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexNorm, gl.DYNAMIC_DRAW);
vertexNormBuff.itemSize = 3;
vertexNormBuff.numItems = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, ptPos, gl.DYNAMIC_DRAW);
ptPosBuff.itemSize = 3;
ptPosBuff.numItems = 0;
}
var shaderProgram;
function initShaders() {
fragmentShader = getShader(gl, "shader-fs");
vertexShader = getShader(gl, "shader-vs");
shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertexShader);
gl.attachShader(shaderProgram, fragmentShader);
gl.linkProgram(shaderProgram);
if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
alert("Could not initialise shaders");
}
gl.useProgram(shaderProgram);
shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
//gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
shaderProgram.vertexNormalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal");
//gl.enableVertexAttribArray(shaderProgram.vertexNormalAttribute);
shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
shaderProgram.nMatrixUniform = gl.getUniformLocation(shaderProgram, "uNMatrix");
shaderProgram.materialColorUniform = gl.getUniformLocation(shaderProgram, "uMaterialDiffuseColor");
}
function mouseDraggedMan(event) {
NScamera.mouseDragged(mouseX-pmouseX,-(mouseY-pmouseY), mouseX, mouseY,mouseButton);
}
function mouseUpMan(event) {
isMouseDown = false;
}
function mouseMoveMan(event) {
pmouseX = mouseX;
pmouseY = mouseY;
mouseX = event.clientX;
mouseY = event.clientY;
if(isMouseDown) {
mouseDragging = true;
mouseDraggedMan(event);
}
}
function mouseDownMan(event) {
isMouseDown = true;
mouseDragging = false;
mouseButton = event.which;
}
mat4.toInverseMat3=function(a,b){var c=a[0],d=a[1],e=a[2],g=a[4],f=a[5],h=a[6],i=a[8],j=a[9],k=a[10],l=k*f-h*j,o=-k*g+h*i,m=j*g-f*i,n=c*l+d*o+e*m;if(!n)return null;n=1/n;b||(b=mat3.create());b[0]=l*n;b[1]=(-k*d+e*j)*n;b[2]=(h*d-e*f)*n;b[3]=o*n;b[4]=(k*c-e*i)*n;b[5]=(-h*c+e*g)*n;b[6]=m*n;b[7]=(-j*c+d*i)*n;b[8]=(f*c-d*g)*n;return b};

init();



A common problem we come across is needing to make meshes of complex surfaces. There are many approaches to this, but regardless of the method, it is often quite difficult. Especially when doing simulations on surfaces, you need a well-behaved triangle mesh, that is, one with a controllable density of elements and nicely shaped triangles. One way to get there is to generate a set of points on the surface and then triangulate them. But triangulating surfaces embedded in 3-space is no simple task. Recently, I came across an interesting approach to this problem that works especially well with generated point clouds: the Ball-Pivoting Algorithm. I implemented it in JavaScript (you should see it running above).

Let’s start with the basic idea of triangulation in 2D. A triangulation of a set of points is just a set of triangles that covers all those points without overlapping.

Now not all triangulations are created equal. We want to make triangulations whose triangles are nicely shaped, as close to equilateral as possible. This leads to better numerical stability and convergence in simulations.

This leads to something called the Delaunay triangulation. This is a really neat triangulation defined such that the circumcircle of any triangle in the triangulation contains no other points. The circumcircle is the unique circle passing through all three vertices of a triangle. One thing that’s not entirely obvious is that this triangulation always exists. And the great thing about this triangulation is that it maximizes the minimum angle of the triangles. This means it gives us the best-shaped triangles we can get with the points we have.
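To make the circumcircle condition concrete, here is a small sketch (my own helpers, not part of the demo code above) of computing a 2D circumcircle and using it for the Delaunay in-circle test:

```javascript
// Circumcircle of a 2D triangle: the unique circle through all three vertices.
// Points are [x, y] arrays.
function circumcircle(a, b, c) {
  var d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]));
  var a2 = a[0] * a[0] + a[1] * a[1];
  var b2 = b[0] * b[0] + b[1] * b[1];
  var c2 = c[0] * c[0] + c[1] * c[1];
  var x = (a2 * (b[1] - c[1]) + b2 * (c[1] - a[1]) + c2 * (a[1] - b[1])) / d;
  var y = (a2 * (c[0] - b[0]) + b2 * (a[0] - c[0]) + c2 * (b[0] - a[0])) / d;
  return { x: x, y: y, r: Math.hypot(a[0] - x, a[1] - y) };
}

// The Delaunay condition fails if any other point lies strictly inside
// a triangle's circumcircle.
function violatesDelaunay(a, b, c, p) {
  var cc = circumcircle(a, b, c);
  return Math.hypot(p[0] - cc.x, p[1] - cc.y) < cc.r;
}
```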

The problem comes when we make the jump from 2D to 3D. We can generalize the idea of Delaunay triangulations to 3D, giving us tetrahedra instead of triangles and circumspheres instead of circumcircles. However, what we want is a surface of triangles embedded in 3-space, not a 3-dimensional solid. One way to get around this is called alpha shapes: a subset of the 3D Delaunay triangulation that includes only faces of a certain size. Alpha shapes can create good surface triangulations; however, they require computing the full 3D Delaunay triangulation, which is complex to compute and implement as well as numerically unstable.

So in comes the Ball-Pivoting Algorithm. This algorithm is nice because it works in linear time for points of a constant density, and it is exceptionally cute conceptually. The idea is that you have a ball of a certain radius sitting in a triangle. The ball rolls over an edge of the triangle and keeps rolling until it hits another point. That edge and the point it hits make a new triangle, which is added to the triangulation, and the ball rolls over a new edge. If you start with a good triangle, such that the ball contains no other points, then the new triangle the ball rolls onto won’t contain any other points either, because any such point would have been hit beforehand. This maintains the Delaunay property! In fact, the Ball-Pivoting Algorithm produces a subset of an alpha shape.
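One concrete piece of this: a ball of radius ρ can rest on three points exactly when the triangle’s circumradius is at most ρ, which is how a seed triangle can be validated. A sketch of that test (my own helpers, not from the demo code):

```javascript
// Circumradius of a 3D triangle via R = |AB||BC||CA| / (4 * area).
// Points are [x, y, z] arrays.
function circumradius(a, b, c) {
  var sub = function(p, q) { return [p[0]-q[0], p[1]-q[1], p[2]-q[2]]; };
  var len = function(v) { return Math.hypot(v[0], v[1], v[2]); };
  var ab = sub(b, a), ac = sub(c, a), bc = sub(c, b);
  var cross = [ab[1]*ac[2] - ab[2]*ac[1],
               ab[2]*ac[0] - ab[0]*ac[2],
               ab[0]*ac[1] - ab[1]*ac[0]];
  var area2 = len(cross); // twice the triangle's area
  return (len(ab) * len(ac) * len(bc)) / (2 * area2);
}

// A pivoting ball of radius rho can sit on the triangle only if it is
// no smaller than the circumscribed circle.
function ballFits(a, b, c, rho) {
  return circumradius(a, b, c) <= rho;
}
```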

There are a few caveats. If the curvature of the surface is too high relative to the point density, a larger ball won’t be able to fit into the necessary crevices and will miss some points. If the ball is too small, it can fall through large gaps between points and leave holes. While these issues can be worked around, the Ball-Pivoting Algorithm works best for surfaces with a consistent density of points.

Feel free to look at the code inside, but it is not optimized and not pretty. This was thrown together in 2-3 days, so it definitely lacks polish.

# Puzzle pattern repair and optimization

Posted: January 23rd, 2013 | Author: | Filed under: puzzles, software | 1 Comment »

This week I revamped the automatic post-processing of puzzle files. Previously, a lot of manual work had to be done to the files before they were ready to be laser cut. Now I’ve reduced much of that labor and also made it so that the puzzle pieces are generated with much more consistent details and nubs.

### The problems

There are a number of problems that have to be found and fixed by hand, and many of them occur around whimsies. Some pieces are generated too small to be usable. Other pieces get merged during the simulation, which happens when two pieces with the same phase grow into each other. This can occur because the simulation uses an optimization of only 5 phases rather than giving each piece its own phase. And some areas are simply too thin.

### Fixing overview

Fixing occurs in two stages. The first occurs in pixel space, before the boundaries have been extracted; this part hasn’t changed. The second works in vector space, once everything is represented as polylines. To get from the first stage to the second, we use a modified marching squares algorithm.

### The difficulty

What makes the fixing tricky is that the conditions that define what needs to be fixed, and how, are highly non-local. I cannot simply say nothing can be under X thickness. The more material a connection holds, the thicker it needs to be. Thin areas between two merged pieces need to be split, not thickened. Even thickness is a bit tricky to calculate. Additionally, there can be conflicts where thickening one area makes another too thin.

### Marching squares

To explain the fixing procedure in detail, I’ll start at the beginning with the creation of the vector representation. The pixel representation is turned into vectors with a generalized marching squares algorithm that allows an arbitrary number of states. After this process, each point only knows which points it is connected to; there is no notion of pieces having any continuity.

### Computing thickness

To compute the thickness at each point, I find the distance to the nearest non-degenerate point. I don’t bother with distance to a line segment because the curves are so dense. By non-degenerate, I mean that points directly adjacent along the curve should not be considered in the thickness computation. So for each point, I compute a local neighborhood that extends out a certain distance using a depth-first search of its neighbors; the points in this neighborhood are considered degenerate.
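That neighborhood collection can be sketched like this, assuming points stored as [x, y] arrays and an adjacency list (the names here are mine, not from the actual code):

```javascript
// Collect all points within `radius` of pts[start], measured as arc length
// along the curve connectivity, using a depth-first search. These points are
// treated as degenerate and excluded from the thickness search.
function localNeighborhood(pts, adj, start, radius) {
  var seen = {};
  seen[start] = true;
  var stack = [[start, 0]];
  while (stack.length) {
    var top = stack.pop();
    var j = top[0], d = top[1];
    var nbrs = adj[j];
    for (var n = 0; n < nbrs.length; ++n) {
      var k = nbrs[n];
      if (seen[k]) continue;
      var dk = d + Math.hypot(pts[k][0] - pts[j][0], pts[k][1] - pts[j][1]);
      if (dk <= radius) {
        seen[k] = true;
        stack.push([k, dk]);
      }
    }
  }
  return seen;
}
```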

### Minimum thickness computation

To compute how thick a connection needs to be, we need to know how much material it supports. This is estimated by the linear distance along the piece from one end of the connection to the other, rather than by computing the actual area. We could compute this distance with a breadth-first search, but that would be an expensive O(n^2) over all pairs.

Instead, we note that the two nearest points must lie within the same piece. If we extract each puzzle piece and label each point with its distance along the boundary, starting at an arbitrary point, we can compute the distance between any two points in constant time.
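For a closed boundary of total length L, the constant-time lookup from those labels reduces to (a minimal sketch with my own names):

```javascript
// Distance along a closed curve between two points labeled with their
// arc-length positions s1 and s2 (0 <= s < total). Because the boundary
// is a loop, we take the shorter of the two ways around.
function alongDistance(s1, s2, total) {
  var d = Math.abs(s1 - s2);
  return Math.min(d, total - d);
}
```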

Extracting the pieces themselves isn’t entirely trivial; it equates to finding all minimum cycles in a graph. However, we have a clue that makes this a bit easier. Each point knows the colors of the pixels it was generated between, and we can tag it with this information. Because of this, if you trace along an arbitrary point’s neighbors that all share the same color, you get a piece. This yields a simple linear-time procedure for extracting all pieces.

### Thickening

Thin regions are thickened by simply moving a point away from its nearest point until it sits at the minimum distance calculated from the material supported. All points in the local neighborhood of a moved point are then marked.
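The per-point move can be sketched like this (a hypothetical helper with 2D points as [x, y], not the production code):

```javascript
// Push point p directly away from its nearest point until they are at
// least minDist apart; returns the new position.
function thicken(p, nearest, minDist) {
  var dx = p[0] - nearest[0], dy = p[1] - nearest[1];
  var d = Math.hypot(dx, dy);
  // already thick enough, or direction undefined: leave the point alone
  if (d === 0 || d >= minDist) return p.slice();
  var s = minDist / d;
  return [nearest[0] + dx * s, nearest[1] + dy * s];
}
```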

It should be noted that special consideration needs to be paid to boundary pieces, which need to treat the edge of the puzzle slightly differently.

### Selective smoothing

All points marked by the thickening process are then smoothed with a Laplacian smoothing procedure. This alleviates any strange geometry that might arise from the movement.
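Laplacian smoothing on a closed polyline is simple: each marked point moves toward the average of its two neighbors. A sketch (my own version, with a blend factor alpha; not the actual implementation):

```javascript
// One Laplacian smoothing pass over a closed polyline of [x, y] points.
// Only points flagged in `marked` move; alpha in (0, 1] controls strength.
function laplacianSmooth(pts, marked, alpha) {
  var n = pts.length;
  return pts.map(function(p, i) {
    if (!marked[i]) return p.slice();
    var a = pts[(i + n - 1) % n], b = pts[(i + 1) % n];
    return [p[0] + alpha * ((a[0] + b[0]) / 2 - p[0]),
            p[1] + alpha * ((a[1] + b[1]) / 2 - p[1])];
  });
}
```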

These processes of marking, thickening, and smoothing are repeated in a relaxation procedure that resolves conflicts where thickening one area overly thins another.

### Small and merged pieces

All of these procedures only deal with thin regions. So far, there are no automatic fixes for small and merged pieces; instead, these are marked on export for manual fixing. Merged pieces are identified heuristically by their size, which can produce false positives, but those are easily distinguished by a manual operator. With the pieces already extracted, marking small and large pieces is trivial.

### Results

Here are the resulting files. The two marked “extract and mark” are identical and show the output of marking the pieces before they are fixed. Click on them for a larger view.

# Radiolaria – our latest web-based design app is now available!

Posted: November 29th, 2012 | Author: | Filed under: design, housewares, jewelry, software | No Comments »

Radiolaria, our latest web-based design app is now available! Radiolaria lets you manipulate a web of connected cells to create a huge variety of biologically-inspired patterns.

Each object you create starts as a basic hexagonal mesh which you can change as much, or as little, as you want with a variety of tools. Choose a sharp, geometric look or a rounded, more organic style. Use attractive and repulsive forces to disrupt the pattern’s initial symmetry, or give it a twist with spiraling forces. Click inside any cell to subdivide it into three smaller cells — those smaller cells can even be further subdivided to add more intricate detail to your design.

Your digital designs can be turned into real-world jewelry, housewares, or decorations made from steel, bamboo, or felt. Play around! And make sure to save your creations so you can revisit and share them, digitally or physically. Get started at n-e-r-v-o-u-s.com/radiolaria/

### Back story
We released the first version of our Radiolaria software in 2007 as a Processing applet on our website. The tool was our first effort to engage others in our design process: a way of sharing the cool tools we created for our own design work with visitors to our website. The applet functioned more as proof of concept than anything else, as few people ever used it to purchase designs. Our in-house version of the software has continued to develop over the past 5 years, and today we bring the latest version to the web. I feel like we’ve “grown up” a lot since we released the original app, and the internet has too. This version of Radiolaria is the most feature-rich, user-friendly, and powerful of any we’ve made so far…and it runs entirely in the browser! Making something like this wouldn’t have been possible for us or the internet 5 years ago.

### What can you make?

lasercut bamboo material samples

You can make 4 types of product with Radiolaria: earrings, necklaces, trivets/coasters, and art objects, in 4 materials: stainless steel, 24kt gold-plated stainless steel, bamboo plywood, and black wool felt. Earrings and necklaces can be made in the stainless steel and gold only; trivets and coasters in bamboo plywood and wool felt only. Art object is our catch-all term for making an abstract thing, and you can have your abstract thing made in any material you like.

stainless steel material samples

The materials we’ve chosen for this app are eco-friendly. Bamboo plywood and wool felt are both beautiful renewable materials made from natural fibers. And our stainless steel is composed of 60% recycled steel and is 100% recyclable.

We’re also working on some larger scale options for what you can make with the app…including furniture! Hopefully, we’ll have that integrated soon.

### How long does it take to make?
The bamboo and wool designs are made in our studio in Somerville, MA and take 1 week to make. The stainless steel and gold plated designs take us 3 weeks to make. They are photochemically etched in Minnesota.

### Radiolaria – what are they?

Radiolaria drawings by Ernst Haeckel
Radiolaria are microscopic single celled organisms that live in the ocean. Each radiolarian builds a unique skeleton of silica that extends from and surrounds its cell membrane. The form of these skeletons varies from species to species but they are generally composed of different scales of pores and spines. The skeleton provides protection while it floats freely through the ocean and also serves as an armature for the extension of tentacle-like pseudopods which collect food. These skeletons, called ‘tests’, have many fascinating properties: they are extremely beautiful 3-dimensional structures, they use minimal material to enclose a large volume, and they are synthesized and “printed” by a unicellular organism.

Our Radiolaria app attempts to mimic the patterns seen in radiolarian structures by creating a deforming mesh of hexagons. The deformations are controlled by the user through physical simulation: each line acts as a spring, pulling on neighboring lines. You can add forces and subdivide cells to sculpt the mesh.
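A bare-bones version of that spring relaxation, assuming points as [x, y] arrays and edges as index pairs (the names `springStep`, `rest`, and `k` are mine, not the production code):

```javascript
// One relaxation step: every edge acts as a spring with rest length `rest`,
// pulling (or pushing) its two endpoints toward that length.
// k in (0, 1] is the spring stiffness.
function springStep(pts, edges, rest, k) {
  for (var e = 0; e < edges.length; ++e) {
    var i = edges[e][0], j = edges[e][1];
    var dx = pts[j][0] - pts[i][0], dy = pts[j][1] - pts[i][1];
    var d = Math.hypot(dx, dy);
    if (d === 0) continue; // coincident endpoints: direction undefined
    // split the correction evenly between the two endpoints
    var f = k * (d - rest) / d * 0.5;
    pts[i][0] += dx * f; pts[i][1] += dy * f;
    pts[j][0] -= dx * f; pts[j][1] -= dy * f;
  }
}
```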

### What are you making?
Here are some designs we’ve been making with the app. We want to see what you will make; tweet your designs to @nervous_system!

### Postscript
Did you think I was kidding when I said the original apps were kind of terrible? I wasn’t. Have a look.

# Loading 3D models in WebGL

Posted: November 5th, 2012 | Author: | Filed under: software, work in progress | 3 Comments »

For a new app we’re working on, we finally came upon the task of loading external 3D geometry. I wanted to store and load meshes with as little bandwidth and processing as possible. I’m sure this has been done before, but here is how I approached it.

Rather than loading a mesh in a common file format (e.g. STL or OBJ), processing it, and putting it into arrays that I could send to a GL buffer, I wanted to load binary data that could go directly to the GPU. So I created a binary representation of a mesh that exactly mirrors the data I would send to an array buffer. It is a list of 32-bit floats representing the vertex data (6 for each vertex: position followed by normal) followed by a list of 16-bit integers representing triangle indices. Obviously, this is not a very general file format, but it is exactly the data I need for the shaders I was using. The only additional data is two integers at the start of the file giving the number of vertices and faces.

This data can be fetched directly with an HTTP request as an ArrayBuffer object. The ArrayBuffer can then be accessed by creating a Float32Array for the vertex data and a Uint16Array for the index data. I don’t even have to allocate new storage: both the vertex and index arrays use the same ArrayBuffer, just with different offsets.
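Here is a self-contained illustration of that layout for a single triangle (synthetic data, not the real .vbo file):

```javascript
// Layout: 2 x Uint32 header, then numVertices * 6 floats (position + normal),
// then numFaces * 3 Uint16 indices -- all sharing one ArrayBuffer.
var numVertices = 3, numFaces = 1;
var buffer = new ArrayBuffer(8 + numVertices * 24 + numFaces * 6);
var view = new DataView(buffer);
view.setUint32(0, numVertices);
view.setUint32(4, numFaces);
// Two typed-array views into the same bytes, at different offsets:
var vertexData = new Float32Array(buffer, 8, numVertices * 6);
var indexData = new Uint16Array(buffer, 8 + numVertices * 24, numFaces * 3);
vertexData[0] = 1.5;      // x of vertex 0
indexData.set([0, 1, 2]); // the one triangle
```

Note that a Uint16Array’s byte offset must be even; here 8 + 3 * 24 = 80, so the views line up without any copying.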

This is surprisingly simple, and the hard work really comes in encoding the mesh data to begin with. That was in fact extra tricky because I had to deal with endian issues. Endianness refers to the order in which the bytes of a number are stored: little-endian means the least significant byte comes first, and big-endian means the most significant byte comes first. When you create a TypedArray from an ArrayBuffer, it just sees a list of bytes and doesn’t know what byte order they are in. JavaScript assumes it is the same as the underlying system architecture, which can vary. It turns out the majority of common systems (x86, x86-64, iOS) are little-endian, so I generally feel I can ignore big-endian systems and just make my files little-endian. However, Java, with which I was generating the meshes, writes big-endian numbers by default. This caused me much grief for a few hours, as everything was working fine except that my numbers were mysteriously gibberish.
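One way to sidestep the byte-order question entirely is DataView, which takes an explicit little-endian flag on every read and write (a sketch; it is slower than bulk typed-array reads, which is why the loader below doesn’t use it for the vertex data):

```javascript
// DataView reads and writes with an explicit byte order, independent of the host.
var buf = new ArrayBuffer(4);
var dv = new DataView(buf);
dv.setFloat32(0, 1.5, true);               // write little-endian
var littleEndian = dv.getFloat32(0, true); // 1.5 on every platform
var bigEndian = dv.getFloat32(0, false);   // gibberish: the same bytes reversed
// A Float32Array over the same buffer uses the host's native order,
// which is exactly why a big-endian encoder produced mysterious numbers.
```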

Below you can find the code I used for encoding and decoding the meshes. The AJAX portion is largely copied from an online example.


precision highp float;

varying vec4 vPosition;
varying vec3 vNormal;

void main(void) {
vec3 normal = vNormal;

vec3 lightDir = normalize(vec3(1,1,1));
float brightness = (dot(lightDir,normal)+1.0)/2.0; //-1,1.5
gl_FragColor = vec4(brightness,brightness, brightness,1);
}

attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;

varying vec4 vPosition;
varying vec3 vNormal;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);//uMVMatrix *
vNormal = uNMatrix * normalize(aVertexNormal);
}

var loadvboExample = (function () {
var gl;
var vbo = {};
var shaderProgram;
var rot = 0;
var mvMatrix = mat4.create();
var tempMatrix2 = mat4.create();
var tempMatrix = mat4.create();
var pMatrix = mat4.create();
var nMatrix = mat3.create();

window.requestAnimFrame = (function(){
return  window.requestAnimationFrame       ||
window.webkitRequestAnimationFrame ||
window.mozRequestAnimationFrame    ||
window.oRequestAnimationFrame      ||
window.msRequestAnimationFrame     ||
function( callback ){
window.setTimeout(callback, 1000 / 60);
};
})();

var init = function() {
initGL(document.getElementById('loadvbo'));
if(gl) {
gl.enable(gl.DEPTH_TEST);
var fragmentShader, vertexShader;
fragmentShader = getShader(gl, "shader-fs");
vertexShader = getShader(gl, "shader-vs");

shaderProgram= gl.createProgram();
gl.attachShader(shaderProgram, vertexShader);
gl.attachShader(shaderProgram, fragmentShader);
gl.linkProgram(shaderProgram);

if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
alert("Could not initialise shaders");
}

shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
shaderProgram.vertexNormalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal");

shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
shaderProgram.nMatrixUniform = gl.getUniformLocation(shaderProgram, "uNMatrix");

loadVBO("wp-content/uploads/interstice_s.vbo",vbo,gl);
anim();
}
};

function anim() {
requestAnimFrame(anim); //the callback is the only argument; a leftover frame-rate value would be ignored
drawGL();
}

mat4.toInverseMat3=function(a,b){var c=a[0],d=a[1],e=a[2],g=a[4],f=a[5],h=a[6],i=a[8],j=a[9],k=a[10],l=k*f-h*j,o=-k*g+h*i,m=j*g-f*i,n=c*l+d*o+e*m;if(!n)return null;n=1/n;b||(b=mat3.create());b[0]=l*n;b[1]=(-k*d+e*j)*n;b[2]=(h*d-e*f)*n;b[3]=o*n;b[4]=(k*c-e*i)*n;b[5]=(-h*c+e*g)*n;b[6]=m*n;b[7]=(-j*c+d*i)*n;b[8]=(f*c-d*g)*n;return b};

function drawGL() {
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(shaderProgram);
rot += Math.PI/300.0;
mat4.perspective(pMatrix,45, gl.viewportWidth / gl.viewportHeight, 0.1, 300.0);
mat4.identity(mvMatrix);
mat4.rotateY(mvMatrix, mvMatrix,rot);

mat4.multiply(tempMatrix,mat4.lookAt(tempMatrix2,[0,0,150],[0,0,0],[0,-1,0]),mvMatrix);
var temp = mvMatrix;
mvMatrix = tempMatrix;
tempMatrix = temp;
mat4.toInverseMat3(mvMatrix, nMatrix);
mat3.transpose(nMatrix,nMatrix);

gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
gl.enableVertexAttribArray(shaderProgram.vertexNormalAttribute);
gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, 3, gl.FLOAT, false,24,0);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, 3, gl.FLOAT, false,24,12);
gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, false, nMatrix);

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vbo.indexBuffer);
gl.drawElements(gl.TRIANGLES,vbo.numFaces*3, gl.UNSIGNED_SHORT,0);

}

function initGL(canvas) {
try {
gl = canvas.getContext("experimental-webgl"); //preserve drawing buffer
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
} catch (e) {
}
if (!gl) {
canvas.display = "none";
}
}

function getShader(gl, id) {
var shaderScript = document.getElementById(id);
if (!shaderScript) {
return null;
}

var str = "";
var k = shaderScript.firstChild;
while (k) {
if (k.nodeType == 3) {
str += k.textContent;
}
k = k.nextSibling;
}

var shader;
if (shaderScript.type == "x-shader/x-fragment") {
shader = gl.createShader(gl.FRAGMENT_SHADER);
} else if (shaderScript.type == "x-shader/x-vertex") {
shader = gl.createShader(gl.VERTEX_SHADER);
} else {
return null;
}

gl.shaderSource(shader, str);
gl.compileShader(shader);

if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert(gl.getShaderInfoLog(shader));
return null;
}

return shader;
}
return init;
})();

loadvboExample();


Processing code for encoding an obj in binary

String folder = "SOMEFOLDER";
String file = "filename.obj";

ArrayList<PVector> srf = new ArrayList<PVector>();
ArrayList<PVector> norm = new ArrayList<PVector>();
ArrayList<int[]> faces = new ArrayList<int[]>();

void setup() {
noLoop();
}

void draw() {
loadSrf(folder+file);
writeSrf(folder+file.substring(0,file.length()-3)+"vbo");
exit();
}

void loadSrf(String name) {
srf.clear();
norm.clear();
faces.clear();
println("LOAD " + name);
String[] lines = loadStrings(name);
for(int i=0;i<lines.length;++i) {
if(lines[i].length() > 2) {
//vertices
if (lines[i].substring(0,2).equals("v ")) {
float info[] = float(split(lines[i], " "));
srf.add(new PVector(info[1],info[2],info[3]));
}
//normals
else if(lines[i].substring(0,2).equals("vn")) {
float info[] = float(split(lines[i], " "));
norm.add(new PVector(info[1],info[2],info[3]));
}
else if(lines[i].substring(0,2).equals("f ")) {
String info[] = split(lines[i], " ");
//sometimes obj's have different indices for different properties separated by '//'
int info1[] = int(split(info[1],"//"));
int info2[] = int(split(info[2],"//"));
int info3[] = int(split(info[3],"//"));
//obj has 1 based indexing and we need 0 based indexing
faces.add(new int[] {info1[0]-1,info2[0]-1,info3[0]-1});
}
}
}
}

//write the data in binary
void writeSrf(String name) {
try {
DataOutputStream dos = new DataOutputStream(new FileOutputStream(name));
dos.writeInt(srf.size());
dos.writeInt(faces.size());
PVector v;
for(int i=0;i<srf.size();++i) {
v = srf.get(i);
writeFloat(dos,v.x);
writeFloat(dos,v.y);
writeFloat(dos,v.z);
v = norm.get(i);
writeFloat(dos,v.x);
writeFloat(dos,v.y);
writeFloat(dos,v.z);
}
int[] f;
for(int i=0;i<faces.size();++i) {
f = faces.get(i);
writeShort(dos,f[0]);
writeShort(dos,f[1]);
writeShort(dos,f[2]);
}
dos.close();
} catch(Exception e) {
}
}

//write a float value little endian
void writeFloat(DataOutputStream dos, float f) throws IOException {
int i = Float.floatToIntBits(f);
dos.writeByte(i & 0xFF);
dos.writeByte((i >> 8) & 0xFF);
dos.writeByte((i >> 16) & 0xFF);
dos.writeByte((i >> 24) & 0xFF);
}

//write a 16-bit integer little endian, integers larger than 65535 will get truncated
void writeShort(DataOutputStream dos, int i) throws IOException {
dos.writeByte(i & 0xFF);
dos.writeByte((i >> 8) & 0xFF);
}


javascript file for loading meshes from a file

//load mesh object
/*
bytes 0-3 = number of vertices
bytes 4-7 = number of faces
vertex = 6x4 bytes position followed by normal
faces = 3x2 bytes as unsigned 16 bit indices
*/

function loadVBO(url, vbo) {
var xhr = new XMLHttpRequest();

xhr.onreadystatechange = function () {
if (xhr.readyState == xhr.DONE) {
if (xhr.status == 200 && xhr.response) {
loadBuffers(xhr.response,vbo);
} else {
console.log("Failed to download:" + xhr.status + " " + xhr.statusText);
}
}
}
// Open the request for the provided url
xhr.open("GET", url, true);
// Set the responseType to 'arraybuffer' for ArrayBuffer response
xhr.responseType = "arraybuffer";
xhr.send();
}

//read ArrayBuffer into gl buffers
function loadBuffers(buffer, vbo) {
var reader = new DataView(buffer);
//get number of vertices and faces
var numVertices = reader.getUint32(0);
var numFaces = reader.getUint32(4);
vbo.numVertices = numVertices;
vbo.numFaces = numFaces;
//put that data in some arrays
vbo.vertexData = new Float32Array(buffer,8,numVertices*6);
vbo.indexData = new Uint16Array(buffer, numVertices*24+8, numFaces*3);
//push that data to the GPU
vbo.vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vbo.vertexData, gl.STATIC_DRAW);

vbo.indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vbo.indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, vbo.indexData, gl.STATIC_DRAW);
}


And to draw the data

function draw() {
//... some gl drawing stuff up here

gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, 3, gl.FLOAT, false,24,0);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, 3, gl.FLOAT, false,24,12);

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vbo.indexBuffer);
gl.drawElements(gl.TRIANGLES, vbo.numFaces*3, gl.UNSIGNED_SHORT, 0);
}


# Cell Cycle – technical details

Posted: March 12th, 2012 | Author: | Filed under: software, thoughts | Tags: , , , | No Comments »

For a while now, we’ve been wanting to switch our online apps from Java applets to HTML5. Applets are simply an outdated technology with a much clunkier, inelegant user experience. Finally, in the last two weeks, we’ve had the time to dig in and start porting our apps. Not only is the technology superior for developing HTML5 apps, but we’ve also learned a lot since we first released customization apps in 2007.  We decided to tackle the most complex one first, the Cell Cycle app. In this post, I’ll go over some of the tech we used and some of the hurdles we encountered.

### The Tech

The first thing we had to decide was how we wanted to go about translating our app from Processing, our development environment of choice, to JavaScript. I considered rewriting the entire thing from scratch as a kind of JS learning exercise, but decided it was not worthwhile. The next obvious choice was to use the amazing ProcessingJS, which allows you to run pure Processing code directly in the browser. However, the 3D capabilities of PJS are somewhat limited, so I spent some time looking at other frameworks people have developed around WebGL, such as Mr. Doob’s Three.js. Despite there being some great frameworks out there, I decided that the options were too heavy-handed and required too much specialization. The great thing about Processing is how unobtrusive it is: it does exactly what you think it should without extraneous syntax getting in the way.

### ProcessingJS

Ultimately, we went with a hybrid of the first two options. We used PJS so that we could reuse the core logic of the original Processing app, but we used pure JS and WebGL for drawing. One of the amazing things about PJS is that you can mix Processing code and JavaScript directly. This allowed us to write JS drawing functions and simply replace the drawing parts of our Processing code. There is an additional benefit/detriment to this ability of PJS: you can not only call JS functions from Processing, but also combine the object models of Java and JavaScript. You get the clean and compact class definitions of Java, but also the mutability of JavaScript, which avoids the “over-objectification” that is common in Java. I find this a really refreshing way of coding; however, to the casual user the mixing of Java and JavaScript may end up looking like gibberish.

### No Libraries

One of the unfortunate aspects of PJS is that you cannot directly leverage the great community that has grown up around Processing and the many powerful libraries it has developed. There is no easy way to use Processing libraries in PJS, and in many cases it is non-trivial to port them. So the first thing we had to do was remove all library dependencies from our code, which included controlP5 and PeasyCam, and replace them with our own code. This wasn’t so bad, since the app needed a UI overhaul anyway.

*the old Java version*

### Features

The original Java version of the Cell Cycle app had a number of problems: the interface was cluttered; it was buggy; you couldn’t save and share models; you couldn’t subdivide cells on the inside of the bracelet; etc. We took the rewrite as an opportunity to fix these issues and add a bunch of features. Originally, there were three different “view modes”: 3D, smooth, and 2D. We combined all of these into one. The model is always smooth (more on this next), and with the extra screen real estate of the browser, we can show the 2D and 3D views simultaneously.

The new version autosizes! Previously there was a “radius” setting, which corresponds to a parameter in the mesh generation but has no real meaning to the user: it matches neither the inner nor the outer radius of the final piece. In the new version, the user specifies the interior diameter of the final piece, and the code iteratively resizes the model until the interior dimension is approximately correct. This way a user can specify a size and, no matter what they do to the model afterwards, it remains the same size.
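The iterative resizing can be sketched like this (a simplified illustration, not the actual Cell Cycle code; `measureInnerDiameter` stands in for whatever routine computes the interior dimension of the generated mesh):

```javascript
// Iteratively rescale the model until its measured interior diameter is
// within tolerance of the user's target. The measurement function is
// passed in because the real one depends on the generated mesh.
function autosize(model, targetDiameter, measureInnerDiameter) {
  var tolerance = 0.01;                 // stop within 1% of the target
  for (var i = 0; i < 20; i++) {        // cap iterations just in case
    var current = measureInnerDiameter(model);
    if (Math.abs(current - targetDiameter) / targetDiameter < tolerance) {
      break;
    }
    // nudge the overall scale toward the target and re-measure
    model.scale *= targetDiameter / current;
  }
  return model;
}
```

Because the pattern parameters change the interior dimension in non-obvious ways, measuring and rescaling in a loop is simpler than trying to solve for the right scale directly.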

Permalinking! It is practically necessary to have permalinks to user-generated content on the web these days; if you can’t tweet something or post it to Facebook, it might as well not exist. Not only can you link to a model now, but you (or anyone else) can continue editing it. Instead of saving the mesh that gets 3D printed to our server, we save an abstract representation of the current model state, and the actual 3D mesh is reconstructed on load. Not only does this allow the geometry to be smartly reloaded for editing, it also makes the saved models much lighter, which saves space, bandwidth, and load time.
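The save/load round trip might look something like this (field names and the `generateMesh` callback are hypothetical; the real state record is whatever parameters the app needs to regenerate a model):

```javascript
// Save only the compact, abstract description of the model -- never the
// heavy printable mesh, which can be regenerated from these parameters.
function saveState(model) {
  return JSON.stringify({
    cellCenters: model.cellCenters,     // points the cellular pattern grows from
    innerDiameter: model.innerDiameter, // user-specified size
    thickness: model.thickness
  });
}

// On load, parse the record and rebuild the printable mesh from it.
function loadState(json, generateMesh) {
  var state = JSON.parse(json);
  state.mesh = generateMesh(state);
  return state;
}
```

Because the permalink stores parameters rather than triangles, the same record that reloads a model for editing is also tiny enough to sling around as a URL-addressable blob.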

### Optimization

The bulk of our development work went into optimization; things have to run fast in the browser. Most of the processing power goes towards generating the mesh and passing that data to the GPU with WebGL. Displaying large, static models in OpenGL is easy: you load the model into a VBO once, and that’s it. You can display millions of triangles at high frame rates, and because all the work happens in a preprocess, you don’t have to worry about optimization. But when you have a large, dynamic mesh that is constantly changing, optimization becomes important.

One thing we decided was that the model couldn’t be completely smooth all the time. The meshes are smoothed through Catmull-Clark subdivision, and the models we send to the printer have two subdivision steps performed on them. This is simply too much computation to do in real time. Instead, while the model is moving we perform only one subdivision. When the model settles and stops, which happens fairly quickly, we perform the second subdivision and store the result in a VBO. Until the model moves again, we do not update the VBO.
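The level-of-detail logic amounts to a small state machine; a minimal sketch, with a hypothetical `subdivide` routine standing in for the Catmull-Clark implementation:

```javascript
// One subdivision while interacting, two once the model has settled.
function chooseSubdivisionLevel(isMoving) {
  return isMoving ? 1 : 2;
}

// Only re-subdivide when the required level actually changes; the cached
// result plays the role of the VBO that stays untouched until the model
// moves again.
function updateDisplayMesh(app) {
  var level = chooseSubdivisionLevel(app.isMoving);
  if (level !== app.cachedLevel) {
    app.displayMesh = app.subdivide(app.baseMesh, level);
    app.cachedLevel = level;
  }
  return app.displayMesh;
}
```

The important property is the last branch: once the model is at rest at level 2, repeated draw calls do no subdivision work at all.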

But this alone was insufficient with the naive drawing methods we had been using. The meshes are represented with a half-edge data structure, which stores the connectivity information of the mesh faces in a compact way. The simple approach to drawing is to loop through all the vertices and faces of the mesh and dump that data into arrays that get passed to the GPU. The problem is that regular JS objects and arrays are not particularly performance-oriented; simply looping over a large array of objects and performing basic operations is a significant CPU drain. The key to getting around this is doing as much as possible in the fixed-type arrays that were introduced into Javascript specifically for WebGL: Uint16Array, Float32Array, etc. In my experience, these arrays are far more efficient than generic ones. Instead of subdividing and creating a new half-edge mesh, the subdivision data goes directly into fixed-type vertex and index arrays that can be passed to WebGL. Other computations, such as surface normals and volume, are also performed directly out of these buffer arrays. This was ultimately the key to getting interactive framerates.
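As a small illustration of working straight out of the flat buffers (a generic sketch, not the app’s actual code), here is a face normal computed directly from a `Float32Array` position buffer and a `Uint16Array` index buffer, with no intermediate vertex objects allocated:

```javascript
// Compute the (unnormalized) normal of triangle `face` directly from the
// flat, fixed-type arrays that also get handed to WebGL.
function faceNormal(positions, indices, face, out) {
  var i0 = indices[face * 3] * 3,
      i1 = indices[face * 3 + 1] * 3,
      i2 = indices[face * 3 + 2] * 3;
  // edge vectors e1 = v1 - v0 and e2 = v2 - v0, read component-wise
  var e1x = positions[i1] - positions[i0],
      e1y = positions[i1 + 1] - positions[i0 + 1],
      e1z = positions[i1 + 2] - positions[i0 + 2];
  var e2x = positions[i2] - positions[i0],
      e2y = positions[i2 + 1] - positions[i0 + 1],
      e2z = positions[i2 + 2] - positions[i0 + 2];
  // cross product e1 x e2 written into a preallocated output array
  out[0] = e1y * e2z - e1z * e2y;
  out[1] = e1z * e2x - e1x * e2z;
  out[2] = e1x * e2y - e1y * e2x;
  return out;
}

// a single triangle in the xy-plane, stored flat
var positions = new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]);
var indices = new Uint16Array([0, 1, 2]);
var normal = faceNormal(positions, indices, 0, new Float32Array(3));
```

The indexing arithmetic is uglier than iterating over vertex objects, but nothing is allocated per face, which is exactly why this style keeps large dynamic meshes interactive.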