# Kinematics

Posted: November 26th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: 3dprinting, clothing, design, jewelry, simulation, software | Tags: | 8 Comments »

Kinematics is a system for 4D printing that creates complex, foldable forms composed of articulated modules. The system provides a way to turn any three-dimensional shape into a flexible structure using 3D printing. Kinematics combines computational geometry techniques with rigid body physics and customization. Practically, Kinematics allows us to take large objects and compress them down for 3D printing through simulation. It also enables the production of intricately patterned wearables that conform flexibly to the body.

Today we are releasing a jewelry collection and an accompanying customization app built upon our Kinematics concept. We’re also releasing a free-to-use app for desktop 3D printers.

Kinematics is a branch of mechanics that describes the motion of objects, often described as the “geometry of motion.” We use the term Kinematics to allude to the core of the project, the use of simulation to model the movement of complex assemblages of jointed parts.

Kinematics produces designs composed of tens to thousands of unique components that interlock to construct dynamic, mechanical structures. Each component is rigid, but in aggregate they behave as a continuous fabric. Though made of many distinct pieces, these designs require no assembly. Instead the hinge mechanisms are 3D printed in-place and work straight out of the machine.

This project evolved out of a collaboration with Motorola’s Advanced Technology and Projects group which challenged us to create in-person customization experiences for low cost 3D printers. The genesis of the project is discussed at length in The Making of Kinematics.

### a tale of two apps

We are releasing two web-based applications: Kinematics and a simplified version called Kinematics @ Home which is completely free to use.

The Kinematics app allows for the creation of necklaces, bracelets and earrings. Users can sculpt the shape of their jewelry and control the density of the pattern. Designs created with Kinematics can be ordered in polished 3D-printed nylon in a variety of colors.

The Kinematics @ Home app is targeted at people who already have access to a 3D printer. It’s our first app that allows users to download an STL file for home printing. Enter your wrist size, style your bracelet and click print to receive a free STL file suitable for printing on a Makerbot or similar desktop printer.

kinematics@home bracelets printed on a makerbot

### Kinematics case study: making a dress

Concurrently with the development of the online applications, we’ve been working on more advanced software with broader practical applications. Kinematics allows us to design a shape and then fold it into a more compressed form for 3D printing. Items we’ve created so far are flexible, but rigid objects could be created by introducing a hinge joint that locks at a preferred angle. Here we present an example of how Kinematics can be used to create a flexible dress that can be printed in one piece.

The process begins with a 3D scan of the client. This produces an accurate 3D model of the body upon which we draw the form of the desired dress. For this example, the top of the dress conforms exactly to the torso, but the skirt has a larger silhouette, allowing for the dress to drape and flow as the wearer moves.

The surface of the sketched dress is then tessellated with a pattern of triangles. The size of the triangles can be customized by the designer to produce different aesthetic effects as well as different qualities of movement in the dress (the smaller the triangle, the more flexible the structure / the more fabric like it behaves). Next we generate the kinematics structure from the tessellation. Each triangle becomes a panel connected to its neighbors by hinges. The designer can apply different module styles to these panels to create further aesthetic effects.

Finally, we compress the design via simulation so it fits into a 3D printer. This means that an entire gown, much larger than the printer itself, can be produced in a single assembled piece. The simulation uses rigid body physics to accurately model the folding behavior of the design’s nearly 3,000 unique, interconnected parts and find a configuration that fits inside the volume of the printer.

### the collection

Each jewelry design is a complex assemblage of hinged, triangular parts that behave as a continuous fabric in aggregate. Kinematics jewelry conforms closely to the contours of the human body. This is 21st-century jewelry, designed and manufactured using techniques that did not exist just a few years ago.

Kinematics pieces come in four styles: smooth, angular, polygonal and tetrahedral. Each design takes its name from the module style and number of pieces in the design. For example, Tetra Kinematics 174-n is a tetrahedral style necklace composed of 174 unique modules.

kinematics necklaces with smooth, tetra and polygonal modules

We’ve added eighteen Kinematics designs to our shop, and a limited initial run of each is currently available for purchase. Kinematics jewelry is made of polished 3D printed nylon in a variety of colors. Necklace, earring and bracelet designs are available; the bracelets and necklaces are fastened simply and securely with hidden magnetic clasps. Prices for the collection range from $25 to $350, and most pieces cost less than $100.

# The Making of Kinematics

Posted: November 26th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: 3dprinting, simulation, software | Tags: | 1 Comment »

Most of our projects start with a natural inspiration, but Kinematics emerged from a very different perspective. This project started with a technical problem: how can we create large objects quickly on a desktop 3D printer?

### Motorola

Last May, Motorola’s Advanced Technology and Projects division invited us to their headquarters in Sunnyvale to discuss a potential collaboration. They wanted us to develop “aesthetic generators” that related to their new phone, the Moto X. The catch was that these apps needed to generate customized objects that could be 3D printed in under an hour on equipment that was being driven around the country in the MAKEwithMOTO van. Despite what you may have been told, 3D printing is not a particularly fast process. In fact, the more three dimensional an object is, the slower it prints. One hour is a very challenging print time to meet for an object of any significant size.

The question we asked ourselves was: how could we create something that was nearly flat, but still took advantage of the new possibilities of 3D printing? Our solution was to print a flat design that could be folded into another shape after printing. What we ended up creating was an NFC-enabled bracelet made of a foldable geometric pattern.

### Physical Prototyping

From the beginning, this project was focused on making the most of the limitations of low-cost 3D printers. Unlike most of our work, which occurs almost entirely digitally before we see a real object, this required extensive physical prototyping. We used our MakerBot Replicator (v1, dual extruder) throughout the prototyping period to develop and refine our concept.

Initially, we weren’t sure it was possible to design interlocking components that a desktop 3D printer could accurately reproduce while being small enough to be comfortably wearable. But looking around the 3D printing community site Thingiverse, we found a diverse array of flexible structures, all designed to be 3D printed on low-cost machines. Starting from there, we knew that it could be done.

We began by modelling a hinged joint mechanism based on a double-ended cone pin and socket. Cone-based geometry works well because, with the correct angle, it is self supporting, an essential quality for low-cost home printing. We spent a lot of time tweaking tolerances to get the hinge just right: tight enough to not fall apart but loose enough to not fuse together during printing. We kept refining the joint until it was as small as it could be and still print reliably.

With the joint designed, we started out printing simple chains of components. These basic configurations were already fun to play with, but we suspected they could be much more compelling. Taking origami tessellations as inspiration, we started making triangulated, foldable surfaces. Beginning with a regular tiling of equilateral triangles, we modeled the first assemblages entirely by hand. By using hinges to connect together small triangular panels, we were able to create a faceted, fabric-like material.

However, even modelling a simple, repetitive pattern is time consuming and difficult. Before we could continue, we needed to automate the generation of the hinge mechanisms on arbitrarily complex patterns. With that done, we could start to design tools that would let anyone morph and shape a pattern to create their own fabric-like creation. Early experiments also tried different ways we could style the modules or incorporate the multi-material extrusion available on newer desktop printers.

The results were compelling. Not only were the pieces themselves addictive to play with, the project also served as a case study in customization. Using the most inexpensive home printers, we could make complex, fully customized products in under an hour. However, as we worked on the project, we realized the Kinematics system opened up many more possibilities.

### Tessellation to Kinematics

an early version of the tessellation app for Motorola

The original application for Motorola had many restrictions. In addition to the driving constraint of needing to quickly produce objects using a low-cost 3D printer, it also had to be used in-person in a van driving around the country. The geometry had to be limited to small objects that consistently printed well. The experience also had to be limited and highly directed. When someone used the app, their first step was walking up to a strange van filled with 3D printers, probably with no idea what was going on. So the app had to convey a lot of information: what someone was making, why they were making it, how to customize it, and how to get it printed. Because so much had to be presented in a short period of time, it was necessary to make the procedure very linear.

Freed from these constraints, we were able to develop a version of the app that was much more open-ended, both in terms of the geometry and the experience. We designed a new hinge for our 3D printing method of choice, SLS. This allowed us to create larger pieces and modules with more complex shapes. We also completely changed how the pieces were designed. Instead of morphing a fixed tessellation, users can manipulate parametric curves to create various shapes that are tessellated on the fly. They can also dramatically adjust the density of triangles, making the results more varied and freeform.

The most exciting thing about switching from extrusion-based to powder-based printing was that we could now design objects that were not self-supporting. Though Kinematics was originally developed to print three-dimensional objects flat, allowing objects to be anywhere in space opened up new possibilities. The fabric-like quality of the designs we were producing got us thinking about making larger three-dimensional wearables like dresses. We realized that Kinematics had broader implications for printing arbitrary objects. We can take any shape and transform it into a flexible structure. These structures can then be digitally folded into more compressed shapes, enabling the construction of objects much larger than the 3D printer’s build volume.

a kinematisized Stanford bunny

### Software Development

Kinematics is our first application written purely in JavaScript. Our previous web apps evolved from work we had created in Processing and relied on processing.js for large portions of their transition to the web. This made for a messy development process. Kinematics was developed from the ground up as a browser-based WebGL application, and the code does not rely on any frameworks.

The project makes use of two libraries. One is glMatrix, which we use in all our projects for vector and matrix operations. It is a simple and fast library that does one thing and does it well. This is exactly the type of library I love to use: flexible enough to fit in any situation and not bloated with unnecessary functionality.

The other library we use is poly2tri, which produces constrained Delaunay triangulations. Kinematics requires well-shaped triangles inside of a designed boundary, so this is exactly the tool for the job. The JavaScript version is a port of the original C++ code. Unfortunately, I don’t think it is the most performant approach for modern JavaScript, but I’m not going to write my own constrained Delaunay library, and poly2tri does the job.

We’ve also started internally developing modular code components which we can apply to other projects. glShader takes care of loading and processing of GLSL shader programs. It asynchronously loads external shader files and extracts all the attributes and uniforms from the shaders, providing helper functions to simplify working with WebGL.

The project uses a JavaScript NURBS library that we’re developing. This allowed us to design curves for the boundaries of the pieces in Rhino and then import them into Kinematics, where users can interactively change them. We’re also expanding and refining tools we previously created for loading and working with meshes in JavaScript.

The simulation portion of the Kinematics project happens outside the browser. We use openFrameworks and BulletPhysics to perform the compression of Kinematics models. BulletPhysics is an open-source physics engine used primarily for rigid body mechanics in games. It is a powerful and fast tool for physics simulation, supporting constraints, collisions, forces, and even soft bodies. There is a browser-based port of Bullet that we are in the process of incorporating as well.

a computationally compressed dress

### acknowledgements

Special thanks to Motorola ATAP for getting us started down this road (especially Daniel, Andrew, and Paul). Thank you Thingiverse for inspiring us and Makerbot for giving us the printer we used for prototyping. And thank you to Artec 3D for providing the 3D scan we used to develop the concept dress.

# Calculating the volume of a mesh

Posted: November 4th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: computation, geometry, software | No Comments »

In order to generate the price of a custom design on the fly, we need to calculate the volume of the piece for 3D printing. By constantly updating the volume, the customer gets instant feedback on how their changes are affecting price. Calculating the volume of a mesh is a relatively simple and well-known problem, and I’ll go over the straightforward case as well as an optimization we’ve incorporated into our latest project.

### The Basics

The idea behind calculating the volume of a mesh is to calculate a volume for each triangle of the mesh and add them up. Now, a triangle itself does not have volume; it is two-dimensional. Instead, we calculate the volume of the tetrahedron that goes from the origin (0,0,0) to the triangle.

There is a neat equation for calculating the volume of a tetrahedron. Given a triangle’s points v1, v2, v3, the tetrahedron’s volume is

$V = \frac16 (v_1 \times v_2) \cdot v_3$

Another way to express this: if we have a 3×3 matrix where each row is one of our vertices, the volume is a sixth of the determinant. The division by six comes from the fact that the determinant is actually the volume of the parallelepiped formed by the three vectors, and you can stuff 6 tetrahedrons into the parallelepiped.

But wait, if I add up all these tetrahedrons don’t I get a mess of overlapping volumes that go to the origin? Yes, but the key thing is that these volumes are signed, so they can be negative depending on the vertex winding. If my points go one way (v1->v2->v3) I get a positive volume and if they go the other way (v1->v3->v2) I get a negative volume. Faces pointing out add to the total volume and faces pointing in subtract. What is left is only the volume inside my object. To get the total volume of a mesh, we go through each triangle, compute its signed volume, and add them up.

Below is some JavaScript code for computing the volume of a mesh stored in something like a vertex buffer object.
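A minimal sketch of that calculation, assuming vertex positions are stored in a flat array and triangles as index triples (the names here are illustrative, not our production code):

```javascript
// Signed volume of a triangle mesh.
// positions: flat array [x0,y0,z0, x1,y1,z1, ...]
// indices: flat array of triangle vertex indices [i0,i1,i2, ...]
function meshVolume(positions, indices) {
    var volume = 0;
    for (var t = 0; t < indices.length; t += 3) {
        var a = 3 * indices[t], b = 3 * indices[t + 1], c = 3 * indices[t + 2];
        var x1 = positions[a], y1 = positions[a + 1], z1 = positions[a + 2];
        var x2 = positions[b], y2 = positions[b + 1], z2 = positions[b + 2];
        var x3 = positions[c], y3 = positions[c + 1], z3 = positions[c + 2];
        // one sixth of the scalar triple product (v1 x v2) . v3;
        // the sign depends on the triangle's winding
        volume += (x1*y2*z3 - x2*y1*z3 + x3*y1*z2
                 - x1*y3*z2 + x2*y3*z1 - x3*y2*z1) / 6;
    }
    return volume;
}
```

For a closed mesh with consistent outward winding, the overlapping signed tetrahedrons cancel and only the enclosed volume remains.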

### Volume of repeated elements

Now, onto the good stuff. What happens if you have an object that is made, at least in part, of an aggregate of many identical but complex parts? I don’t mean booleaning together primitives; imagine something like a buckyball where each face is articulated with some kind of intricate shape. The brute force approach would be to move and rotate the shape to the proper position and then go through each triangle and calculate the volume. This means you have to apply a transform to each of the points of your shape, and then go through each triangle. If your shape has 1000 triangles and you have 100 shapes, that ends up being a lot of calculation. We can drastically increase the efficiency of this by computing a “general volume” for the shape once, and applying our transforms only to that simplified representation. But what does this general volume look like?

### Rotation

The key idea behind this general volume is the fact that volume is rotation invariant. This is one of the basic results of differential geometry. It is intuitively obvious; no matter how I orient an object in space its volume does not change. What is less intuitive is that the same thing holds true for the signed volume of open shapes. Mathematically this can be seen easily by noting that the volume is the determinant of a matrix, and the rotation matrix has a determinant of 1. The determinant of one matrix multiplied by another is the multiplication of their individual determinants. So, I can rotate my primitive element however I want, and the volume stays the same. If I was only rotating my shape, then I could calculate the volume of my shape once and multiply it by the number of shapes I have.
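Concretely, if $M$ is the matrix whose rows are $v_1, v_2, v_3$ and $R$ is a rotation matrix, then rotating every vertex replaces $M$ with $MR^T$, and

$V' = \frac16 \det(MR^T) = \frac16 \det(M)\det(R^T) = \frac16 \det(M) = V$

since a rotation matrix (and its transpose) has determinant 1.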

### Translation

That leaves only translation, moving my shape around in space. We can look at this intuitively in a simplified 2D example with the area of a single line segment. The area of a triangle is one half base times height. If I translate my line segment in the x direction by some amount, I am adding that amount to my height. So the area of my translated line is:

$A_t = \frac12 b (h+x) = A + \frac12 bx$

I take my original area, and I add on some amount multiplied by my x translation. Though this is a very dumbed down example, we can do essentially the same thing for each axis (x,y,z) in our volume.

### Nitty gritty

To see how this idea applies to our volume calculation, we can look at the expanded equation for the volume of each triangle, where $v_1 = (x_1,y_1,z_1)$:

$V = x_1y_2z_3-x_2y_1z_3+x_3y_1z_2-x_1y_3z_2+x_2y_3z_1-x_3y_2z_1$

That may look like a lot of equation, but if we look closely, we notice that it is a sum of terms, each of which is an x component times a y component times a z component. Isolating an arbitrary term and translating along one axis, we get something similar to our simple 2D example:

$V_t = x_1y_2(z_3+z_t) = V + x_1y_2z_t$

You might say: that is only translating in one direction, so what happens with an arbitrary translation? It turns out that, because the terms are organized in positive and negative pairs, every extra term besides the ones in the one-directional example cancels out. So our general volume becomes a vector which sums up the terms for each axis. Each axis term consists of all the pairs of vertex coordinates that don’t include that axis:

$V_x = y_2z_3-y_1z_3+y_1z_2-y_3z_2+y_3z_1-y_2z_1$
$V_y = x_1z_3-x_2z_3+x_3z_2-x_1z_2+x_2z_1-x_3z_1$
$V_z = x_1y_2-x_2y_1+x_3y_1-x_1y_3+x_2y_3-x_3y_2$

Just like our regular volume, we can just add these up for each triangle in our shape. Our final volume calculation becomes:

$V_t(x_t,y_t,z_t) = V+V_xx_t+V_yy_t+V_zz_t$

This is great not only because it allows us to calculate the volume without going through every triangle of every copy, but also because we don’t even have to know how our shapes are oriented! This leads to some strange facts: if I randomly rotate each of my shapes but keep them in the same spots, the total volume stays the same.
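The general volume above can be sketched in JavaScript. This is an illustrative implementation of the formulas, not our app code; the names are made up:

```javascript
// "General volume" of a shape: its signed volume V plus the per-axis
// terms Vx, Vy, Vz from the text. An instance translated by (tx,ty,tz)
// then has volume V + Vx*tx + Vy*ty + Vz*tz, with no per-triangle work.
function generalVolume(positions, indices) {
    var V = 0, Vx = 0, Vy = 0, Vz = 0;
    for (var t = 0; t < indices.length; t += 3) {
        var a = 3 * indices[t], b = 3 * indices[t + 1], c = 3 * indices[t + 2];
        var x1 = positions[a], y1 = positions[a + 1], z1 = positions[a + 2];
        var x2 = positions[b], y2 = positions[b + 1], z2 = positions[b + 2];
        var x3 = positions[c], y3 = positions[c + 1], z3 = positions[c + 2];
        V  += (x1*y2*z3 - x2*y1*z3 + x3*y1*z2 - x1*y3*z2 + x2*y3*z1 - x3*y2*z1) / 6;
        Vx += (y2*z3 - y1*z3 + y1*z2 - y3*z2 + y3*z1 - y2*z1) / 6;
        Vy += (x1*z3 - x2*z3 + x3*z2 - x1*z2 + x2*z1 - x3*z1) / 6;
        Vz += (x1*y2 - x2*y1 + x3*y1 - x1*y3 + x2*y3 - x3*y2) / 6;
    }
    return { V: V, Vx: Vx, Vy: Vy, Vz: Vz };
}

// Volume of one instance of the shape translated by (tx, ty, tz).
function translatedVolume(g, tx, ty, tz) {
    return g.V + g.Vx * tx + g.Vy * ty + g.Vz * tz;
}
```

Note that for a closed mesh Vx, Vy and Vz all come out to zero, which is just another way of saying that translating a watertight object doesn’t change its volume.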

# Data structures: Pairing Heap

Posted: March 28th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: computation, data structures | 1 Comment »

Recently, I came across a neat data structure called a pairing heap. Now, if you’re not familiar with the term heap it can sound confusing. That’s because it doesn’t mean much. A heap is basically a set of objects with some kind of ordering that we don’t have an intuitive word for. Discard any notions of a heap as a pile of stuff. Specifically, it is a hierarchical tree (or forest) where each parent has a certain relationship to its children (like being less than them).

Pairing heaps are used for an efficient implementation of a priority queue. A priority queue keeps track of the minimum of a set of objects, so every time you take something off the queue it is always the lowest value. Priority queues are most notably used when implementing Dijkstra’s algorithm to find the shortest path in a graph. A lot of useful algorithms are based on Dijkstra’s algorithm, including one to compute geodesic distances on meshes which is what we are using pairing heaps for.

Pairing heaps are neat because they are simple to implement and perform well in real applications. Specifically, they perform very well in amortized time, meaning that while an individual operation may take a long time, the aggregate of all operations over the entire life cycle of the queue is fast. They have similar properties to the better-known Fibonacci heap, but they are easier to code and often perform better.

Pairing heaps have very simple properties. Each heap is associated with an object or value. Each heap also has a set of child heaps. The value of the object is always less than (or greater than) that of its child heaps. That’s it.

The heap has a few basic operations:

min(heap) – Get the smallest value. Easy. Just look at the value at the top of the heap.

merge(heap1, heap2) – Combine two heaps. Add the heap with the greater value to the children of the other heap. Also fast.

insert(heap, value) – Add a new value. Merge the heap with a new heap only containing the new value.

removeMin(heap) – Remove the smallest object. A little more complicated. Recursively merge all the child heaps in pairs. The result is your new heap.

decreaseKey(heap, subheap, new_value) – Decrease the value of an object in the heap. Perform removeMin on the subheap of the value you are replacing, then insert the new value into the top of the heap.

Here’s a gist of an implementation of a pairing heap in JavaScript.
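For readers without the gist handy, here is a minimal sketch of the core operations described above for a min-heap (illustrative names; decreaseKey is omitted, and removeMin uses the standard two-pass pairing order):

```javascript
// A pairing heap node: a value plus a list of child heaps.
// Invariant: a node's value is <= every value in its children.
function PairingHeap(value) {
    this.value = value;
    this.children = [];
}

// Combine two heaps: the root with the greater value becomes
// a child of the other root.
function merge(h1, h2) {
    if (h1 === null) return h2;
    if (h2 === null) return h1;
    if (h1.value <= h2.value) { h1.children.push(h2); return h1; }
    h2.children.push(h1);
    return h2;
}

// Insert by merging with a new single-node heap.
function insert(heap, value) {
    return merge(heap, new PairingHeap(value));
}

// Remove the minimum (the root): merge the children in pairs left to
// right, then merge the resulting pairs together right to left.
function removeMin(heap) {
    var kids = heap.children;
    if (kids.length === 0) return null;
    var pairs = [];
    for (var i = 0; i + 1 < kids.length; i += 2) {
        pairs.push(merge(kids[i], kids[i + 1]));
    }
    if (kids.length % 2 === 1) pairs.push(kids[kids.length - 1]);
    var result = pairs[pairs.length - 1];
    for (var j = pairs.length - 2; j >= 0; --j) {
        result = merge(pairs[j], result);
    }
    return result;
}
```

Repeatedly reading the root and calling removeMin pops values in ascending order, which is all a priority queue for Dijkstra’s algorithm needs.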

# OBJExport library – export color meshes from Processing

Posted: March 13th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: software | Tags: , | 1 Comment »

We’ve just released a new Processing 2.0 library for exporting OBJ and X3D mesh files. And it supports color meshes! Now you can export color models for 3D printing with the same commands you use to draw. Get started here: OBJExport library page.

The library works like any PGraphics, such as the PDF library. Simply call

createGraphics(width, height, "nervoussystem.obj.OBJExport", filename)

to get the PGraphics for obj export and use regular Processing drawing commands.

I’ve also started to use GitHub to manage the code for this and other projects. You can find the GitHub page for the library here. This page can be used to peruse the source, fork the project, or report bugs.

This library is a re-release of an old library I made years ago. I had sort of forgotten about it until I got an email last week from Michael Zoellner of the Interactive Design program at Hof University. Apparently, people are still using it! He wanted to update the library to be compatible with the new Processing release.

At the same time, we’ve started to do some work with color 3D printing. However, to get printable models we had to go through some tedious processing in MeshLab. So I took this opportunity to overhaul the library, adding new features like color exporting and fixing some old bugs.

We’re excited to see what the community does with the library. We’re using it to make color 3D prints; what are you going to do? Try it out and send any feedback to jesse@n-e-r-v-o-u-s.com.

# Surface meshing pt 2 – Surface remeshing

Posted: February 4th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: Uncategorized | No Comments »

The next iteration of the surface meshing algorithm from the previous post is surface remeshing. Surface remeshing is the process of taking an existing mesh and generating a new mesh of the same surface. This can be useful in many scenarios. For example, say we’re using a 3D modeling program to design a surface that we want to use in a simulation. Unfortunately, the mesh that the modeling software creates isn’t designed for simulation. It might have very big triangles, triangles with distorted shapes, way too many triangles, or other problems. You know, triangle problems. Surface remeshing allows us to generate a new surface of roughly the same shape but with a uniform density of points and consistent elements.

The procedure is fairly straightforward:

1. Begin with an existing mesh.
2. Generate evenly spaced points on that surface. We do this by generating random points on the surface and discarding any that lie within a certain distance of a point that has already been added. We accelerate this process with a simple spatial partitioning data structure.
3. Run the Ball Pivoting Algorithm on that set of points. Once generated, the points have no connection to the original surface.
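Step 2 can be sketched as simple dart throwing: scatter random candidates and reject any that land too close to an accepted point, with a uniform grid so each candidate only checks its neighborhood. This is a simplified 2D version over a rectangle (the real version samples on the mesh surface and uses a 3D grid); the names and parameters are illustrative:

```javascript
// Generate roughly evenly spaced points by rejection sampling.
// A grid with cells of size `radius` means a candidate only needs to
// be checked against points in the surrounding 3x3 cells.
function samplePoints(width, height, radius, maxTries) {
    var gw = Math.ceil(width / radius), gh = Math.ceil(height / radius);
    var grid = new Array(gw * gh);
    for (var i = 0; i < grid.length; ++i) grid[i] = [];
    var points = [];
    var tries = 0;
    while (tries < maxTries) { // stop after many consecutive rejections
        var x = Math.random() * width, y = Math.random() * height;
        var xi = Math.floor(x / radius), yi = Math.floor(y / radius);
        var hit = false;
        for (var gx = Math.max(0, xi - 1); gx <= Math.min(gw - 1, xi + 1); ++gx) {
            for (var gy = Math.max(0, yi - 1); gy <= Math.min(gh - 1, yi + 1); ++gy) {
                var bucket = grid[gy * gw + gx];
                for (var k = 0; k < bucket.length; ++k) {
                    var dx = bucket[k][0] - x, dy = bucket[k][1] - y;
                    if (dx * dx + dy * dy < radius * radius) hit = true;
                }
            }
        }
        if (hit) {
            tries++;
        } else {
            tries = 0; // reset the failure counter on every success
            var p = [x, y];
            points.push(p);
            grid[yi * gw + xi].push(p);
        }
    }
    return points;
}
```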

You can play with an initial implementation here. Warning: this is very buggy and likely to crash, generate meshes with holes, or create fairly ugly meshes, depending on the settings.

# Surface meshing in the browser

Posted: January 30th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: software | 1 Comment »

The demo’s fragment shader:

```glsl
precision mediump float;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
uniform vec3 uMaterialDiffuseColor;
void main(void) {
    vec3 materialDiffuseColor = uMaterialDiffuseColor;
    vec3 ambientLightWeighting = vec3(.2,.2,.2);
    vec3 directionalColor = vec3(.8,.8,.8);
    vec3 normal = normalize(vTransformedNormal);
    vec3 uLightingDirection = vec3(-0.57,-0.57,-0.57);
    float diffuseLightBrightness = abs(dot(normal,uLightingDirection));
    vec3 diffuseLightWeighting = diffuseLightBrightness*directionalColor;
    vec3 r = -reflect(uLightingDirection, normal);
    r = normalize(r);
    vec3 v = -vPosition.xyz;
    v = normalize(v);
    vec4 spec = vec4(.5,.5,.5,1.0)*pow(max(0.0,dot(r, v)), 16.0);
    gl_FragColor = vec4(ambientLightWeighting * materialDiffuseColor
                      + materialDiffuseColor * diffuseLightWeighting,
                      1.0) + spec;
    // gl_FragColor = vec4(0,0,0,1.0);
}
```

The vertex shader:

```glsl
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
uniform vec3 uAmbientLightingColor;
uniform vec3 uDirectionalDiffuseColor;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
void main(void) {
    gl_PointSize = 3.0;
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
    vTransformedNormal = aVertexNormal;
}
```

var canvas;
var mouseX,mouseY,pmouseX,pmouseY,isMouseDown,mouseDragging;
var ctx;
var paused = false;
function init() {
canvas = document.getElementById("test");
depth = canvas.width/8.0;
//ctx = canvas.getContext("2d");
initGL(canvas);
initBuffers();
NScamera.distance = 500;
vec3.set(NScamera.center, canvas.width/2, canvas.height/2,20);
initPts();
initialize();
step();
gl.enable(gl.DEPTH_TEST);
}
var ptGrid;
var density = 10;
var depth = 0;
function initPts() {
pts = new Array();
var weightsX = [Math.random()*80+20,Math.random()*70+15,Math.random()*65+5,Math.random()*60+5,Math.random()*45+5,Math.random()*25+5];
var weightsY = [Math.random()*80+20,Math.random()*70+15,Math.random()*65+5,Math.random()*60+5,Math.random()*45+5,Math.random()*25+5];
var totalWeight = 0;
var len = weightsX.length;
for(var i=0;i< len;++i) {
totalWeight += weightsX[i] + weightsY[i];
}
vec3.set(NScamera.center, canvas.width/2, canvas.height/2,totalWeight/2.0);
var gw = Math.ceil(canvas.width/density);
var gh = Math.ceil(canvas.height/density);
var gd = Math.ceil(totalWeight/density);
var gwh = gw*gh;
ptGrid = new Array(gw*gh*gd);
for(var i=0;i< ptGrid.length ;++i) ptGrid[i] = new Array();
var tries = 0;
var x = 0, y = 0;
var p;
var dx = vec3.create();
var dy = vec3.create();
while(tries < 2000) {
x = Math.random()*canvas.width;
y = Math.random()*canvas.width;
if((x-canvas.width/2.0)*(x-canvas.width/2.0)+(y-canvas.height/2.0)*(y-canvas.height/2.0) < 250*250) {
z = 0;
for(var i=0;i< len ;++i) {
z += (Math.cos(x/canvas.width*Math.PI*(2+i*1.5))+1)*0.5*weightsX[i];
z += (Math.sin(y/canvas.height*Math.PI*(2+i*1.5))+1)*0.5*weightsY[i];
}
var xi=Math.floor(x/density);
var yi=Math.floor(y/density);
var zi=Math.floor(z/density);
var hit = false;
for(var i=Math.max(0,xi-1);i<=Math.min(gw-1,xi+1);++i) {
for(var j=Math.max(0,yi-1);j<=Math.min(gh-1,yi+1);++j) {
for(var k=Math.max(0,zi-1);k<=Math.min(gd-1,zi+1);++k) {
for(var q=0;q < ptGrid[k*gwh+j*gw+i].length;++q) {
p = ptGrid[k*gwh+j*gw+i][q];
if((p.pt[0]-x)*(p.pt[0]-x)+(p.pt[1]-y)*(p.pt[1]-y) < density*density) hit = true;
}
}
}
}
if(!hit) {
tries = 0;
var pt = new Vertex();
vec3.set(pt.pt,x,y,z);//(x*x+y*y)/10000.0
ptPos[pts.length*3] = pt.pt[0];
ptPos[pts.length*3+1] = pt.pt[1];
ptPos[pts.length*3+2] = pt.pt[2];
ptPosBuff.numItems++;
pts.push(pt);
ptGrid[zi*gwh+yi*gw+xi].push(pt);
}
tries++;
}
}
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, ptPos, gl.STATIC_DRAW);
}
var pivotEdge = null;
function keyPress() {
reset();
}
function reset() {
ptPosBuff.numItems = 0;
initPts();
initialize();
vertexPosBuff.numItems = 0;
}
var normalMatrix = mat3.create();
function draw() {
NScamera.step();
NScamera.velocityY = (.005+NScamera.velocityY*3)/4.0;
NScamera.velocityX = (.003+NScamera.velocityX*3)/4.0;
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.perspective(pMatrix,45, gl.viewportWidth / gl.viewportHeight, 0.1, 10000.0);
mat4.identity(mvMatrix);
NScamera.feed(mvMatrix);
mat4.toInverseMat3(mvMatrix,normalMatrix);
mat3.transpose(normalMatrix,normalMatrix);
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, ptPosBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, vertexNormBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.POINTS,0,ptPosBuff.numItems);
var norm = vec3.create();
for(var i=vertexPosBuff.numItems/3;i < triangles.length;++i) {
planeNormal(norm,triangles[i][0].pt,triangles[i][1].pt,triangles[i][2].pt);
vec3.normalize(norm,norm);
for(var j=0;j < 3;++j) {
vertexPos[vertexPosBuff.numItems*3] = triangles[i][j].pt[0];
vertexPos[vertexPosBuff.numItems*3+1] = triangles[i][j].pt[1];
vertexPos[vertexPosBuff.numItems*3+2] = triangles[i][j].pt[2];
vertexNorm[vertexPosBuff.numItems*3] = norm[0];
vertexNorm[vertexPosBuff.numItems*3+1] = norm[1];
vertexNorm[vertexPosBuff.numItems*3+2] = norm[2];
vertexPosBuff.numItems++;
}
}
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexPos, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, vertexPosBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexNorm, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, vertexNormBuff.itemSize, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES,0,vertexPosBuff.numItems);
}
(function() {
var requestAnimationFrame = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame;
window.requestAnimationFrame = requestAnimationFrame;
}) ();
function step() {
window.requestAnimationFrame(step);
//just one boundary for now
if(!paused) {
for(var i=0;i < boundaries.length;++i) {
var bound = boundaries[i].e;
//advance past inactive edges, stopping if we wrap back around to the start
var curr = bound.next;
while(curr != bound && curr.active == 0) {
curr = curr.next;
}
if(curr.active == 1) {
pivot(curr);
pivotEdge = curr;
}
}
}
draw();
}
var vertexPos;
var vertexPosLen = 0;
var vertexNorm;
var ptPos;
var ptPosBuff;
var vertexPosBuff;
var vertexNormBuff;
function initBuffers() {
ptPos = new Float32Array(3*10000);
ptPosBuff = gl.createBuffer();
vertexPos = new Float32Array(3*3*10000);
vertexPosBuff = gl.createBuffer();
vertexNorm = new Float32Array(3*3*10000);
vertexNormBuff = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexPos, gl.DYNAMIC_DRAW);
vertexPosBuff.itemSize = 3;
vertexPosBuff.numItems = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormBuff);
gl.bufferData(gl.ARRAY_BUFFER, vertexNorm, gl.DYNAMIC_DRAW);
vertexNormBuff.itemSize = 3;
vertexNormBuff.numItems = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, ptPosBuff);
gl.bufferData(gl.ARRAY_BUFFER, ptPos, gl.DYNAMIC_DRAW);
ptPosBuff.itemSize = 3;
ptPosBuff.numItems = 0;
}
}
}
function mouseDraggedMan(event) {
NScamera.mouseDragged(mouseX-pmouseX,-(mouseY-pmouseY), mouseX, mouseY,mouseButton);
}
function mouseUpMan(event) {
isMouseDown = false;
}
function mouseMoveMan(event) {
pmouseX = mouseX;
pmouseY = mouseY;
mouseX = event.clientX;
mouseY = event.clientY;
if(isMouseDown) {
mouseDragging = true;
mouseDraggedMan(event);
}
}
function mouseDownMan(event) {
isMouseDown = true;
mouseDragging = false;
mouseButton = event.which;
}
mat4.toInverseMat3=function(a,b){var c=a[0],d=a[1],e=a[2],g=a[4],f=a[5],h=a[6],i=a[8],j=a[9],k=a[10],l=k*f-h*j,o=-k*g+h*i,m=j*g-f*i,n=c*l+d*o+e*m;if(!n)return null;n=1/n;b||(b=mat3.create());b[0]=l*n;b[1]=(-k*d+e*j)*n;b[2]=(h*d-e*f)*n;b[3]=o*n;b[4]=(k*c-e*i)*n;b[5]=(-h*c+e*g)*n;b[6]=m*n;b[7]=(-j*c+d*i)*n;b[8]=(f*c-d*g)*n;return b};

init();



A common problem we come across is needing to make meshes of complex surfaces. This can be done with many different approaches, but regardless of the method, it is often quite difficult. Especially when doing simulations on surfaces, you need a well-behaved triangle mesh: one with a controllable density of elements and nicely shaped triangles. One way to get there is to generate a set of points on the surface and then triangulate them. But triangulating surfaces embedded in 3-space is no simple task. Recently, I came across an interesting approach to this problem called the Ball-Pivoting Algorithm, which works especially well with generated point clouds. I implemented it in javascript (you should see it running above).

Let’s start with the basic idea of triangulation in 2D. A triangulation of a set of points is just a set of triangles that covers all of those points and whose triangles don’t overlap.

Now not all triangulations are created equal. We want to make triangulations whose triangles are nicely shaped, as close to equilateral as possible. This leads to better numerical stability and convergence in simulations.

This leads to something called the Delaunay triangulation. This is a really neat triangulation, defined such that the circumcircle of any triangle in the triangulation contains no other points. The circumcircle of a triangle is the unique circle passing through all three of its vertices. One thing that’s not entirely obvious is that this triangulation always exists. And what’s great about this triangulation is that it maximizes the minimum angle over all the triangles. This means it gives us the best shaped triangles we can get with the points we have.
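As a concrete aside, the empty-circumcircle condition is usually checked with the classic inCircle determinant test. A minimal sketch, using naive floating-point arithmetic (robust implementations use exact predicates):

```javascript
// Sign-of-determinant test: returns true when p lies strictly inside the
// circumcircle of the counterclockwise triangle (a,b,c).
function inCircle(a, b, c, p) {
  var ax = a[0]-p[0], ay = a[1]-p[1];
  var bx = b[0]-p[0], by = b[1]-p[1];
  var cx = c[0]-p[0], cy = c[1]-p[1];
  return (ax*ax + ay*ay) * (bx*cy - cx*by)
       - (bx*bx + by*by) * (ax*cy - cx*ay)
       + (cx*cx + cy*cy) * (ax*by - bx*ay) > 0;
}
```

A Delaunay triangulation is exactly one where this test fails for every triangle against every other point.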

The problem comes when we make the jump from 2D to 3D. We can generalize the idea of the Delaunay triangulation to 3D, giving us tetrahedra instead of triangles and circumspheres instead of circumcircles. However, what we want is a surface of triangles embedded in 3-space, not a 3-dimensional solid. One way to get around this is called alpha shapes: a subset of the 3D Delaunay triangulation that only includes faces of a certain size. Alpha shapes can create good surface triangulations; however, they require computing the full 3D Delaunay triangulation, which is complex in both computation and implementation, as well as numerically unstable.

So in comes the Ball-Pivoting Algorithm. This algorithm is nice because it works in linear time for points of a constant density, and it is exceptionally cute conceptually. The idea is you have a ball of a certain radius sitting in a triangle. This ball rolls over an edge of the triangle and keeps rolling until it hits another point. Then, that edge and the point it hits make a new triangle which is added to the triangulation, and the ball rolls over a new edge. If you start with a good triangle, such that the ball contains no other points, then the new triangle the ball rolls onto won’t contain any other points either, because any such point would have been hit beforehand. This maintains the Delaunay property! In fact, the ball-pivoting algorithm produces a subset of an alpha shape.
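The geometric primitive behind this can be sketched quite compactly. The following is a hypothetical standalone version, not the code above: the center of a ball of radius r resting on triangle (a,b,c) is the circumcenter lifted along the triangle normal by sqrt(r² − R²), where R is the circumradius.

```javascript
// Small vector helpers for 3-element arrays
function sub(u, v) { return [u[0]-v[0], u[1]-v[1], u[2]-v[2]]; }
function add(u, v) { return [u[0]+v[0], u[1]+v[1], u[2]+v[2]]; }
function scale(u, s) { return [u[0]*s, u[1]*s, u[2]*s]; }
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
function cross(u, v) {
  return [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]];
}

function ballCenter(a, b, c, r) {
  var ab = sub(b, a), ac = sub(c, a);
  var n = cross(ab, ac);                 // triangle normal (unnormalized)
  var n2 = dot(n, n);
  // circumcenter via the standard vector formula
  var cc = add(a, scale(add(scale(cross(n, ab), dot(ac, ac)),
                            scale(cross(ac, n), dot(ab, ab))), 1/(2*n2)));
  var R2 = dot(sub(cc, a), sub(cc, a));  // squared circumradius
  if (r*r < R2) return null;             // ball too small: it falls through
  var h = Math.sqrt(r*r - R2);           // height of the center above the plane
  return add(cc, scale(n, h/Math.sqrt(n2)));
}
```

Pivoting around an edge then amounts to sweeping this center around the edge and keeping the first point whose distance-r sphere it touches.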

There are a few caveats. If the curvature or density of the surface points is too high, a larger ball won’t be able to fit into the necessary crevices and will miss some points. If the ball is too small, it can fall through large triangles and leave holes. While these issues can be worked around, the ball-pivoting algorithm works best for surfaces with a consistent density of points.

Feel free to look at the code inside, but it is not optimized and not pretty. This was thrown together in 2-3 days, so it definitely lacks polish.

# Puzzle pattern repair and optimization

Posted: January 23rd, 2013 | Author: Jesse Louis-Rosenberg | Filed under: puzzles, software | 1 Comment »

This week I revamped the automatic post-processing of puzzle files. Previously, there was a lot of manual work that had to be done to the files before they were ready to be laser cut. Now, I’ve reduced a lot of that labor and also made it so that the puzzle pieces are generated with much more consistent details and nubs.

The problems

There are a number of problems that have to be found and fixed by hand, many of which occur around whimsies. Some pieces are generated too small to be usable. Other times, pieces get merged during the simulation: two pieces that share the same phase grow into each other. This can happen because the simulation employs an optimization of only having 5 phases rather than giving each piece its own phase. And some areas are simply too thin.

Fixing overview

Fixing occurs in two stages. The first operates in pixel space, before the boundaries have been extracted; this part hasn’t changed. The second stage works in vector space, once everything is represented as polylines. To get from the first stage to the second, we use a modified marching squares algorithm.

The difficulty

What makes the fixing tricky is that the conditions that define what needs to be fixed, and how, are highly non-local. I cannot simply say that nothing can be under X thickness. The more material a connection holds, the thicker it needs to be. Thin areas between two merged pieces need to be split, not thickened. Even thickness is a bit tricky to calculate. Additionally, there can be conflicts where thickening one area makes another too thin.

Marching squares

To explain the fixing procedure in detail, I’ll start at the beginning with the creation of the vector representation. The pixel representation is turned into vectors with a generalized marching squares algorithm that allows for an arbitrary number of states. During this process each point only knows which points it is connected to. There is no notion of pieces having any continuity.

Computing thickness

To compute the thickness at each point, I find the distance to the nearest non-degenerate point. I don’t bother with distance to a line segment because the curves are so dense. By non-degenerate, I mean excluding points that lie directly alongside the point in question: for each point I compute a local neighborhood that extends out a certain distance, using a depth-first search of its neighbors, and the points in this neighborhood are considered degenerate.
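A sketch of the idea with hypothetical names (the real code runs on the marching-squares point graph): grow the degenerate neighborhood with a depth-first search, then take the nearest point outside it.

```javascript
// points is an array of [x,y] positions; adj[i] lists the indices connected
// to point i in the extracted boundary graph.
function dist(p, q) {
  return Math.sqrt((p[0]-q[0])*(p[0]-q[0]) + (p[1]-q[1])*(p[1]-q[1]));
}

// Depth-first search outward from point i, collecting every connected point
// within `radius`; these are the degenerate neighbors to ignore.
function localNeighborhood(points, adj, i, radius) {
  var seen = {};
  seen[i] = true;
  var stack = [i];
  while (stack.length > 0) {
    var v = stack.pop();
    for (var k = 0; k < adj[v].length; ++k) {
      var w = adj[v][k];
      if (!seen[w] && dist(points[w], points[i]) < radius) {
        seen[w] = true;
        stack.push(w);
      }
    }
  }
  return seen;
}

// Thickness at point i: distance to the nearest non-degenerate point.
function thickness(points, adj, i, radius) {
  var degenerate = localNeighborhood(points, adj, i, radius);
  var best = Infinity;
  for (var j = 0; j < points.length; ++j) {
    if (!degenerate[j]) best = Math.min(best, dist(points[i], points[j]));
  }
  return best;
}
```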

Minimum thickness computation

To compute how thick a connection needs to be, we need to know how much material it supports. This is estimated by the linear distance along the piece from one end of the connection to the other, rather than trying to compute the actual area. We could compute this distance with a breadth-first search, but that would be expensive: O(n^2) overall.

Instead, we note that the two nearest points must lie within the same piece. So if we extract each puzzle piece and label each point in the piece with the cumulative distance along it, starting at an arbitrary point, we can get the distance between any two points with a simple constant-time calculation.
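Sketched in javascript with hypothetical names: label once per closed piece, then answer distance queries in constant time by taking the shorter way around the loop.

```javascript
function dist(p, q) {
  return Math.sqrt((p[0]-q[0])*(p[0]-q[0]) + (p[1]-q[1])*(p[1]-q[1]));
}

// Label each point of a closed piece (an ordered loop of [x,y] points)
// with its cumulative distance from point 0.
function labelPiece(points) {
  var labels = [0];
  var d = 0;
  for (var i = 1; i < points.length; ++i) {
    d += dist(points[i-1], points[i]);
    labels.push(d);
  }
  // close the loop to get the full perimeter
  return { labels: labels, perimeter: d + dist(points[points.length-1], points[0]) };
}

// O(1) distance along the piece between points i and j: the shorter of
// going forward or backward around the loop.
function alongDistance(piece, i, j) {
  var d = Math.abs(piece.labels[i] - piece.labels[j]);
  return Math.min(d, piece.perimeter - d);
}
```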

Extracting the pieces themselves isn’t entirely trivial; it equates to finding all minimum cycles in a graph. However, we have a clue that makes this a bit easier. Each point knows the colors of the pixels that it was generated between, and we can tag it with this information. Because of this, if you trace along an arbitrary point’s neighbors that all share the same colors, you get a piece. This yields a simple linear-time procedure for extracting all pieces.

Thickening

Thin regions are thickened by simply moving a point away from its nearest point until it reaches the minimum distance calculated from the material supported. All points in the local neighborhood of a moved point are then marked.

It should be noted that special consideration needs to be paid to boundary pieces, which need to treat the edge of the puzzle slightly differently.

Selective smoothing

All points that have been marked by the thickening process are then smoothed with a Laplacian smoothing procedure. This alleviates any strange geometry that might arise from movement.
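A single pass of the idea might look like this (hypothetical names; `lambda` is an assumed step size between 0 and 1): each marked point of a closed polyline steps toward the midpoint of its two neighbors.

```javascript
// One Laplacian smoothing pass over a closed polyline of [x,y] points.
// Only marked points move; unmarked points are copied through untouched.
function laplacianSmooth(points, marked, lambda) {
  var n = points.length;
  var out = points.map(function (p) { return p.slice(); });
  for (var i = 0; i < n; ++i) {
    if (!marked[i]) continue;
    var prev = points[(i+n-1)%n], next = points[(i+1)%n];
    // move toward the midpoint of the two neighbors
    out[i][0] += lambda * ((prev[0]+next[0])/2 - points[i][0]);
    out[i][1] += lambda * ((prev[1]+next[1])/2 - points[i][1]);
  }
  return out;
}
```

Running a few such passes only on the marked points relaxes the geometry without disturbing the rest of the piece.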

These processes of marking, thickening, and smoothing are repeated in a relaxation procedure that resolves conflicting situations where thickening one area overly thins another.

Small and merged pieces

All of these procedures only deal with thin regions; so far, there are no automatic fixes for small and merged pieces. Instead, these are marked on export for manual fixing. Merged pieces are identified heuristically, simply by their size, which can lead to false positives, but those are easily distinguished by a manual operator. With the pieces already extracted, marking small or large pieces is trivial.

Results

Here are the resulting files. The two marked “extract and mark” are identical and show the output of marking the pieces before they are fixed. Click on them for a larger view.

# Nervous System weekly roundup 12/31/12-01/6/13

Posted: January 9th, 2013 | Author: Jesse Louis-Rosenberg | Filed under: work in progress | Tags: | No Comments »

We’re starting a new tradition at Nervous System of sharing the work we do each week. Although the timing may be suspect, this is not a New Year’s resolution, just something we happened to think of while on holiday. Here’s a summary of week 1.

Jesse has begun working on a new design system based on models of differential growth. Currently, it’s just at the phase of initial implementation and conceptualization.

Nathan added a search box on our website with autocomplete, so you can more easily find what you are looking for. Or you can look for everything that comes in black and is 3D-printed.

Jessica is playing with our old barnacle sketch in new ways. She’s looking into making some neat organic animations and colored 3D prints.

Aaron has created a new email address for organizing all the supplies and manufacturers we work with.

Posted: November 5th, 2012 | Author: Jesse Louis-Rosenberg | Filed under: software, work in progress | 3 Comments »

For a new app we’re working on, we finally came upon the task of loading external 3D geometry. I wanted to store and load meshes with as little bandwidth and processing as possible. I’m sure this has been done before, but here is how I approached it.

Rather than loading a mesh in a common file format (e.g. STL or OBJ), processing it, and putting it into arrays that I could send to a gl buffer, I wanted to load binary data that could go directly to the GPU. So I created a binary representation of a mesh that exactly mirrors the data I would send to an array buffer. It is a list of 32-bit floats representing the vertex data (6 per vertex: position and normal) followed by a list of 16-bit integers representing triangle indices. Obviously, this is not a very general file format, but it is exactly the data I need for the shaders I was using. The only additional data is two integers at the start of the file giving the number of vertices and faces.

This data can be directly fetched with an http request as an ArrayBuffer object. This ArrayBuffer can be accessed by creating a Float32Array for the vertex data and Uint16Array for the index data. I don’t even have to allocate new storage. Both the vertex and index arrays use the same ArrayBuffer, just with different offsets.
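To make the layout concrete, here is a tiny hypothetical example (one triangle's worth of data) of typed-array views sharing a single ArrayBuffer, using the same offsets as the format described above: an 8-byte header, 24 bytes per vertex, 6 bytes per face.

```javascript
// One-triangle instance of the layout: header (2 x int32), vertex block
// (6 floats per vertex), index block (3 uint16 per face), all as views
// onto the same ArrayBuffer at different byte offsets.
var numVertices = 3, numFaces = 1;
var buffer = new ArrayBuffer(8 + numVertices*24 + numFaces*6);
new Int32Array(buffer, 0, 2).set([numVertices, numFaces]); // the header
var verts = new Float32Array(buffer, 8, numVertices*6);    // position + normal
var tris = new Uint16Array(buffer, 8 + numVertices*24, numFaces*3);
tris.set([0, 1, 2]); // one triangle referencing the three vertices
```

Both views can be handed to `gl.bufferData` directly, with no copying.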

This is surprisingly simple, and the hard work really comes in encoding the mesh data to begin with. That was in fact extra tricky because I had to deal with endianness issues. Endianness refers to the order in which the bytes of a number are read: little-endian means the least significant byte comes first, and big-endian means the most significant byte comes first. When you create a TypedArray from an ArrayBuffer, it just sees a list of bytes and doesn’t know what byte order they are in. Javascript assumes they match the underlying system architecture, which can vary. It turns out the majority of common systems (x86, x86-64, iOS) are little-endian, so I generally feel I can ignore big-endian systems and just make my files little-endian. However, Java, with which I was generating the meshes, writes big-endian numbers by default. This caused me much grief for a few hours: everything was working fine except that my numbers were mysteriously gibberish.
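One way to sidestep the byte-order question entirely (an alternative to fixing it on the encoding side, at the cost of element-by-element reads): DataView accessors take an explicit littleEndian flag, unlike typed arrays, which assume the platform's byte order.

```javascript
// DataView reads and writes state the byte order explicitly
// (true = little-endian), so the file format can declare it once
// instead of hoping the platform matches.
var buf = new ArrayBuffer(4);
var view = new DataView(buf);
view.setFloat32(0, 1.5, true);     // write the float little-endian
var x = view.getFloat32(0, true);  // read it back little-endian: 1.5
```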

Below you can find the code I used for encoding and decoding the meshes. The AJAX portion is largely copied from an online example.


precision highp float;

varying vec4 vPosition;
varying vec3 vNormal;

void main(void) {
vec3 normal = vNormal;

vec3 lightDir = normalize(vec3(1,1,1));
float brightness = (dot(lightDir,normal)+1.0)/2.0; //-1,1.5
gl_FragColor = vec4(brightness,brightness, brightness,1);
}

attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;

varying vec4 vPosition;
varying vec3 vNormal;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);//uMVMatrix *
vNormal = uNMatrix * normalize(aVertexNormal);
}

var loadvboExample = (function () {
var gl;
var vbo = {};
var rot = 0;
var mvMatrix = mat4.create();
var tempMatrix2 = mat4.create();
var tempMatrix = mat4.create();
var pMatrix = mat4.create();
var nMatrix = mat3.create();

window.requestAnimFrame = (function(){
return  window.requestAnimationFrame       ||
window.webkitRequestAnimationFrame ||
window.mozRequestAnimationFrame    ||
window.oRequestAnimationFrame      ||
window.msRequestAnimationFrame     ||
function( callback ){
window.setTimeout(callback, 1000 / 60);
};
})();

var init = function() {
if(gl) {
gl.enable(gl.DEPTH_TEST);

}

anim();
}
};

function anim() {
requestAnimFrame(anim);
drawGL();
}

mat4.toInverseMat3=function(a,b){var c=a[0],d=a[1],e=a[2],g=a[4],f=a[5],h=a[6],i=a[8],j=a[9],k=a[10],l=k*f-h*j,o=-k*g+h*i,m=j*g-f*i,n=c*l+d*o+e*m;if(!n)return null;n=1/n;b||(b=mat3.create());b[0]=l*n;b[1]=(-k*d+e*j)*n;b[2]=(h*d-e*f)*n;b[3]=o*n;b[4]=(k*c-e*i)*n;b[5]=(-h*c+e*g)*n;b[6]=m*n;b[7]=(-j*c+d*i)*n;b[8]=(f*c-d*g)*n;return b};

function drawGL() {
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
rot += Math.PI/300.0;
mat4.perspective(pMatrix, 45*Math.PI/180, gl.viewportWidth / gl.viewportHeight, 0.1, 300.0); //fovy in radians
mat4.identity(mvMatrix);
mat4.rotateY(mvMatrix, mvMatrix,rot);

mat4.multiply(tempMatrix,mat4.lookAt(tempMatrix2,[0,0,150],[0,0,0],[0,-1,0]),mvMatrix);
var temp = mvMatrix;
mvMatrix = tempMatrix;
tempMatrix = temp;
mat4.toInverseMat3(mvMatrix, nMatrix);
mat3.transpose(nMatrix,nMatrix);

gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vbo.indexBuffer);
gl.drawElements(gl.TRIANGLES,vbo.numFaces*3, gl.UNSIGNED_SHORT,0);

}

function initGL(canvas) {
try {
gl = canvas.getContext("experimental-webgl"); //preserve drawing buffer
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
} catch (e) {
}
if (!gl) {
canvas.style.display = "none";
}
}

return null;
}

var str = "";
while (k) {
if (k.nodeType == 3) {
str += k.textContent;
}
k = k.nextSibling;
}

} else {
return null;
}

return null;
}

}
return init;
}

)();



Processing code for encoding an obj in binary

String folder = "SOMEFOLDER";
String file = "filename.obj";

ArrayList<PVector> srf = new ArrayList<PVector>();
ArrayList<PVector> norm = new ArrayList<PVector>();
ArrayList<int[]> faces = new ArrayList<int[]>();

void setup() {
noLoop();
}

void draw() {
writeSrf(folder+file.substring(0,file.length()-3)+"vbo");
exit();
}

srf.clear();
norm.clear();
faces.clear();
for(int i=0;i<lines.length;++i) {
if(lines[i].length() > 2) {
//vertices
if (lines[i].substring(0,2).equals("v ")) {
float info[] = float(split(lines[i], " "));
srf.add(new PVector(info[1], info[2], info[3]));
}
//normals
else if(lines[i].substring(0,2).equals("vn")) {
float info[] = float(split(lines[i], " "));
norm.add(new PVector(info[1], info[2], info[3]));
}
else if(lines[i].substring(0,2).equals("f ")) {
String info[] = split(lines[i], " ");
//sometimes obj's have different indices for different properties separated by '//'
int info1[] = int(split(info[1],"//"));
int info2[] = int(split(info[2],"//"));
int info3[] = int(split(info[3],"//"));
//obj has 1 based indexing and we need 0 based indexing
faces.add(new int[] {info1[0]-1, info2[0]-1, info3[0]-1});
}
}
}
}

//write the data in binary
void writeSrf(String name) {
try {
DataOutputStream dos = new DataOutputStream(new FileOutputStream(name));
dos.writeInt(srf.size());
dos.writeInt(faces.size());
PVector v;
for(int i=0;i<srf.size();++i) {
v = srf.get(i);
writeFloat(dos,v.x);
writeFloat(dos,v.y);
writeFloat(dos,v.z);
v = norm.get(i);
writeFloat(dos,v.x);
writeFloat(dos,v.y);
writeFloat(dos,v.z);
}
int[] f;
for(int i=0;i<faces.size();++i) {
f = faces.get(i);
writeShort(dos,f[0]);
writeShort(dos,f[1]);
writeShort(dos,f[2]);
}
dos.close();
} catch(Exception e) {
e.printStackTrace(); //don't silently swallow write errors
}
}

//write a float value little endian
void writeFloat(DataOutputStream dos, float f) throws IOException {
int i = Float.floatToIntBits(f);
dos.writeByte(i & 0xFF);
dos.writeByte((i >> 8) & 0xFF);
dos.writeByte((i >> 16) & 0xFF);
dos.writeByte((i >> 24) & 0xFF);
}

//write a 16-bit integer little endian, integers larger than 65535 will get truncated
void writeShort(DataOutputStream dos, int i) throws IOException {
dos.writeByte(i & 0xFF);
dos.writeByte((i >> 8) & 0xFF);
}


//load mesh object
/*
bytes 0-3 = number of vertices
bytes 4-7 = number of faces
vertex = 6x4 bytes position followed by normal
faces = 3x2 bytes as unsigned 16 bit indices
*/

var xhr = new XMLHttpRequest();
// handle the response once it arrives
xhr.onload = function() {
if (xhr.status == 200 && xhr.response) {
//success: the ArrayBuffer is in xhr.response
} else {
//request failed
}
};
// Open the request for the provided url
xhr.open("GET", url, true);
// Set the responseType to 'arraybuffer' for ArrayBuffer response
xhr.responseType = "arraybuffer";
xhr.send();
}

//get number of vertices and faces from the 8-byte header
var header = new Int32Array(buffer, 0, 2);
var numVertices = header[0];
var numFaces = header[1];
vbo.numVertices = numVertices;
vbo.numFaces = numFaces;
//put that data in some arrays
vbo.vertexData = new Float32Array(buffer,8,numVertices*6);
vbo.indexData = new Uint16Array(buffer, numVertices*24+8, numFaces*3);
//push that data to the GPU
vbo.vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vbo.vertexData, gl.STATIC_DRAW);

vbo.indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vbo.indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, vbo.indexData, gl.STATIC_DRAW);
}


And to draw the data

function draw() {
//... some gl drawing stuff up here

gl.bindBuffer(gl.ARRAY_BUFFER, vbo.vertexBuffer);