We’ve just released a new Processing 2.0 library for exporting OBJ and X3D mesh files. And it supports color meshes! Now you can export color models for 3D printing with the same commands you use to draw. Get started here: OBJExport library page.
The library works like any PGraphics renderer, such as the PDF library: you obtain a PGraphics for OBJ export and then use regular Processing drawing commands.
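For context, a record-style sketch using the library looks roughly like this. This is a sketch of typical usage, not copied from the library docs; the renderer string, filename, and drawing calls are illustrative, so check the library page for the exact usage:

```java
// Rough sketch of record-style usage in a Processing sketch; the
// renderer string "nervoussystem.obj.OBJExport" and filename are
// assumptions — see the library page for the exact calls.
import nervoussystem.obj.*;

void setup() {
  size(400, 400, P3D);
}

void draw() {
  beginRecord("nervoussystem.obj.OBJExport", "mesh.obj");
  beginShape(TRIANGLES);   // regular Processing drawing commands
  vertex(0, 0, 0);
  vertex(200, 0, 0);
  vertex(0, 200, 0);
  endShape();
  endRecord();             // writes mesh.obj
  exit();
}
```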
I’ve also started to use GitHub to manage the code for this and other projects. You can find the GitHub page for the library here. There you can peruse the source, fork the project, or report bugs.
This library is a re-release of an old library I made years ago. I had sort of forgotten about it until I got an email last week from Michael Zoellner of the Interactive Design program at Hof University. Apparently, people are still using it! He wanted to update the library to be compatible with the new Processing release.
At the same time, we’ve started to do some work with color 3D printing. However, to get printable models we had to go through some tedious processing in MeshLab. So I took this opportunity to overhaul the library, adding new features like color exporting and fixing some old bugs.
We’re excited to see what the community does with the library. We’re using it to make color 3D prints; what are you going to do? Try it out and send any feedback to email@example.com
For a while now, we’ve been wanting to switch our online apps from Java applets to HTML5. Applets are simply an outdated technology with a clunkier, less elegant user experience. Finally, in the last two weeks, we’ve had the time to dig in and start porting our apps. Not only is HTML5 a superior technology to develop for, but we’ve also learned a lot since we first released customization apps in 2007. We decided to tackle the most complex app first, the Cell Cycle app. In this post, I’ll go over some of the tech we used and some of the hurdles we encountered.
One of the unfortunate aspects of Processing.js (PJS) is that it cannot directly leverage the great community that has grown up around Processing and developed so many powerful libraries to extend it. There is no easy way to use Processing libraries in PJS, and in many cases it is non-trivial to port libraries to PJS. So the first thing we had to do was remove any library dependencies from our code, which included controlP5 and PeasyCam. We replaced these with our own code. This wasn’t so bad, since the app needed a UI overhaul anyway.
the old Java version
The original Java version of the Cell Cycle app had a number of problems. The interface was cluttered; it was buggy; you couldn’t save and share models; you couldn’t subdivide cells on the inside of the bracelet; etc. We took this rewrite as an opportunity to fix bugs and add a bunch of features. Originally, there were three different “view modes”: 3D, smooth, and 2D. We combined all of these into one. The model is always smooth (more on this next), and with the extra screen real estate of the browser, we can show 2D and 3D views simultaneously.
The new version autosizes! Previously there was a “radius”, which corresponds to a parameter in the mesh generation but has no clear meaning for the user: it matches neither the inner nor the outer radius of the final piece. In this version, the user specifies the interior diameter of the final piece, and the code iteratively resizes the piece until the interior dimension is approximately correct. This way a user can specify a size and, no matter what they do to the model afterwards, it remains that size.
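The iterative resizing loop can be sketched like this. The class and the measurement function are illustrative stand-ins, not the app’s actual code; here the “mesh” is modeled as geometry whose interior diameter is some fixed fraction of its nominal scale:

```java
// Sketch of iterative autosizing: rescale the model until its measured
// interior diameter matches the user's target. Names are illustrative.
class Autosize {
    // Stand-in for measuring the interior diameter of a generated mesh;
    // here the mesh's interior diameter is 80% of its nominal scale.
    static double measureInteriorDiameter(double scale) {
        return 0.8 * scale;
    }

    // Returns a scale whose measured interior diameter is ~targetDiameter.
    static double fitToDiameter(double targetDiameter) {
        double scale = targetDiameter;              // initial guess
        for (int i = 0; i < 50; i++) {
            double measured = measureInteriorDiameter(scale);
            if (Math.abs(measured - targetDiameter) < 1e-4) break;
            scale *= targetDiameter / measured;     // proportional correction
        }
        return scale;
    }
}
```

The proportional correction converges in a couple of iterations when the diameter scales roughly linearly with the model, which is why a user-facing dimension can be hit reliably no matter how the model is edited.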
Permalinking! Permalinks to user-generated content are practically a necessity on the web these days. If you can’t tweet something or post it to Facebook, it might as well not exist. Not only can you link to a model now, but you (or anyone) can continue editing it. Instead of saving the mesh that gets 3D printed to our server, we save an abstract representation of the current model state; the actual 3D mesh is reconstructed upon loading. Not only does this allow geometry to be smartly reloaded for editing, it also makes the models much lighter, saving space, bandwidth, and load time.
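The idea of storing an abstract model state rather than the mesh can be sketched as a tiny serializer. The field names and string format here are hypothetical, purely to show the round trip: a few parameters plus edit history go under the permalink, and the heavy mesh is rebuilt deterministically from them on load:

```java
// Sketch: persist a compact description of the model (parameters plus
// which cells were subdivided), not the triangle mesh itself.
// All names and the string format are illustrative, not the app's.
import java.util.*;

class ModelState {
    double interiorDiameter;
    List<Integer> subdividedCells = new ArrayList<>();

    // What would be stored under the permalink — a few bytes, not a mesh.
    String serialize() {
        StringBuilder sb = new StringBuilder("d=" + interiorDiameter);
        for (int c : subdividedCells) sb.append(";s=").append(c);
        return sb.toString();
    }

    // Reconstruct the state; the mesh is then regenerated from it.
    static ModelState deserialize(String s) {
        ModelState m = new ModelState();
        for (String part : s.split(";")) {
            if (part.startsWith("d="))
                m.interiorDiameter = Double.parseDouble(part.substring(2));
            else if (part.startsWith("s="))
                m.subdividedCells.add(Integer.parseInt(part.substring(2)));
        }
        return m;
    }
}
```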
The largest share of our development effort went into optimization. Things have to run fast in the browser. Most of the processing power goes towards generating the mesh and passing that data to the GPU with WebGL. Displaying large, static models with OpenGL is easy: you load a model into a VBO once, and that’s it. You can display millions of triangles at fast frame rates. All the work is done in a preprocess, so you don’t have to worry about optimization. However, when you have a large, dynamic mesh that is constantly changing, optimization becomes important.
One thing we decided was that the model couldn’t be completely smooth all the time. The meshes are smoothed through Catmull-Clark subdivision, and the models we send to the printer have two subdivision steps performed on them. That is simply too much computation to do in real time. Instead, while the model is moving we only perform one subdivision. When the model settles and stops, which happens fairly quickly, we perform a second subdivision and store the result in a VBO. Until the model moves again, we do not update the VBO.
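The level-of-detail logic above amounts to a small state machine, sketched below. The class and method names are illustrative; the actual subdivision and VBO upload are elided into a rebuild step:

```java
// Sketch of the adaptive-smoothness strategy: one subdivision pass while
// the model is being edited, two once it settles, with the result cached
// so the vertex buffer is only rebuilt when the level changes.
class AdaptiveMesh {
    boolean moving = false; // set true while the user is editing/dragging
    int cachedLevel = -1;   // subdivision level currently in the VBO

    // Called once per frame; returns the subdivision level drawn.
    int draw() {
        int level = moving ? 1 : 2;
        if (level != cachedLevel) {
            // ...subdivide `level` times and re-upload the VBO here...
            cachedLevel = level;
        }
        return cachedLevel;
    }
}
```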
Our new reaction collection includes 3D-printed pendant lamps created via Selective Laser Sintering. The Spiral lamp (below) is covered by ridges and valleys that transmit different amounts of light when illuminated; they furnish a striking pattern whether the lamp is on or off. We orchestrated a pattern that twists elegantly towards the base of the lamp, where it terminates in a gentle spiral. Lines diverge and converge along the contours of the sphere, blanketing the surface in deep grooves. We think the pattern recalls the forms of sand dunes and hard corals.
The seed lamps play with reaction-diffusion at different scales to produce an organic effect. A simple sphere grows into a complex sculpted surface by layering reaction patterns at a micro and macro scale. The larger-scale pattern creates the overall topography of the lamp, while the smaller scale modulates the surface thickness to reveal a cellular texture when lit. In seed#1 (first lamp above), the patterns at both scales are cellular; however, the surface is punctured only according to the disposition of the smaller scale. We were inspired by microscopic images of seeds, where both the overall shape of the seed and the cells of which it is composed are visible.
In seed#2, the macro and micro scale patterns each have a distinct character and they interact to create a pattern of perforations limited to the valleys of its landscape.
The lamps were all generated using software we created in the open source programming environment Processing that simulates reaction-diffusion. The video below shows the generation of two seed lamps.
Reaction-diffusion (RD) has become one of the canonical examples of complex behavior emerging from a simple set of rules. RD models a set of substances that are diffusing, or spreading; these substances also react with one another to create new substances. This simple idea has been suggested as a model for a diverse set of biological phenomena. All kinds of animals, from fish to zebras, display interesting color patterns on their skin and shells, and these patterns play important roles in their behavior. However, the underlying cause of these patterns is still not understood. In 1952, Alan Turing suggested the RD system as an answer to not only this question but also the more general one of why cells differentiate. How do individual cells locate themselves in the larger-scale structure and pattern of an organism? The patterns seen on these animals occur at a scale much larger than a cell, yet they display remarkable self-similarity on every part of the animal’s body.
Turing studied the behavior of a complex system in which two substances interact with each other and diffuse at different rates. He proved mathematically that such a system can form stable periodic patterns even from uniform starting conditions. One of the most interesting things about RD is that you can have a homogeneous system where every cell is doing exactly the same action (for instance just producing a certain amount of some chemicals); but from this one process a large scale structure emerges.
The diagrams below show a simple RD model. There are two substances. One, the activator, increases the synthesis of both itself and another substance, the inhibitor. However, the inhibitor locally inhibits the production of activator. This simple interaction is enough to generate the patterns shown below.
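One concrete instance of this activator-inhibitor scheme is the Gray-Scott model (the one our lamp software uses, as described in a later post). A minimal 1D update step looks like the following; the parameter values are common choices from the literature, not the ones used for the lamps:

```java
// Minimal 1D Gray-Scott reaction-diffusion step. u is the "substrate"
// and v the "activator": u feeds v (u*v*v), v decays ((F+k)*v), u is
// replenished (F*(1-u)), and both diffuse at different rates (Du > Dv).
class GrayScott {
    static void step(double[] u, double[] v, double Du, double Dv,
                     double F, double k, double dt) {
        int n = u.length;
        double[] un = u.clone(), vn = v.clone();
        for (int i = 0; i < n; i++) {
            int l = (i + n - 1) % n, r = (i + 1) % n;  // periodic boundary
            double lapU = u[l] - 2 * u[i] + u[r];      // discrete Laplacian
            double lapV = v[l] - 2 * v[i] + v[r];
            double uvv = u[i] * v[i] * v[i];           // reaction term
            un[i] = u[i] + dt * (Du * lapU - uvv + F * (1 - u[i]));
            vn[i] = v[i] + dt * (Dv * lapV + uvv - (F + k) * v[i]);
        }
        System.arraycopy(un, 0, u, 0, n);
        System.arraycopy(vn, 0, v, 0, n);
    }
}
```

Starting from a uniform field of u with a small seed of v, repeated steps let the activator spread and self-organize into spots or stripes depending on F and k, which is exactly the patterning behavior Turing predicted.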
more pieces for our show are arriving! here’s a peek at one of the lamps we designed. we’ll do a real post on the ideas and code behind the creation of the reaction pieces sometime soon….I promise. The short of it is we created the lamp in Processing, and it was 3D printed using Selective Laser Sintering in nylon plastic. We varied the material thickness to create an intricate effect when illuminated.
The form is generated through a simulation of reaction-diffusion, a natural process that is theorized to be involved in everything from animal skin patterns to cell differentiation. For this lamp, we control the reaction through anisotropic diffusion. Anisotropic means that we vary the rate and direction of diffusion through space. This allows us to create a form that is at once controlled and organic.
This video shows a 2D reaction where the primary direction of diffusion is varied by a noise function. The reaction is based on the Gray-Scott model, where the concentration of one of the chemicals is represented in black. The difficult part of this project was developing a controlled way to use reaction-diffusion in 3D. Our aim was to create a pattern that would complement the spherical form and provide intrigue in both the lit and unlit states of the lamp. Our solution involved crafting a spiraling reaction that terminates at the base of the sphere.
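The anisotropic part, direction-dependent diffusion, can be sketched as a 2D diffusion step whose x and y rates are weighted per cell by a direction field. This is an illustrative simplification of the idea, not our lamp code: a fixed angle array stands in for the noise function, and only the diffusion (no reaction) is shown:

```java
// Sketch of anisotropic diffusion on a 2D grid: each cell has a local
// direction, and diffusion along that direction is favored over the
// perpendicular one. Illustrative only; a real version would couple
// this to the Gray-Scott reaction terms.
class AnisotropicDiffusion {
    static void step(double[][] c, double[][] angle, double D, double dt) {
        int w = c.length, h = c[0].length;
        double[][] next = new double[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                int xl = (x + w - 1) % w, xr = (x + 1) % w; // periodic
                int yd = (y + h - 1) % h, yu = (y + 1) % h;
                // weight x- vs y-diffusion by the local direction
                double cs = Math.cos(angle[x][y]);
                double wx = cs * cs, wy = 1.0 - wx;
                double lap = wx * (c[xl][y] - 2 * c[x][y] + c[xr][y])
                           + wy * (c[x][yd] - 2 * c[x][y] + c[x][yu]);
                next[x][y] = c[x][y] + dt * D * lap;
            }
        }
        for (int x = 0; x < w; x++) System.arraycopy(next[x], 0, c[x], 0, h);
    }
}
```

With the angle field driven by noise (or by a designed vector field, like our spiral), the resulting stripes of the reaction align with the flow, which is how the pattern can be steered around the sphere while staying organic.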
This lamp as well as more explorations of reaction-diffusion will be exhibited at Rare Device in San Francisco from September 2 to October 10.
“Written Images: a project in contemporary generative print design and art. Its final products will be a book that presents programmed images by various artists. Each print in process will be calculated individually – which makes every single book unique.”
Last night we made a Processing sketch for Written Images. Since we left our submission until the last minute, we decided to adapt an old project….due to my recent trip to the Aquarium, the barnacle sketch came to mind. Our main hurdle was figuring out how to tile our sketch to create the 4080×2720 px image required for the book layout while staying under the 15-second time limit. Since we had a lot of geometry to draw, using the Processing drawing methods turned out to be far too slow, and we had to change the sketch to draw all triangles using OpenGL commands. This meant we also had to use OpenGL lights and control the camera via GL. I think the sketch came out ok, although I’d definitely like to work on it more when I get a chance.
On each execution of the program, a new random doubly curved NURBS surface is created for the barnacles to grow on. Colors range from yellow to pink based on the generation of the barnacle: yellow barnacles randomly subdivide into pinker and pinker ones. The pores will also be open to different degrees between executions of the program. We also made a straighter version… in case the smoothed version doesn’t run fast enough on their computer.
Jesse is teaching a class this spring with the support of sprout. It essentially covers the topics of interest to us at Nervous System: taking computational models of natural systems and adapting them for design work. It is somewhat technically focused and is geared towards designers with some familiarity with programming or science, or those with a science/programming background who want to learn and be creative. If you are in the Boston area, take it! Each session is made to be stand-alone (or paired with one other week), so if you just want to attend one topic of interest, you can. Each week will center around one simulation technique (plus some additional geometry-generating material), and we will work through and play with one or more example systems.