For a while now, we’ve been wanting to switch our online apps from Java applets to HTML5. Applets are simply an outdated technology with a much clunkier, less elegant user experience. In the last two weeks, we’ve finally had the time to dig in and start porting our apps. Not only is HTML5 a superior technology for developing apps, but we’ve also learned a lot since we first released customization apps in 2007. We decided to tackle the most complex one first, the Cell Cycle app. In this post, I’ll go over some of the tech we used and some of the hurdles we encountered.
One of the unfortunate aspects of Processing.js (PJS) is that you cannot directly leverage the great community that has grown up around Processing and the many powerful libraries it has developed to extend Processing. There is no easy way to use Processing libraries in PJS, and in many cases it is non-trivial to port them. So the first thing we had to do was remove all library dependencies from our code, which included controlP5 and PeasyCam, and replace them with our own code. This wasn’t so bad, since the app needed a UI overhaul anyway.
the old Java version
The original Java version of the Cell Cycle app had a number of problems: the interface was cluttered; it was buggy; you couldn’t save and share models; you couldn’t subdivide cells on the inside of the bracelet; and so on. We took this rewrite as an opportunity to fix bugs and add a bunch of features. Originally, there were three different “view modes”: 3D, smooth, and 2D. We combined all of these into one. The model is always smooth (more on this next), and with the extra screen real estate of the browser, we can show the 2D and 3D views simultaneously.
The new version autosizes! Previously there was a “radius” parameter, which corresponds to a value used in mesh generation but means nothing to the user; it matches neither the inner nor the outer radius of the final piece. In the new version, the user specifies the interior diameter of the final piece, and the code iteratively resizes the model until the interior dimension is approximately correct. This way a user can specify a size, and no matter what they do to the model afterwards, it stays that size.
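The resizing loop is conceptually simple. Here is a Python sketch of the idea; the mesh-measuring function is a made-up stand-in for measuring the actual generated mesh:

```python
def measured_interior(scale):
    # Hypothetical stand-in: in the real app this would generate the mesh at
    # the given scale parameter and measure its interior diameter. Here we
    # fake a nonlinear response (wall thickness eats into the opening).
    return 1.8 * scale - 2.0

def autosize(target, scale=60.0, tol=0.01, max_iters=50):
    """Rescale until the measured interior diameter matches the target."""
    for _ in range(max_iters):
        d = measured_interior(scale)
        if abs(d - target) <= tol * target:
            break
        # Nudge the generation parameter by the ratio of target to measured.
        scale *= target / d
    return scale

scale = autosize(58.0)  # e.g. a 58 mm interior diameter
assert abs(measured_interior(scale) - 58.0) <= 0.58
```

Because the loop only relies on re-measuring the result, it keeps working no matter how the user deforms the model afterwards.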
Permalinking! It is practically necessary to have permalinks to user-generated content on the web these days. If you can’t tweet something or post it to Facebook, it might as well not exist. Not only can you link to a model now, but you (or anyone) can continue editing it. Instead of saving the mesh that gets 3D printed to our server, we save an abstract representation of the current model state; the actual 3D mesh is reconstructed upon loading. This allows the geometry to be intelligently reloaded for editing and also makes the models much lighter, saving space, bandwidth, and load time.
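Conceptually, the saved state is a small structured blob rather than a triangle soup. A hypothetical sketch in Python (the field names here are invented for illustration, not our actual format):

```python
import json

# Persist a compact, abstract model state rather than the heavy mesh.
# The mesh is regenerated from this state when the permalink is loaded.
state = {
    "version": 1,
    "interior_diameter": 58.0,
    "cells": [
        {"id": 0, "pos": [0.0, 0.0], "subdivided": False},
        {"id": 1, "pos": [0.5, 0.2], "subdivided": True},
    ],
}

permalink_blob = json.dumps(state, separators=(",", ":"))  # tiny payload
restored = json.loads(permalink_blob)
assert restored == state  # enough information to rebuild the 3D mesh
```

A few hundred bytes of state versus megabytes of mesh is where the space, bandwidth, and load-time savings come from.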
The biggest chunk of development went into optimization. Things have to run fast in the browser. The majority of processing power goes towards generating the mesh and passing that data to the GPU with WebGL. Displaying large, static models with OpenGL is easy: you load a model into a VBO once, and that’s it. You can display millions of triangles at high frame rates. All the work is done in a preprocess, so you don’t have to worry about optimization. However, when you have a large, dynamic mesh that is constantly changing, optimization becomes important.
One thing we decided was that the model couldn’t be completely smooth all the time. The meshes are smoothed with Catmull-Clark subdivision, and the models we send to the printer have two subdivision steps performed on them. That is simply too much computation to do in real time. Instead, while the model is moving we perform only one subdivision. When the model settles and stops, which happens fairly rapidly, we perform a second subdivision and store the result in a VBO. Until the model moves again, we do not update the VBO.
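The control flow amounts to a small level-of-detail cache. Here is a Python sketch of that logic; the subdivision and the VBO are stand-ins (Catmull-Clark roughly quadruples the face count per step):

```python
class MeshView:
    def __init__(self):
        self.cached_fine = None  # stands in for the stored VBO

    def subdivide(self, mesh, steps):
        # Stand-in for Catmull-Clark: each step quadruples the face count.
        return mesh * (4 ** steps)

    def frame(self, base_mesh, moving):
        if moving:
            self.cached_fine = None               # invalidate the cache
            return self.subdivide(base_mesh, 1)   # cheap preview
        if self.cached_fine is None:
            self.cached_fine = self.subdivide(base_mesh, 2)  # full quality
        return self.cached_fine

view = MeshView()
assert view.frame(100, moving=True) == 400    # one step while dragging
assert view.frame(100, moving=False) == 1600  # two steps once settled
assert view.cached_fine == 1600               # reused until the next move
```

The expensive second subdivision happens at most once per pause, and the cached result is drawn for free on every subsequent frame.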
Earlier today Marius Watz posted some thoughts on “computational cliches,” a.k.a. overused, well-known algorithms that make pretty things without much work. I largely agree with the sentiment and general attitude, but I’m not 100% in line with the takeaway. Watz and I exist on pretty much opposite ends of the computational art/design spectrum. I am a bit of an algorithm purist: I like to deeply understand algorithms, mess with the core ideas embedded in them, and see the pure output. While some might question the authorship of incremental algorithmic innovation, I’m skeptical of taking something simple and making it look sexy with lots of lines and colors.
My version of Watz’s complaint is that people are now able (through Grasshopper or Processing libraries) to use algorithms they don’t understand. When this happens, the algorithm controls you rather than serving as a building block in your own creative process. Part of the draw of computational design is that you aren’t limited by the tools others make available; anything is possible. However, this comes with the caveat that for anything to be possible, you need to put in the effort to understand algorithms well enough to use them innovatively.
A lot of these algorithms that have become cliche are extremely deep and powerful ideas. In his post, Watz says:
Yes, heavy use of standard algorithms is bad for you. That is, it is if you wish to consider yourself a computational creative capable of coming up with interesting work. If you’re a computer scientist or an engineer standard algorithms are your bread and butter, and you should go right ahead and use them.
I would say a computational creative should approach standard algorithms in the same manner as computer scientists and engineers do. For instance, Voronoi/Delaunay triangulation is probably the most powerful tool in computational geometry; it is at the heart of a ton of really important algorithms. To me, its massive utility in science and engineering suggests there are also infinite creative ways to extend and use it. That doesn’t mean you should generate a bunch of random points, Voronoi that shit, and call it your design. It means you should study these geometric objects, understand their properties, and use them to accomplish things you wouldn’t otherwise be able to.
Recently, my friend Shaunalynn Duffy asked me to give a talk at a Sprout event centered on fungi. Specifically, she was interested in the photographs of lichen I’ve taken over the past couple of years. I decided to speak a little about why I find lichen so fascinating, and below you’ll find some of my thoughts on the matter. At the end, I include a brief aside on how our architecture should start being lichenized.
Why I like Lichen
Cladonia rangiferina, photographed in the Adirondacks, NY 07/11/2011
Lichen are strange conglomerations of two or more species from completely different biological domains of life. A lichen usually consists of a fungus and one or more photosynthesizing partners: most often a single type of algae, but sometimes multiple types of algae or even cyanobacteria (photosynthesizing bacteria). Most fungi are decomposers; they feed on the detritus of other living things, like leaves, soil, and the dead. They live their lives hidden in the soil, observed only when their reproductive organs emerge for brief, gaudy fits of spore dispersal. Fungi live inside their food. Their body is a huge absorptive mass of long, linear, branching cells called hyphae, which provide an enormous surface area for absorbing nutrients from the soil. They typically do not build complex structures, tissues, or organs except to reproduce; then they may build odd fruiting bodies like mushrooms to spread their spores. The shapes of these structures are adapted to their environment and its inhabitants, on whom they depend to carry their offspring to new sites.
Ramalina menziesii, photographed at Drakes Estero in Point Reyes, CA 09/04/2010
But lichen are different. When a fungus partners with a photosynthesizer, it undergoes a dramatic transformation in physiology, chemistry, and lifestyle. While a fungus is one large, simple mass of filamentous hyphae, lichen develop a myriad of complex structures to perform a variety of functions. While fungi live as decomposers, the last step in a richly developed ecosystem, lichen are colonizers: some of the first organisms to enter desolate and dangerous environments such as toxic slag heaps. They have developed unique chemical pathways that can break down rock or oil.
Rhizocarpon geographicum, photographed at Glacier National Park, MT 07/15/2009
Some people describe a lichen as a fungus that has taken up farming, growing sugars in little algae patches throughout its body, in contrast to the decomposing habits of normal fungi. If fungi can be described as living inside their food, then lichen are essentially fungi turned inside out: their food lives inside of them.
But I find it more intriguing to think of lichen as a fungus trying to be a plant. Like a plant, it has photosynthesizing parts which produce food (the algae or cyanobacteria) and structural components (the fungus) which protect and arrange the photosynthetic elements. As photosynthesizing organisms, lichen are under many of the same constraints as plants. They have to effectively collect sunlight and water. They have to be rooted to something; they have to resist gravity. Correspondingly, lichen have independently evolved body plans very similar to those of plants, despite having an entirely different chemical and biological makeup.
crusty: photographed at Yellowstone National Park, WY 07/20/2009
leafy: photographed at Woodstock Land Conservancy, NY 12/24/2006
branchy: photographed at Yosemite National Park, CA 01/31/2008
Lichens tend towards three general body plans: crusty, leafy, and branchy, or crustose, foliose, and fruticose as they are usually called. But a single lichen specimen may exhibit several of these morphologies, as well as other, less commonly seen ones. There are scaly lichens, powdery lichens, and even gelatinous ones! In all of these body plans, the fungus must build transparent greenhouses of fungal tissue that protect the algae from UV light while displaying them in a way that lets light be collected. It has to keep the algae from dehydrating while allowing carbon dioxide to diffuse in during photosynthesis. These tasks are far more varied than the normal role of fungal hyphae, which simply spread out through the soil, exploring and absorbing food, and that variety leads to much more morphological differentiation.
Cladonia cristatella at the Woodstock Land Conservancy, NY
What about reproduction, though? How can a lichen, which is actually a symbiosis of multiple types of creatures, produce more of itself? Reproduction is complicated for lichen. Only the fungus can reproduce sexually; sexual reproduction in the algae is suppressed. The fungus produces spore-dispersing structures. Among the most common are apothecia, the cup- and disc-like growths you see in many of my photos. They vary in size from under a millimeter to over 2 cm. Sometimes they are the same color as the rest of the lichen; other times they are dramatically colored. Sometimes they are spread throughout the body of the lichen; other times they protrude outwards on stalks up to 1 cm long.
left: lichen at Indian Lake in the Adirondacks, NY 7/21/2010
right: lichen at Fjordland National Park, New Zealand 08/28/2008
So the fungus produces these structures for the dispersal of spores, but without an algal partner the spores can’t produce a new lichen. If a spore lands somewhere there happens to be free-living algae of the right species, it might be able to lichenize and survive, but most algae that form lichen can’t survive outside the lichen thallus (body). To get around this constraint, lichen have developed several types of propagating units they can disperse that contain both fungus and algae. Many can also reproduce simply through dispersal of the thallus itself: if a bit rips off and lands somewhere else, it may establish a new lichen.
lichen photographed at Fjordland National Park, New Zealand 08/29/2008
towards a new architecture
Fungi have incorporated algae and other photosynthesizers into the structures they build to provide themselves with a dependable sun-powered energy source. They’ve become lichenized. We should too.
What would happen if our buildings became “lichenized”? As our knowledge of biotechnology and our environmental concerns continue to grow, we should take inspiration from the adventurous fungi and consider how we can better partner with photosynthesizers. How can algae or cyanobacteria be used in architecture to provide energy for our building systems? The self-similar body plans of lichen and plants already suggest many solutions to the problem of compactly arraying photosynthetic cells toward the sun. Research into biophotovoltaics, or “microbial solar cells,” suggests we may be able to harvest electricity from photosynthesizing cells directly, in addition to producing a range of useful chemicals and fuels.
Photosynthesizing surfaces offer several benefits over photovoltaic cells. A biofilm of algae is a self-organizing, continuously growing system; it can self-repair, which means less maintenance and better performance over a longer period of time. Photosynthetic systems have intermediate energy carriers, which means energy can still be generated in the dark. And photosynthetic systems have a vibrant aesthetic appeal: by adding them to our buildings and cities, we’d literally be “greening” them.
So what happens when our built environment becomes lichenized? As with the fungus, whose structure and lifestyle change so dramatically, how might our buildings, urban planning, and society change when we are freed from the grid of infrastructure that currently supports, but also limits, us?
Interested in this? Here are some articles I found interesting. Please send more my way if this is your area of interest because I would love to learn more.
We have been working on the system we used to create the Xylem line, in preparation for making a new collection of 3D-printed pieces including jewelry, lighting, and furniture. We are extending the functionality and improving the performance of the simulation to take the designs from 2D cut pieces to fully 3D ones. For a bit more explanation of some of the things I refer to in this post, see the original paper that this system is based on.
First and foremost is making the system work in 3D. The original system uses a Delaunay triangulation to determine source neighborhoods, which allows the creation of closed cells. Closed cells are not only desirable aesthetically, but necessary for structural stability when we are making 3D prints; a bunch of unconnected branches would be too weak for functional plastic pieces. Computing Delaunay triangulations is much harder in 3D, so we switched over to C++ in order to use CGAL, an open source library of computational geometry algorithms.

The other major challenge is producing the 3D surface itself from the vein data. For our previous 3D-printed line, we created a simple mesh skeleton and smoothed it using subdivision surfaces. That was possible because we had a very orderly surface. The venation patterns, however, are much more chaotic, and it is harder to determine how things are connected. Instead, we used an implicit surface technique. This means defining a function everywhere in space that essentially returns 1 inside the surface and -1 outside it. A common method for extracting a surface from such a function is marching cubes; however, CGAL implements a much more sophisticated algorithm that produces higher quality meshes. We also use CGAL’s Axis-Aligned Bounding Box Trees to do quick intersections and projections onto a boundary surface that we import to constrain the growth. CGAL kills many birds with one stone.
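To make the implicit-surface idea concrete, here is a toy sketch in Python (not our actual C++/CGAL code): a blobby field summed from points on a hypothetical vein skeleton, returning 1 inside the surface and -1 outside, the kind of function a surface mesher samples.

```python
import math

# Toy implicit surface: a "blobby" field summed from vein sample points,
# positive inside the surface and negative outside. A surface mesher
# (marching cubes, or CGAL's mesher) extracts the zero-crossing.

veins = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]  # skeleton samples
RADIUS = 0.4

def field(p):
    # Sum of Gaussian blobs centered on the vein points.
    total = sum(math.exp(-((p[0]-v[0])**2 + (p[1]-v[1])**2 + (p[2]-v[2])**2)
                         / RADIUS**2) for v in veins)
    return 1.0 if total > 0.5 else -1.0   # +1 inside, -1 outside

assert field((1.0, 0.0, 0.0)) == 1.0   # on the skeleton: inside
assert field((1.0, 3.0, 0.0)) == -1.0  # far away: outside
```

Because the field is just a sum over skeleton points, it gives a smooth tube around the veins without ever needing to know how they are connected.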
These are the basic elements for a 3D growth. By projecting onto the surface as the system grows, we can also create growth on a surface as seen in this test cuff we made.
That is just the beginning of the changes we’ve been working on. We were also dissatisfied with the character of the growth. Generally, the algorithm does not produce one of the main things we would expect from venation: a primary central vein with secondary veins perpendicular to it. Everything tends to grow out at the same time, causing veins to be more parallel than we would like. This contrasts with the actual theory of how veins form, based on a positive feedback between flow and the capacity to carry flow: the primary vein starts to form first, then secondary and tertiary veins. To inject some of that hierarchical logic into our system, we changed the growth rules. Instead of veins growing in the average direction of all the sources flowing to them, veins grow probabilistically: the more sources flowing to a vein, the more likely that vein is to grow. This anticipates which veins will become primary or secondary and grows them first.
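In Python pseudocode, the new rule might look something like this (the vein names, counts, and growth rate are invented for illustration):

```python
import random

# Sketch of the probabilistic growth rule: each vein tip grows with
# probability proportional to how many sources flow to it, so heavily fed
# (would-be primary) veins extend earlier than lightly fed ones.

def grow_step(source_counts, rng, rate=0.2):
    """Return which vein tips grow this step.

    source_counts: {vein_id: number of sources flowing to that vein}
    rate: growth probability per contributing source (capped at 1).
    """
    return [v for v, n in source_counts.items()
            if rng.random() < min(1.0, rate * n)]

rng = random.Random(42)
counts = {"primary": 12, "secondary": 3, "tertiary": 1}
steps = [grow_step(counts, rng) for _ in range(200)]
freq = {v: sum(v in s for s in steps) for v in counts}
assert freq["primary"] > freq["secondary"] > freq["tertiary"]
```

Over many steps this biases the simulation toward the hierarchical, primary-then-secondary growth that real venation exhibits.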
One problem with probabilistic growth is that it is inherently much slower. Each step of the simulation takes just as long, but instead of all the veins growing, only a small percentage grow. This led to an idea for making the simulation much faster. The way we determine which veins a source flows to is to temporarily insert that source into a Delaunay triangulation of all the veins and look at its neighbors within that triangulation. This is a computationally costly step. The thing to note is that only the new veins added in each step of the simulation can possibly join or change a source’s neighborhood. The veins and sources never move, so adding a new vein can never cause an old vein that was not previously a neighbor to become one. So instead of keeping a global triangulation that gets bigger and bigger with every step of the simulation, we only keep track of the local neighborhood of each source. Since the size of each source’s neighborhood stays small, the number of new veins added in each step stays small, and the number of sources is constantly decreasing, the simulation actually gets faster as it progresses (in practice it slows down sharply until the number of veins added normalizes, then gradually speeds up).

The key element is keeping track of each source’s neighborhood. The relative neighborhood of veins the source actually flows to is insufficient; that information cannot be used to figure out which new veins are neighbors. Instead, we let each vein define a half-plane that points away from the source. Any vein within this half-plane cannot be the source’s neighbor; it is blocked by the vein that defines the half-plane. We need to keep track of the minimal set of such veins. Essentially, we are defining a convex shell around each source, outside of which nothing can be its neighbor.
Maintaining that data structure is a problem similar to linear programming, but because the problem is low-dimensional and we add each point incrementally, a fairly brute-force approach is efficient. Long story short, this turns out to be much more efficient, allowing us to create probabilistic growths faster than we used to create deterministic ones.
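For the curious, here is one plausible 2D formalization of the half-plane test in Python, using the perpendicular bisector between the source and a known vein; the real implementation, and the bookkeeping of the minimal blocking set, are more involved.

```python
# Half-plane culling sketch: a vein v, seen from source s, blocks every
# point lying beyond the s-v perpendicular bisector (i.e. closer to v than
# to s). Such a point can never enter s's neighborhood, so s need not
# track it. This is one way to formalize the "half-plane pointing away
# from the source" described above.

def blocked(s, v, w):
    """True if candidate vein w lies in the half-plane blocked by vein v."""
    d2_sw = (w[0] - s[0])**2 + (w[1] - s[1])**2  # squared dist source->w
    d2_vw = (w[0] - v[0])**2 + (w[1] - v[1])**2  # squared dist v->w
    return d2_vw < d2_sw  # w is beyond the s-v bisector

s = (0.0, 0.0)      # a source
v = (1.0, 0.0)      # a vein already in s's shell
assert blocked(s, v, (2.0, 0.0))       # directly behind v: culled
assert not blocked(s, v, (0.0, 1.0))   # off to the side: still a candidate
```

Keeping only the veins whose half-planes are not themselves blocked yields the convex shell around each source.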
Delaunay triangulations in 3D are even more computationally difficult than in 2D; using CGAL, it takes us several hours to make one growth, so applying this new technique in 3D is even more valuable. It is more difficult to keep track of the convex shell around each source in 3D, but it is still much faster. We have not finished implementing this yet, but we have made a video of our preliminary results, which look pretty promising. The next step is to insert this into our framework that uses CGAL to create the 3D mesh.
Sorry the video quality is so poor. It does not seem to compress well.
Another aspect I would like to work on is growth on a surface. Right now, we run the 3D algorithm with the sources confined to a surface and project each vein onto the surface. This is fine for simple surfaces, but it causes problems on surfaces where distance along the surface differs greatly from distance through space. I would like to compute neighbors using geodesic distances instead of distances in 3-space. This presents a lot of challenges, which we are still working on.
This entire exposition would be a lot clearer with a few nice diagrams, but that is a little more in depth than I can muster right now. Stay tuned for more results and new designs with this system.
a brief excerpt from the introduction Jesse wrote for our Easton Pribble Visiting Artist lecture at Pratt MWP last week.
“This talk is going to be partly an exhibit of our work, partly a science lesson, and partly a discussion of where we think computational design is going. But what do we mean by computational design? In short, it means creating computer programs as part of the design process. This goes beyond using computer programs as a tool. It is computation as a medium. It isn’t just automating something you could do by hand, like drawing a thousand lines, but doing things that really only make sense by writing software. It is new, and we’re still trying to understand it, but computation is a medium for making things. Programming is a very explicit process. Nothing happens without you telling the computer to do exactly that thing. In some ways, it is the most verbose and articulate creative process there is. Sometimes people ask us, ‘Why do you work this way? Why do you use computation?’ But for us it is not really a choice. What we do is integrally linked to our interest in computation and biology, and if we weren’t making things this way, we probably wouldn’t be designers.”
Our Reaction show opens in San Francisco in a few days. Over the next month, we will be doing a number of posts on the reaction-diffusion system and its scientific and mathematical basis. Today’s post was originally going to be titled “top 5 best tropical fish”… but who can stop at five? You can find these pictures and more in a gallery I curated on flickr here.
Intricate and colorful, the 2D skin patterns of fish are one of the only examples where we can observe Turing waves in vivo. The skin patterns of some fish change throughout their growth, sometimes even into adulthood, allowing the dynamic nature of reaction-diffusion to be observed over time. Scientific studies of the emperor angelfish and the zebrafish have given strong evidence that reaction-diffusion (or some mathematically analogous process) accounts for the dramatic shifts in pattern that occur over the fish’s lifespan. Here are some striking examples of reaction-diffusion patterns in situ.
The juvenile emperor angelfish (left, photo by Doug Anderson) displays a particularly intriguing radiating stripe pattern. This pattern eventually converts to the one you see in the next photo. As the fish grows, the pattern “unzips” along the Y branch points that form to maintain an even distance between stripes. Eventually, this results in an adult fish where the stripes are evenly distributed with no branch points.
The puffer fish below are closely related species, yet they display very different patterns! Since they are closely related, it is likely their patterns share a molecular basis. The responsible chemical mechanism must be able to account for the dots, stripes, and polygons exhibited. Reaction-diffusion systems have just this property, producing dots, stripes, polygons, and combinations thereof under different parameters.
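For readers who want to experiment, here is a minimal 1D Gray-Scott reaction-diffusion integrator in Python. This is a standard Turing-type model, not one taken from the fish studies themselves; in 2D, sweeping the feed and kill parameters F and k moves the system between dots, stripes, and mixtures of the two, which is exactly the parameter-dependence described above.

```python
import numpy as np

# Minimal 1D Gray-Scott reaction-diffusion on a ring of cells.
# u is the substrate, v the activator; uv^2 converts u into v, F feeds
# fresh u, and (F + k) removes v. Patterns emerge because v diffuses
# more slowly than u.

def gray_scott(n=256, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.06):
    u = np.ones(n)
    v = np.zeros(n)
    u[n//2 - 5 : n//2 + 5] = 0.50   # seed a small perturbed patch
    v[n//2 - 5 : n//2 + 5] = 0.25
    for _ in range(steps):
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2*u   # ring topology
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2*v
        uvv = u * v * v
        u = u + Du*lap_u - uvv + F*(1 - u)
        v = v + Dv*lap_v + uvv - (F + k)*v
    return u, v

u, v = gray_scott()
assert u.shape == v.shape == (256,)
assert u.max() <= 1.0 and v.min() >= 0.0   # concentrations stay physical
```

Plotting v over time (or running the same update on a 2D grid) is the quickest way to see the stripe-splitting and spot-making behavior the fish photos illustrate.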
Boundary conditions like the eye of the fish tend to determine stripe directionality. For the Acanthurus lineatus (below left) and the young Arothron mappa (below right) this results in the pattern orienting perpendicular to the boundary. In other fish like this blowfish, the pattern may orient parallel to the eye boundary instead.
Reaction diffusion can also account for more complicated patterns like these. On the left is a Sailfin Tang, whose dense dot-and-stripe pattern overlays a larger, macro-scale pattern of stripes. On the right is a Napoleon Wrasse, whose swirling pattern shrinks markedly in scale as it moves away from the eye.
These photos come from a diverse group of photographers on flickr; click each image to visit their photostreams. Interested in reading more about reaction-diffusion experiments involving fish? I’ll be posting a review of some interesting experiments soon. I also recommend the website of the Kondo lab, which has many of their papers available as PDFs.
I was watching a television interview with Chris Anderson in which he talked about the economics of free. Essentially, the economics of free is an extension of the idea of the service economy, a central part of postmodern ideas about production: instead of selling products, our economy now centers on selling services. Take bookstores. A bookstore used to be a place you could go to buy books; it was about selling books as a product. Now, with online retailers like Amazon (and soon freely available scanned books), there is no way a bookstore can compete on selling books alone. So bookstores have to offer something else: an environment. Cafes have become a standard fixture in major bookstores like Barnes & Noble. They create a place you want to hang out, a place you can pick up a book and a cup of coffee and sit down for a while. You go to a bookstore for how good the browsing experience is.
The way this comes back to the free economy is that instead of selling a product, you give it away to entice people to use a service. This is the model for many open source software companies: the software itself is free, but you charge for technical support, custom extensions, etc. People use the software because it is free, and they buy your service because they have the software. However, for many online businesses (news, software, etc.), the product is free, and the only service they sell is advertising. Google has made its entire fortune this way.
Now of course, I think free software is a great thing and Google is a great thing, but there is an inherent contradiction occurring. Free online services both depend on advertising for revenue and render it obsolete. Services like craigslist, yelp, and blogs advertise products better and more efficiently than traditional advertising could ever dream of. Organized consumer reviews are a far better advertising system than blind (or sometimes not-so-blind) ads. I can go on yelp and ask for the best Chinese restaurant near my current location, find out whether something that looks good is actually a disappointment, and even get recommendations on what dishes to order. No billboard could do a better job. Blogs spread news and info on new products far more efficiently than buying ad space in a magazine. Because people now have the ability to express and communicate their opinions so effectively, advertisements are becoming obsolete. It seems like the only people they are good for are large companies with inferior products. As more consumers become tech-savvy, and as we come up with better ways of organizing and sharing information, this will only become more true.
But what happens to the free services we have come to expect? We will not suddenly start paying for the news again. There are a few options. One is that my analysis is wrong: advertising remains valuable in some scenarios and can continue to be a source of revenue, especially as advertisements become more targeted and effective. Another is that businesses will have to find other sources of revenue. Just as open source projects have support to sell, other businesses will have to find services they can provide. Advertising essentially puts a price tag on attention, but it is certainly not the only way to derive value from attention; businesses may find other creative ways to monetize it. Finally, traditional businesses that depend on advertising could break down. In the news, this is already starting to happen. While investigative journalism will always be an important job, the majority of the news is up-to-the-minute breaking news, and professional journalists are not required to report that type of information. Blogs are starting to take the place of newspapers and television in this area. For breaking news you do not need a trained writer or investigator; what you need most is someone familiar with the situation. Why should we pay to send someone to Serbia to report on something they just heard about, when we can just as easily hear from the people who live there? Bloggers often benefit from attention directly, independent of any advertising, because what they write about directly affects them (or it’s simply something they want to do anyway).
I am certain that the value of advertising will depreciate, but how exactly this plays out, and how it affects the free economy, will be interesting to watch.
Thursday night, Jesse and I went to see the Murakami exhibit at the MOCA Geffen. At the time, something about the exhibit rubbed me the wrong way. The work was pretty much what I expected; nothing wowed me, and overall my emotions were a cross between bored and superficially titillated. But I have been thinking about the exhibit for the two days since, and its impact on me has grown. The true measure of an exhibition is ultimately that it makes you think.
MOCA’s Murakami is an exhibition of work spanning the artist’s career, showcasing large-scale sculptures, paintings, animated shorts, and commercial products. The mix of work creates a complete spectrum from pop art to mass-produced consumer products. Pop art has always been especially commercial, taking themes, styles, and inspiration from “low” culture and translating them into sellable multiple-edition artworks (think Warhol’s images of soup cans and Hollywood icons). Murakami takes things one step further by actually mass-producing his characters as vending machine toys, stuffed animals, stationery, and more. Where Warhol takes existing cultural icons and shows them in a new light, Murakami plays the same game as those he is supposedly criticizing. While his early character DOB appears to be a Mickey Mouse clone (or perhaps bizarro doppelganger), his newer works create completely original worlds of unique characters who are then used and reused, merchandised and licensed for advertising, much like Disney or other commercial mascots.
But what do these mascots represent? Murakami’s worlds and their inhabitants are so cute that it’s sickening, and sometimes so sickening that it’s cute. They are supersaturated with both color and detail. The works are high contrast and Technicolor. Their detailing has a fractal quality: each character or object presents a bewildering array of attachments, each with its own sets of eyes and even smaller sets of attachments, proceeding on and on to infinity. In this way they each form their own complete universe. In some cases the rooms of the gallery were set up to greatly intensify this effect: one room featured all-over wallpaper of his daisy characters, a huge circular painting of the same pattern at a larger scale, and a spherical sculpture that was the 3D realization of these characters. Yet even in three dimensions these characters were still two-dimensional, just a relief on a spherical ball. The complete and totalizing nature of these universes is only reinforced by Murakami’s consistent stylistic perfectionism. Every line is crisp and every surface smooth; there’s no shading or blending.
Hmm, I seem to have lost my train of thought. But I would also like to note that there is a lot of religious symbolism mixed in with the psychedelic consumerism.
The past decade has seen the rise of the network paradigm in the understanding of many facets of our society, from online communities to scientific analysis. The idea of networks serves both as a model with which to view the world and as an organizational principle. I was first introduced to the subject by Waldrop’s Complexity, which covers some of the early scientific work defining the field and focuses on networks and dynamic systems. The scientific community was the first to realize the importance of networks, as they proved a constructive way to view many phenomena. Communication networks, biological pathways, the spread of disease, and economics are just a few examples of fields whose study has greatly benefited from being viewed as networks. Studying these systems, scientists introduced the principle of network organization, which is based not on a hierarchy of control but on mutual connections between entities. Thus it is described as “horizontal” and “self-organizing”.
The business world followed science’s lead, adopting networks and complexity as new business principles. Terms like “self-organizing” and “horizontal structure” quickly became buzzwords. Some businesses are considering replacing rigid chains of command with models that allow independence and natural synergy in order to encourage innovation. In such systems, individuals have more freedom and can better use their creative energy as a productive force.
Activists and political thinkers are also starting to use the paradigm of networks. Hardt and Negri’s Empire describes a world dominated not by a single hegemony but by a network of powers, and thus advocates the need for a networked resistance to that power. Old models of political organizing through unions and political parties are fading and being replaced by new forms of organizing advocated by a younger generation. In many cases this takes the shape of a network of individuals, normally independent but able to quickly coalesce into a large force for direct action. One example of this organization is the street medics of the activist community: a group of people who provide medical assistance during large protests. They have no formal structure outside of these large events, but quickly form a very organized group that provides medical help to thousands of people in crisis situations. This group was also one of the first to provide assistance to New Orleans after Katrina, before national agencies were able to respond.
And finally, we see the emergence of artificial social networks such as MySpace and Facebook that have become important parts of our generation’s social lives.
But what does any of this have to do with design and craft? In all of these examples, there is a sense of the importance of networks and a desire to use them productively. People think networks are a powerful thing, whether in business, politics, or science; however, it is difficult to find a conscious effort to create a network structure that has actually succeeded. Unsurprisingly, most businesses remain largely hierarchical. Corporations are generally not based on voluntary work (employees choose their jobs, but the labor is not freely given), so they require a hierarchy of bosses and managers telling those below them what to do. These kinds of coercive relationships are harder to maintain in a network structure. Despite some successes, the activist community has still not become well organized, regardless of the methods used. And as we all know, Facebook and MySpace do not serve any productive purpose whatsoever.
But independent design is emerging as the first area to employ networks to its own advantage. In the last few years, social networking sites that cater to independent designers, like Etsy and Stylehive, have flourished. Design blogs have also taken a prominent role in promoting up-and-coming designers. These trends are a sign that the design market is turning away from a hierarchical structure based in galleries, shows, magazines, and museums. Instead, there is a network of websites and blogs that serves to promote and market designers.
Because consumers can find producers without the aid of an intermediary institution, designers can sell directly and be independent in a way that was formerly impossible. During my childhood, my parents were involved in the craft community; they made most of their sales by traveling to craft shows. But only a select community goes to those kinds of shows; it is very isolated. The ability to sell without an intermediary has allowed the growth of a new type of craft community. Since designs do not have to be approved by some entity with its own taste and image, quirky handicrafts are among the biggest beneficiaries of this new system. They seem to comprise the majority of sellers on Etsy and are often featured on giant blogs like BoingBoing.
Of course, we are still only seeing the beginnings of a change in the design market, and there are many open questions. How will this progress? Will we see the weakening of old institutions like galleries, craft shows, and museums? Why is design one of the first areas to adopt the model with success? Can other areas, such as science, education, and small manufacturing, learn from this experience? And there are many more things we can ask about how this system does and should work. These are questions to think on, and perhaps write on, later.