Procedural Shading for Architecture:
Adoption, Fabrication, and Implications
Dr Matthew Lewis, BA, BSE, MSc, PhD.
ACCAD, The Ohio State University, Columbus, USA.
e-mail: mlewis@accad.osu.edu
Abstract
While the use of generative modeling processes has
become well established in architecture for creating experimental forms and
volumes, there is significantly less creative usage of procedural techniques
for specification and control of localized variable surface qualities such as
color, reflectance, pattern and deformation. The field of computer graphics has
a long history of developing advanced processes for algorithmic specification
of such shading properties in virtual environments for film and video games,
but there has been minimal adoption of these techniques in architecture. This
paper considers approaches for making this shift feasible, motivations for
adopting such techniques, and conceptual implications and opportunities.
Contemporary generative modeling tools permit interactive parametric modeling
via recursive hierarchies, iterative traces, particle and cellular artifacts,
as well as myriad additional techniques.
While some architects are adopting algorithmic approaches to form
generation, there are few comfortable points of entry into procedural shading
that don’t assume a great deal of mathematical and graphics programming
knowledge. This paper presents an effort to contextualize the core concepts of
procedural surfacing into an architectural framework, mapping relevant computer
graphics constructs into a lexicon more aligned with experimental architecture.
Extrapolations from current 3D physical fabrication technologies are considered
in the context of the performative capabilities and evaluation of generatively
produced surfacing properties with intelligent localized behavior. In addition
to modulating attributes such as color, density, or displacement across a
surface in response to curvature, structural proximity, program, etc.,
integration with emerging responsive technologies that permit reactive light, color,
and sound is also discussed. Finally, a number of theoretical research
directions emerging from the above concepts are introduced. The meta-design of
spaces of procedural surfaces/materials is discussed, as well as their
visualization and navigation via interactive evolutionary design. The radical
shift from explicit representations of discrete forms and materials, to the
specification of implicit surface properties in terms of localized differences
(as mandated by both virtual representation and physical fabrication) is
considered in terms of Deleuzian metaphysics. Finally, the issue of pedagogical
strategies for integrating interdisciplinary theory is raised.
The field of
3D computer animation over the past two decades has produced myriad techniques
for generating complex interactions between geometric form and the visual and
functional properties of surfaces and volumes [2]. Software and techniques for
specifying color, reflectance, opacity, density, pattern, etc. (“shading”) are
routinely used in feature films and contemporary video games. Effects and games
designers make creatures, machines, vehicles, and buildings arbitrarily
transform with the adjustment of a few parameters, making them more menacing,
futuristic, or ethereal. While the
tools used by architects exploring the frontiers of form design continue to
advance (particularly in interfaces for parametric design) there has been
relatively little adoption of the procedural shading and surfacing techniques
from the computer graphics field.
Naturally,
speculative investigations of form can be independent of the visual surface
properties of materials. Explorations using specific materials often dictate
homogeneous surfaces with more or less uniform qualities. But as always,
constantly developing fabrication technologies point to the eventual
feasibility of physically outputting arbitrarily heterogeneous construction
components. Designing surfaces that can
vary their qualities across their extent (or through their volume) in response to
form, time, or environment is not simple. As is still the case with parametric
form design, learning to use procedural shading techniques and tools can be
particularly challenging. Texts, tutorials, and courses almost uniformly assume
a context and language of film or game production. More often than not, the
degree of knowledge of programming and mathematics assumed represents a
significant hurdle for non-computer scientists.
There is
currently a wealth of books explaining specific software interfaces, equations,
and technical details [1][6][7][9][16]. Most of these also abstract their
software tutorials and example implementations into more generalized
techniques. Only rarely do they then venture into more theoretical concepts.
That is, technique descriptions are presented assuming detailed knowledge of
technical/mathematical process, and concepts in turn assume comprehension of
these techniques. There is a tendency
for “how” to precede “what,” and only occasionally to be followed by “why.”
Having identified this, a reversal is proposed: to move from concepts, to
general techniques, to specific technologies. It is hoped this will enable more
rapid and efficient evaluation at a high-level, as is discussed in the
conclusion.
The words surface
and surfacing are used distinctly in this context from questions of
geometry and modeling. They rather refer to the visual properties that modeled
geometry takes when rendered (or perhaps eventually, when fabricated). Note
that shaders, the small programs that define surface qualities, can also
modify the shape of geometry, sometimes very drastically. This offers both some
problems and opportunities for architecture. One can generate images showing
much more complex forms than can easily be interactively modeled. This is problematic for 3D output, however.
We will always be able to render and visualize forms that only show surfaces,
without the full geometry necessary for fabrication being representable in
memory. This remains a somewhat odd concept: distinguishing between a virtual form
that is “actually” represented, as opposed to a form that only appears to be
fully represented.
This section
collects and assembles significant concepts, techniques, and technologies in
three subsections. The first will present key terminology, beginning a concise
cross-disciplinary lexicon for procedural shading from architectural concepts.
Groupings of primary shading techniques are then developed using this
terminology. The section concludes with a brief survey of technologies in which
these techniques can be implemented and used.
Despite the
relatively recent introduction of interactive tools for procedural shader
authoring, the educational situation required for adoption is in need of
improved interdisciplinary terminology. A typical starting place involves
learning the vector math necessary to navigate descriptions looking something
like "calculate the dot product of the forward-facing normal and the cross
product of the negated normalized vectors from the eye and light".
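Written out, that description is only a few lines of vector code. A minimal sketch in Python, with vectors as plain tuples and all specific values chosen purely for illustration:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def negate(v):
    return tuple(-c for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The quoted description, term by term:
normal = (0.0, 0.0, 1.0)               # forward-facing surface normal
to_eye = normalize((1.0, 0.0, 1.0))    # normalized vector toward the eye
to_light = normalize((0.0, 1.0, 1.0))  # normalized vector toward the light

value = dot(normal, cross(negate(to_eye), negate(to_light)))
```

The difficulty for newcomers is rarely the code itself but knowing which vectors to combine, and why.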
Most
interactive 3D design proceeds in explicit stages of modeling objects,
specifying the properties of those objects, positioning them within an
environment, graphing their changes over time, etc. A procedural design
paradigm can also take this approach: a software algorithm uses rules,
functions, and iteration to algorithmically complete each of the above tasks.
For example, generate tree geometry, distribute copies of them around the
hillside, paint their leaves green, and make them sway in the wind. Regardless
of internal representations (e.g. NURBS, polygons, etc.) design largely
proceeds in the way we commonly think of our physical world: collections of
objects and their (animated) properties.
By contrast,
specifying procedural shaders requires a mental shift to a world model more
analogous to perhaps particle physics – requiring an implicit paradigm where
objects are barely acknowledged. Instead the properties of a single localized
point are considered, generally in isolation, sometimes with reference to its
immediate neighbors. This is represented as a massively parallel simulation
with each point determining its properties simultaneously and independently.
Instead of explicitly drawing a red circle on a blue surface by indicating a
position and radius for the new shape to be drawn, each point that will be
visible on the surface simultaneously considers, “am I close enough to that
point over there? If I am, I’ll make myself red, otherwise I’ll just stay
blue.” Object identity becomes an emergent perceived property of the behavior
of collections of coalescing fragments. Shifting to this implicit
representation is a core concept of procedural surfacing [1].
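The red-circle example above can be sketched directly as point-wise logic; the center, radius, and grid resolution here are arbitrary illustrations:

```python
import math

def shade_point(u, v, center=(0.5, 0.5), radius=0.2):
    """Run independently for every surface point: am I close enough to be red?"""
    distance = math.hypot(u - center[0], v - center[1])
    return "red" if distance < radius else "blue"

# No circle is ever "drawn": evaluating every point of a grid in
# isolation makes the circle emerge from local decisions.
grid = [[shade_point(x / 9, y / 9) for x in range(10)] for y in range(10)]
```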
The surfaces
and volumes to be visualized have a potentially unlimited set of qualities.
A few (potentially intersecting) classes of qualities will be considered. Local
qualities include physical/formal traits at a specific point, e.g., color,
position, light emission, shininess, or translucence. These qualities are
largely independent of other locations and can be considered in isolation,
potentially in a distributed fashion. A second category of qualities of a
location is qualities dependent on knowledge of qualities at other locations.
Such regional qualities include visibility, illumination, curvature,
orientation, or whether it is in the interior or near the profile. Note that
most if not all of these qualities could be treated as binaries (up/down,
(un)seen, lit/dark) or as continuums. There are also highly subjective
qualities. Traits such as whether something appears organic or mechanical,
futuristic, or feminine are impossible to universally formalize, and yet
arbitrary mappings of parameter values to other formal qualities are routine.
For example, increasing a “retro-futuristic” parameter could be made to cause a
surface to become more metallic, rounded, shiny, and glowing.
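One such arbitrary mapping might be sketched as a single high-level parameter driving a cluster of low-level qualities; the particular attribute names and ranges here are invented for illustration:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def retro_futuristic_material(amount):
    """One subjective slider drives several formal qualities at once."""
    return {
        "metallic":  lerp(0.1, 1.0, amount),
        "roundness": lerp(0.2, 0.9, amount),
        "shininess": lerp(5.0, 80.0, amount),
        "glow":      lerp(0.0, 0.6, amount),
    }
```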
Having
considered the above qualities, a significant number of techniques will rely on
concepts of difference. For example, transitions from one
quality to another are manifested in many ways: the manner of blending, the
velocity and accelerations of changes, the way shapes intersect with or without
discontinuity. Repetition as a fundamental concept appears in patterns
based on tiling, subdivision, and branching. Most importantly, the
(ir)regularity of difference within a pattern is continuously adjustable. This
is usually not an instancing operation (iterative copies/instances), but is
rather a property of a point in space. Synchronization then can be
perceived between transitions of different qualities.
An additional
cluster of concepts revolves around relationships in the
environment. Most techniques have one or more elements of proximity at
their root: differences in qualities emerge based on distance, and
perceived groupings are controlled with synchronized quality transitions. Openings
through surfaces and volumes create connections between separated spaces.
Finally, direction is perhaps second only to repetition as a key
building block for procedural shading. Directions that are frequently useful at
the surface point being considered include the direction to the viewer, to a
light, or away from (or along) the surface. Again continuums between binaries
are commonplace: toward/away, collide/blend, attract/repulse, influence/ignore.
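The toward/away binary, for instance, is commonly softened into a continuum via the dot product of the surface normal with the direction to the viewer; a sketch:

```python
import math

def facing_ratio(normal, to_viewer):
    """Smooth measure of how directly a point faces the viewer."""
    n_len = math.sqrt(sum(c * c for c in normal))
    v_len = math.sqrt(sum(c * c for c in to_viewer))
    d = sum(a * b for a, b in zip(normal, to_viewer)) / (n_len * v_len)
    return (d + 1.0) / 2.0  # remap [-1, 1] to [0, 1]: 1 = toward, 0 = away
```

Such a value can then drive any other quality, for example making edges that face away from the viewer glow or fade.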
The concepts
identified above prototype a framework for experimentation and discussion, but
they are far from a comprehensive list. They are familiar terminology for most
spatial design contexts and are useful in describing techniques for procedural
shading such as those below. Having only a few pages limits the scope to naming
and describing techniques rather than providing tutorials. It is hoped that
collecting and contextualizing them into an accessible overview of core ideas
will facilitate further investigation of the processes found to be appropriate
and useful.
Emerging from
the language of the above concepts are the techniques with which procedural
shading can be approached. There are a number of books containing detailed
methods for various aspects of algorithmically creating and manipulating
surface properties. One way of grouping many of these follows, using the
conceptual framework above.
A number of
techniques relate to the assembly of the infrastructure of a
surface. As in most design domains, a substantial amount of effort goes into
analysis of the needed sub-components through several hierarchical processes of decomposition.
Surface qualities are usually divided into realms of “color”, “illumination”,
or “displacement”. At this point, common language of computation aligns with
that of physical construction: decisions must be made regarding what is
“lightest or cheapest” rather than “heavy and expensive”, in this case with
respect to computation resources. For example, high-resolution, pre-computed
texture maps might be found to be much more “expensive” than procedural shading
options. Displacement is almost always much “cheaper” than equivalent geometry,
but it can be difficult to work with from an aliasing perspective in many implementations.
Bump mapping is generally cheaper and easier, but it can yield poor visual
appeal under many circumstances.
The hierarchy
of a surface’s qualities is often constructed in layers, with each layer
consisting of some form of element repeating in a pattern.
Elemental forms such as bumps, cracks, or shapes usually have their qualities
determined by proximity-based transitions. Surface locations might become
yellow within a certain distance of a location, but become transparent and
black outside of that range, forming a circle. Patterns of such elements are
not created by iteratively drawing one, followed by another, as is commonly the
case with procedural modeling. Rather, a location on a surface whose qualities
are being determined has its position remapped from a global space into a more
local coordinate system, given a desired frequency of repetition (perhaps
varying). These patterns of collected elements can be accumulated in layers,
placed over one another. The means of combining them might be as simple as
averaging or dissolving between each layer’s values, or using more complex
synchronizations such as weaving or cloning [10].
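The coordinate-remapping idea described above can be sketched briefly: each point folds its global position into a repeating local cell, shades itself within that cell, and layers are averaged together. The pattern, frequencies, and weights are arbitrary illustrations:

```python
import math

def local_coords(u, frequency):
    """Remap a global coordinate into a [0, 1) cell repeating `frequency` times."""
    return (u * frequency) % 1.0

def dot_pattern(u, v, frequency=4.0, radius=0.3):
    """Each point decides, within its local cell, whether it is inside a dot."""
    lu, lv = local_coords(u, frequency), local_coords(v, frequency)
    d = math.hypot(lu - 0.5, lv - 0.5)
    return 1.0 if d < radius else 0.0

def layered(u, v):
    """Combine two pattern layers by simple averaging."""
    return 0.5 * dot_pattern(u, v, 4.0) + 0.5 * dot_pattern(u, v, 9.0)
```

Note that no dot is ever drawn after another: every point evaluates the whole stack of layers for itself.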
Usually once
established, a pattern-based shader is subjected to different manipulation
techniques. Displacement is used to fold, twist, and deform the rendered
geometry, adjusting the form in specific directions with controlled
transitions. In addition to moving sections of geometry, material removal
is simulated by manipulating opacity to create holes, cracks, and slices,
further sculpting surfaces, and creating openings. Additional visible geometry
is sometimes accumulated by rendering volumetric hypertextures with
techniques like “ray marching”.
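Two of the manipulations named above, displacement along the normal and simulated material removal via opacity, can be sketched as point-wise functions; the slice widths and amounts are illustrative:

```python
def displace(point, normal, amount):
    """Push a surface point along its normal by a (possibly varying) amount."""
    return tuple(p + n * amount for p, n in zip(point, normal))

def opacity(u, slice_width=0.1):
    """Simulated material removal: periodic bands become fully transparent."""
    return 0.0 if (u % 0.5) < slice_width else 1.0
```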
Each of these
techniques raises the distinction between the actual geometry modeled, stored
in memory, and rendered, versus the rendered apparent geometry that might be
made to look completely different. This becomes particularly critical when
fabrication is considered below.
Perhaps most importantly, qualities of nearly all techniques can be
controllably altered by gradually increasing irregularity using
parametric “noise”. This allows arbitrary aspects of repetition to be concealed
as desired. Injecting irregularity gradually quickly reveals that it is much
simpler to move a surface toward apparent chaos than to bring random qualities
into organized alignment.
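A sketch of such parametric irregularity: a deterministic per-cell pseudo-random value jitters a regular grid of elements, with one parameter scaling how much regularity is concealed. The sine-based hash is an arbitrary (if common) choice:

```python
import math

def cell_noise(i, j):
    """Deterministic pseudo-random value in [0, 1) for an integer cell."""
    h = math.sin(i * 127.1 + j * 311.7) * 43758.5453
    return h - math.floor(h)

def jittered_center(i, j, irregularity):
    """Slide each cell's element center by up to `irregularity` (0 = regular grid)."""
    return (i + 0.5 + irregularity * (cell_noise(i, j) - 0.5),
            j + 0.5 + irregularity * (cell_noise(j, i) - 0.5))
```

Because the noise is a pure function of position, every point of the surface agrees on the jitter without any shared state.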
Finally, there
are techniques related to the behavior of the procedurally
defined surface. How do the surface qualities transition in response to the
properties of other qualities (e.g. does cracking increase with curvature? Does
the floor appear more, or less shiny in areas of high traffic?) An interface
for controlling shading behavior is created via parameters. Appropriate
parameterization will allow interactive control over all qualities of features,
patterns, and layers. Qualities can be set uniformly for all locations at once,
or they can vary across sub-regions of a surface (e.g. based on proximity,
direction, noise, etc.) Techniques are then used to vary specific qualities
based on other potentially complex surface properties making such traits
dynamically responsive. Such synchronized mapping techniques in which
one quality of the form, environment, or surface drives another (usually
requiring value scaling and offsets) are the glue of procedural shading.
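That glue, scaling and offsetting one quality's range onto another's, is essentially a single remapping function; a sketch, with the curvature-drives-cracking use entirely hypothetical:

```python
def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Map `value` from one range onto another, clamping out-of-range drivers."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical use: curvature measured in [0, 2] drives crack density in [0, 1].
crack_density = remap(1.5, 0.0, 2.0, 0.0, 1.0)
```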
Not only are the
shaders structurally hierarchical; these parameter mappings usually are as
well. There is often a mix of high-level parameters that may allow many
qualities to be easily modified by changing one number, and low-level
parameters that allow minute adjustments to specific (sometimes obscure)
qualities. Setting reasonable ranges for parameters can be extremely difficult,
particularly as they begin to interact. A collection of parameters, along with
their ranges of values, forms a space of possible design “solutions”. These solution spaces are usually “biased”
and sculpted to primarily contain desirable designs.
Surfaces can
ultimately be made reactive to their environments, and potentially, even
seemingly nondeterministic. Improvisation of behavior becomes feasible as
choices, transitions and elements are chosen based on arbitrarily specifiable
criteria (again: usually based on proximity, synchronization, direction, etc.)
Designing such behaviors can quickly become an exercise in knowledge
representation: implementing design principles such as what color to use in
what context, or in what regions of a surface should bumps/cracks/dirt appear
to form. Such “intelligent” local behavior can mimic the decisions a designer
might choose if they were painstakingly painting the surface by hand according to
context and knowledge.
What knowledge
can and can’t be encoded parametrically and algorithmically? Formal design and
physical qualities (e.g. color theory, selective surface aging) are much more
feasible to encode than associative traits requiring recognition and “common
sense”. Regardless, such generative design opportunities become increasingly
needed as the potential complexity of virtual environments capable of being
represented grows. Adopting strategies for procedurally modulating the
performance of surfaces (interior and exterior) is required when manually
texturing and surfacing every point in a generated world loses viability.
Homogeneous qualities no longer suffice in complex, responsive environments.
Once
parametric solution spaces have been developed, different approaches can be
used for exploring and searching them for desirable solutions. Most commonly,
an interface is provided where individual parameters can be manually adjusted.
This allows for an exhaustive (and potentially exhausting) means of traversing
a high-dimensional space of solutions, analogous to taking slow and careful
steps down one avenue after another. An
alternative approach is the use of interactive evolutionary design techniques.
In such approaches, a relatively large number of (initially random) solutions
are evaluated, with more attention given to re-combinations and variations of
the “best” solutions found [10]. Visualization and navigation via such genetic
approaches is more like sending a crowd to search a city: the trade-off is the
requirement to constantly evaluate and compare each searcher’s findings, as
well as the control lost in granting each (unintelligent) searching agent autonomy.
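The evolutionary loop described above can be sketched compactly. In interactive use the designer supplies the "best" judgments; here a programmatic stand-in fitness takes that role, and the population sizes, mutation scale, and genome length are all illustrative:

```python
import random

def evolve(fitness, genome_len=4, pop_size=20, generations=30, seed=1):
    """Evolve parameter vectors (genomes) toward higher fitness."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]            # keep the "best" designs
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)         # small random mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: prefer parameter vectors near the center of the space.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

Replacing the lambda with a human picking favorites from rendered candidates turns this into the interactive variant discussed in the text.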
The
organization of techniques described here has been largely independent of
specific software packages, hardware, etc. Most could even be deployed in
traditional media, if labor were not a significant issue. The next section
discusses specific technologies that provide varying degrees of accessibility
and relevant implementations.
Procedural
shading techniques have been in existence for over twenty years, but it is only
recently that interfaces are emerging to make them more accessible. Languages
for authoring custom shaders were for the most part the domain of RenderMan
compliant renderers, although Mental Ray has also been an option for the more
technically inclined. While it is quite difficult for non-programmer artists
and designers to write raw shading code, systems have made advances in
accessibility by integrating node graph interfaces (e.g. Pixar’s Slim, and both
Mental Ray and RenderMan integration with Maya’s Hypershade interface [9][15].)
One slowly
emerging advance is the ability to develop procedural shading interactively
without needing to software render to evaluate results. Real-time programmable
graphics processing units (GPUs) are increasingly offering massive parallel
processing capabilities allowing the techniques above to be generated and
modified at interactive rates [7][16].
Driven by rapidly advancing video game technology, support for authoring
real-time procedural shaders is being integrated into most 3D animation software,
although fairly high-end graphics card requirements remain. Architecture,
performance, and procedural shading are slowly coming together in the
technology emerging from collisions between VJ performance software, 3D
real-time graphics, and responsive environment installation. Software like Jitter
and Processing allows synchronized, reactive mappings between motion, networked
data, projected video, and sound [3][8]. These technologies are enabling real
changes in visible and aural qualities of actual physical surfaces and volumes
to adapt in response to dynamic sensed environments.
The
potentially most significant frontier however is physical fabrication of the
surface and volume qualities resulting from the above techniques. While
expensive rapid prototyping approaches have long existed in the manufacturing
world in many forms, the shift from computer simulation to physical embodiment
is quickly becoming more accessible. Routine use of very high-end processes
like laser sintering and stereolithography are generally cost prohibitive for
most form designers with machines costing hundreds of thousands of dollars, and
material costs being problematic as well. “Desktop 3D printing” however has
become affordable in recent years, with costs down to tens of thousands of
dollars and using materials like starch or plaster combined with established
inkjet-based technology. Most relevant to the procedures described here, full
color printing is now rapidly emerging in low-end RP machines [19].
Indications
are that the technology will increasingly move toward point-by-point material
customization for each unit of volume. As color quality increases, hopefully
many of the other local qualities described above will become available as
potential parameters: variable material density, opacity, reflectance, etc. As
one reads of new experiments with inkjet technologies, e.g. printing conductive
or biological surfaces, one has to wonder whether procedurally mapping varying
physical properties such as electrical resistance, light and heat sensitivity,
etc. will come to be as easy as computer controlled pigment mixing currently
is, and what such capabilities will enable. These developments seem highly
probable when compared with the fully visionary push-button custom-manufacturing
future of the “Diamond Age” sought by slowly advancing, cutting-edge
technologies such as NanoFactories and programmable matter [12][14][17].
In the
meantime, current technology capable of full color printing can make use of
procedural shading techniques in a few ways. The surface qualities generated by
procedural shading can be converted to texture maps. This can either simply be
local color, or the effects of virtual environmental lighting (e.g. incident
light, shadows, reflections, highlights, etc.) can be included in the generated
texture. This texture can then be used when sending the geometry to a 3D color
printer. Procedurally produced surface displacement can also be printed, by
converting the rendered geometry into “actual” polygonal geometry (again, with
corresponding color and lighting qualities if desired.) Conversion of
procedural displacement to polygonal geometry also allows other existing output
technology pipelines to be used (e.g. laser cutting, etc.) Finally, normal
mapping, in which a 2D texture map is produced encoding displacements as the
“height” difference between complex and simple versions of geometry, may also
be of use with other CNC methods. 2D painting and printing techniques might
then be useful for outputting color qualities of such a constructed height
field.
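The conversion of procedural displacement into explicit, fabricable data can be sketched as "baking" a height field by sampling a displacement function over a grid; the particular function and resolution are stand-ins:

```python
import math

def displacement(u, v):
    """Arbitrary stand-in for a procedural displacement, returning [0, 1]."""
    return 0.5 + 0.5 * math.sin(6.283 * u) * math.sin(6.283 * v)

def bake_height_field(resolution=16):
    """Sample the procedure onto an explicit grid a fabrication pipeline can use."""
    return [[displacement(x / (resolution - 1), y / (resolution - 1))
             for x in range(resolution)]
            for y in range(resolution)]
```

The same sampling pass could also record per-point color, yielding paired height and texture data for color-capable output devices.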
This effort is
one example of a common challenge in a highly interdisciplinary arts and design
academic research environment: generating strategies for making emergent
technologies accessible in academic non-programmer contexts. A frequent
lifecycle in our field is as follows: initially software implementing new
capabilities becomes available, but requires a programming background to make
use of it. Recent examples include new
real-time shading algorithms or a computer vision library. In a second stage, a programmer creates an
interface or tool that, although challenging and possibly requiring technical
understanding of its process and implementation, no longer requires programming
to use. In a final stage of development, new technology sometimes evolves into
an “off-the-shelf” form, adopting a proper interface, with tutorial books
available in every bookstore. It is this middle stage in which we most commonly
find ourselves.
A difficult
question for educators and students alike is then how to balance time and
resources between these three spaces to attain desired goals. One can choose to
focus attention (and substantial time) learning useful programming and
scripting technologies such as shader languages, Python, Processing, or MEL. On
the other hand, a majority of designers choose instead to use primarily
off-the-shelf software. They concentrate on acquiring repertoires of clever
tricks using the latest plug-ins and data filters, pushing the capabilities of
the endless stream of new commercial and open-source products. A middle ground
between these extremes would be the usage of interfaces for constructing
algorithmic processes by assembling node graphs in functional equivalence to
writing lines of code, using software like Max/Jitter, Virtools, Slim, and even
potentially Maya’s hypershade [3][4][9][15].
The most
significant factor guiding this decision may turn out to be how potential work
environments value and/or necessitate collaboration between specialists,
relative to generalists. An additional deciding factor for learning and using
procedural shading techniques specifically has until just recently been the
financial expense of the necessary software. Increasing numbers of open source
renderers, node-based interface alternatives, and even downloadable educational
versions of expensive commercial products are now making this less of an issue
however. Given a target audience of students from a number of disciplines,
current educational curricula and texts have focused on technique and
technology, with analysis of underlying concepts remaining somewhat neglected.
Recent collaborations with architecture have provided the impetus to
re-consider useful procedural shading ontologies specifically. When combined
with other disciplinary investigations, generative representations in general
have opened doors to broader theoretical considerations.
In our
interdisciplinary research center, interactions between faculty and students
from dance, theater, critical theory, art, computer science, and architecture
have been generating a set of parallel concerns that seem to appear and
disappear with specific projects and collaborations. They align differently
between and within individual disciplines, and especially focus on interactions
with technology. In particular, such
topics include recurring (re)interpretations of performance, complexity,
emergence, and perception. Each carries with it a sizeable body of
knowledge that is usually approached from a disciplinary perspective, and often
can only be briefly considered, as less theoretical concerns are attended to,
given the practical demands and constraints of technology and time [20].
Performance, for example, recently reappeared in an architectural context:
asking what a given form does, how it does it, and how well. An understanding
of what it means to act, as well as to evaluate both method and effectiveness,
seem to merit substantial consideration [13]. Developing pedagogical strategies
for practically integrating such bodies of theory into more
interdisciplinary studio contexts, however, remains a challenging problem.
A specific
example: after numerous mentions in readings and collaborations (unrelated to
architecture or procedural shading) it became evident that gaining some
familiarity with the philosophy of Gilles Deleuze would be useful. When
introduced to his metaphysics, several ideas resonated with concepts described
above [5][18]. In particular, as was mentioned in the beginning of the paper,
one of the greatest challenges for students when learning procedural shading is
the mental shift required to move from a virtual world of individual objects
distinguishable by their assigned properties, to an environment uniformly
emerging from tiny sub-pixel size locations. These “micro-polygons” from which
all perceived forms ultimately emerge, are colored and positioned based
primarily on descriptions of their localized differences (this form of spatial
description of qualities also aligns with the representation of virtual forms
required by rapid prototyping.) Not only does this shift toward difference
appear to relate to the core of Deleuze’s metaphysics, but it also relies upon
an alternative interpretation of repetition, analogous to the one
required by procedural shading. Once
again, the generally iterative conception of repetition used in computer
graphics (“draw ten rows of triangles”) with each element being generally
interchangeable, must be thought of differently when employing procedural
shading’s implicit representations. The repetition used in procedural shading
is not the tiled forms of repeating textures or limited categories of painted
surfaces, but rather patterns emerging from the parallel accumulation of
singular fragments.
So how to
value integrating such related theory into interdisciplinary studio contexts?
In the above case it initially seems unlikely that an analogy to Deleuzian
metaphysics will provide an easier teaching approach to learning how the
techniques or technology in question function. Such theories could aid with
questions of what the technology and techniques do, and why,
however. Within the discipline of architecture, the example of Deleuze’s
metaphysics has seen extensive discussion in a number of contexts [11]. But
while the role of theory can be firmly established/determined within the
context of one individual discipline, there is minimal precedent in more
ephemeral and complex inter-disciplinary spaces.
This effort
can itself be viewed as a generative design project: the effects of shifting
between bottom-up and top-down approaches for teaching emerging technologies,
with increased emphasis on concepts and theory, remain uncertain. Iteratively
formalizing rules for responding to dynamic constraints within a design space
between art, philosophy, and science, and then “letting the system run” will
likely additionally yield unexpected outcomes. The difficulty of
conceptualizing this process within the framework proposed above makes it
tempting to turn the procedural techniques discussed towards concrete
comparative visualizations of the abstract possibilities. Might the behavior of
adaptive shaders be coaxed into revealing qualitative differences in alternate
regions of solution spaces? How might dialectic and rhetoric be mapped to
generative geometry and surface qualities? It seems feasible that abstract
process development could benefit if explicitly revealed via parametric
models and even physical embodiment.
While
technologists usually visualize complex algorithmic systems like procedural
shading in a framework of inputs, outputs, and data filters, there are numerous
alternatives for structuring, framing, and evaluating related tools,
techniques, and concepts. A generate-and-test cycle for rapid prototyping of
competing pedagogical ontologies would appear ideal. Ultimately however,
one-size-fits-all learning solutions seem increasingly inappropriate. Given the
extent of access to new processes, information, and resources, ambitious and
industrious students can obviously now individually pursue infinite avenues of
inquiry. The most valuable approach for educators then might in turn require a
shift from conveying the most critical information, to teaching adaptive
strategies: concentrating on exposing students to contemporary methods for
broadening their awareness, focusing evaluation, and accelerating
hyper-efficient adoption of emerging techniques, technologies, and concepts.
References
[1] Apodaca, Anthony and Larry Gritz. Advanced RenderMan. Morgan Kaufmann, 1999.
[2] Cook, Rob. “Shade Trees”, in Computer Graphics, SIGGRAPH 84 Proceedings, 1984.
[3] Cycling 74. Jitter software. http://www.cycling74.com/products/jitter, 2006.
[4] Dassault Systèmes. Virtools software. http://www.virtools.com, 2006.
[5] Deleuze, Gilles. Difference & Repetition. Columbia University Press, 1994.
[6] Ebert, David et al. Texturing and Modeling: A Procedural Approach. Academic Press, 1994.
[7] Fernando, Randima. GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics. Addison-Wesley, 2004.
[8] Fry, Ben and Casey Reas. Processing software. http://processing.org, 2006.
[9] Lanier, Lee. Advanced Maya® Texturing and Lighting. Wiley, 2006.
[10] Lewis, Matthew. Creating Continuous Design Spaces for Interactive Genetic Algorithms with Layered, Correlated, Pattern Functions. Ph.D. Thesis, Ohio State University, 2001.
[11] Lynn, Greg. Animate Form. Princeton Architectural Press, 1999.
[12] McCarthy, Wil. Hacking Matter. Basic Books, 2003.
[13] McKenzie, Jon. Perform or Else: From Discipline to Performance. Routledge, 2001.
[14] Nanorex, Inc. “Productive Nanosystems: From Molecules to Superproducts”, video, http://www.nanorex.com, 2006.
[15] Pixar. “RenderMan – The Tools”. https://renderman.pixar.com/products/tools/, 2006.
[16] Rost, Randi. OpenGL® Shading Language. Addison-Wesley, 2006.
[17] Stephenson, Neal. The Diamond Age: or, A Young Lady’s Illustrated Primer. Spectra, 1995.
[18] Williams, James. Gilles Deleuze’s Difference and Repetition: A Critical Introduction and Guide. Edinburgh University Press, 2004.
[19] Z Corporation. Spectrum Z510 Full Color 3D Printing System.
[20] Zuniga Shaw, Norah and Matthew Lewis. “Inflecting Particles: Locating Generative Indexes for Performance in the Interstices of Dance and Computer Science”, Performance Research 11(2), Taylor & Francis Ltd, 2006.