Redirecting design generation in architecture
Dr Alexander Koutamanis
Faculty of Architecture, Delft University of Technology, Delft, The Netherlands
e-mail: a.koutamanis@bk.tudelft.nl
Abstract
Design generation has been the traditional culmination of computational design theory in architecture. Motivated either by programmatic and functional complexity or by the elegance and power of representation schemes, research has produced generative systems capable of creating new designs that satisfy certain conditions or of reproducing exhaustively entire classes of known designs. Most generative systems aim at a complete spatial design (detailing being an unpopular subject), with minimal, if any, intervention by the human designer. The reason for doing so is either to demonstrate the elegance, power and completeness of a system or simply that the replacement of the designer with the computer is the fundamental purpose of the system.
The ongoing democratization of the computer stimulates reconsideration of the principles underlying existing design generation in architecture. While the domain analyses upon which most systems are based are insightful and stimulating, jumping to a generative conclusion may be based on a very sketchy understanding of human creativity and of the computer’s role in it. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions.
Architectural generative design systems can be redirected towards design exploration, including the development of alternatives and variations. Human designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. The use of analytical means such as simulation, coupled with the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages.
The transition to feedback support and background assistance for the human designer presupposes re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence but has always kept it subservient to global schemes that tended to be holistic, rigid or deterministic. The acceptance of local conditions as largely independent structures (local coordinating devices) affords a more flexible attitude that permits not only the emergence of internal conflicts but also the resolution of such conflicts in a transparent manner. The resulting autonomy of local coordinating devices can be expanded to practically all aspects and abstraction levels. The ability to have intelligent behaviour built into components of the design representation means that we can treat the built environment as a dynamic configuration of co-operating yet autonomous parts.
Design automation has been exercising a growing influence on architecture for a number of decades. Its initial focus on heady subjects such as design generation and its grounding in academic research created substantial expectations. However, despite developing a number of elegant prototypical systems and interesting insights, academic activities arguably failed to produce a deep influence on practice. The general democratization of information and communication technologies in the second half of the nineteen nineties proved a more substantial force. The wide availability of affordable computer power, coupled with the sheer modernity of computerization, meant a swift transition from analogue to digital in a growing number of practical aspects of architectural design. This transition, however, was generally unguided by knowledge of and experience in computational design methods and techniques. Consequently, the practical side of computing in architectural design remains a mere application of commercial systems. Moreover, it aims at mundane goals vaguely linked to efficiency, primarily the production of drawings on paper. These systems are either general-purpose or superficially geared to architectural conditions but ultimately less knowledgeable than required. Both categories may boost efficiency in the production of design documentation (although experience reveals quite a few dark sides) but invariably fail to provide feedback, support and guidance. One may argue that by replicating the apparent modes of designing, the current forms of design automation in architectural practice cannot lead to improvement of design performance, increase in our understanding of the built environment or advances in the performance of the built environment itself.
Having been largely superseded by social and commercial developments, academic research responded in a number of diverging ways. Current foci include advanced technologies such as simulation and rapid prototyping, information dissemination and communication through Internet-related technologies, modest epigones of artificial intelligence and knowledge-based ideas, as well as the representation of complex and irregular design forms. The difference between the last subject and the others lies in its origin. While the others follow closely permanent directions in computational architectural research, modelling complexity and irregularity is stimulated by a relatively recent tendency in practice that goes under several names, including the doubtful label of cyber-architecture. This tendency concentrates on free-form surfaces: flexible, adjustable surfaces that can be interactively moulded into the desired form. In architectural terms it is related to both the expressionist revival of the nineteen fifties and high-tech ideas, but it derives more from advances in the computational representation of curvilinear forms. These originate from automotive, aeronautic and naval design, where they were applied to aerodynamic and intuitive forms. In contrast to these industries, architecture has yet to utilize the full potential of such representations, especially with respect to the transition to construction [1, 2].
The leap from representation and analysis to generation in free-form design is the logical next stage. The main reasons for this are the mathematical background of the geometric constructions used and the desire to apply dimensional or relational constraints that complement the designer’s intuitive sculpting of the form. This is consistent with the general priorities of architectural computerization so far. Design generation has been the traditional culmination of computational design theory in architecture. The ability to synthesize designs with little or no intervention by a human operator implies complete and unambiguous understanding of a particular aspect or a class of design problems. Consequently, generation has been motivated by programmatic and functional complexity, as in space allocation systems [3], or by the elegance and predictive power of representational analyses, as in shape grammars and rectangular arrangements [4-7]. Such generative systems are respectively capable of producing new designs that satisfy certain conditions, for example proximity measures derived from the brief, or of reproducing exhaustively entire classes, such as all possible Palladian villas, comprising known and plausible new designs [8, 9].
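The flavour of such exhaustive, representation-driven generation can be conveyed by a small sketch. The following Python fragment is only an illustration under assumed conventions (a binary slicing subdivision and invented room labels); it is not one of the published grammars or allocation systems cited above.

# Minimal sketch: exhaustive enumeration of "slicing" floor plan layouts by
# recursive subdivision of a rectangle, in the spirit of rectangular
# arrangements. Room labels and the cut notation are illustrative assumptions.
def enumerate_layouts(rooms):
    """Yield every binary slicing tree over the given room labels."""
    if len(rooms) == 1:
        yield rooms[0]
        return
    for split in range(1, len(rooms)):
        for left in enumerate_layouts(rooms[:split]):
            for right in enumerate_layouts(rooms[split:]):
                for cut in ("|", "-"):      # vertical or horizontal cut
                    yield (cut, left, right)

if __name__ == "__main__":
    layouts = list(enumerate_layouts(["hall", "living", "kitchen"]))
    print(len(layouts), "candidate layouts")    # 8 for three rooms
    print(layouts[0])                           # ('|', 'hall', ('|', 'living', 'kitchen'))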
Most generative systems aim at a complete spatial design, usually a floor plan layout. Detailing has attracted some attention but never became a popular subject. The human user (designer) is allowed minimal scope for intervention and guidance, ranging from seed choice to termination of a process. There are two alternative reasons for doing so. The first is to demonstrate the elegance, power and completeness of a (normally deterministic) system. User intervention in such a system does not normally go beyond elementary initial choices (default values and peripheral variables). The second is that the replacement of the designer with the computer can be the fundamental purpose of the system. This generally relates to the designer’s apparent information-processing limitations, as well as the perception of his role as a mere intermediary and facilitator in prescriptive processes. In other words, the design problem is deemed either already resolved by the generative system or too complex for the human designer.
From a historical perspective, generative systems based on the morphological introspection of free-form design attitudes fall directly under the category of representationally motivated systems. Moreover, they cannot be expected to improve on the design performance of prior systems, which build on extensive analyses of their subject matter. These analyses form a knowledge component that cannot be rivalled by simple geometric parametrization schemes and the transformability of computer-based representations. Furthermore, the approach underlying morphological generative systems must be correlated with current attitudes and priorities. The ongoing democratization of the computer and the resulting emphasis on information and its processing stimulate reconsideration of the principles underlying design automation in architecture, including:
· Combination and integration of different aspects rather than concentration on a single one that dominates or encompasses the others. This is a logical consequence of the increasing number and specificity of requirements on the built environment and the proportional increase of pressure on architectural design performance.
· A descriptive approach to design analysis and synthesis (as opposed to proscriptive and prescriptive approaches). This means that the human designer re-emerges as solely responsible for decision making and hence also for information processing.
The domain analyses upon which most generative systems are based are undoubtedly insightful and stimulating, but the leap to a generative conclusion has almost always been based on a very sketchy understanding of human creativity and of the computer’s role in designing and creativity. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions. This is probably most evident in the area of analysis.
The two main contributions of architecture to social development in the twentieth century have been the satisfaction of quantitative needs in the built environment and the qualitative improvement of buildings. The quantitative dimension is well known and publicized. A familiar example is the housing shortage in the reconstruction period following the Second World War and the related development of industrialized building. The qualitative dimension is frequently underplayed but equally prominent in e.g. Modernist design thinking in the same period. Nevertheless, the rise of consciousness concerning the quality of working and living conditions after the reconstruction period has been a growing problem for architectural design. The explosive growth of programmatic requirements on building behaviour and performance meant an increase in informational complexity that was beyond the methodical and technological possibilities of conventional design approaches. While Modernist thinking, for instance, could integrate most such requirements in simplistic dogmatic aesthetic formulations of the relationships between form and function, post-war clients and authorities demand proof of conformity and satisfaction.
Under these conditions design analysis grew to provide the answers required by the brief and building regulations, as well as design guidance by means of feedback. In order to achieve this with simplicity and abstraction, analysis made use of proscriptive and prescriptive systems. Proscriptive systems comprise rules that determine the acceptability of a design’s formal or functional aspects on the basis of non-violation of certain constraints. Formal architectural systems such as Classicism and Modernism, as well as most building regulations, are proscriptive systems. Prescriptive systems are in a sense reversals of proscriptive ones. They suggest that if a predefined sequence of actions has been followed, the design results are acceptable (i.e. satisfy the requirements that determine the choice of these actions). Many computational design approaches are prescriptive in nature and motivation [10].
The extensive use of proscriptive and prescriptive approaches in design analysis is closely related to the domination of formal systems (styles) in architecture. The acceptance of a formal system as the current norm means that most requirements can be expressed in either a proscriptive or a prescriptive manner. Further exploration of the significance and consequences of a design decision or action is by definition superfluous. The eclectic spirit of recent and current architecture reduces the value of normative approaches, as it permits strange conjunctions, far-fetched associations and unconstrained transition from one system to another. In addition, the computer provides means for analyses and evaluations of a detailed and objective nature. These dispense with the necessity of abstraction and summarization in rules and norms. This does not mean that abstraction is unwanted or unwarranted. On the contrary, abstraction is an obvious cognitive necessity that emerges as soon as a system has reached a stable state. Consequently, one can expect the emergence of new abstractions on the basis of the new detailed, accurate and precise analyses. It is quite probable that several older norms will be among the new abstractions.
The main characteristic of the new forms of analysis is that they follow an approach we may call descriptive. They evaluate a design indirectly by generating a description of a particular aspect, comprising detailed, measurable information on the projected behaviour and performance of the design. This description is correlated with the formal representation of the design in two ways: as a source of input to the analysis and as the framework for the presentation of analysis results (feedback). This permits interactive manipulation of the analysis/synthesis tandem, e.g. for trying different alternatives and variations. The close correlation of analysis with synthesis also facilitates design guidance and reduces the danger of trial-and-error redundancy in design exploration. In short, the descriptive approach complements (rather than guides) human design creativity by means of feedback from which the designer can extract and fine-tune constraints.
In functional analyses it has become clear that most current norms and their underlying principles have a very limited scope, namely the control of minimal specifications by a lay authority. They are often obsolete as true performance measures and grossly insufficient as design guidance. The solution presented by the descriptive approach is the substitution of obsolete abstractions with detailed information on functionality and performance. For example, Blondel’s formula for stair sizes (2 × riser + tread = step length) can be abandoned in favour of an ergonomic analysis of stair ascent and descent by means of simulation [11]. The analysis is performed in a multilevel system that connects normative levels to computational projections and to realistic simulations in a coherent structure where the assumptions of one level are the subject of investigation at another level [12, 13]. The same multilevel system also links stair analysis to analyses of other, related subjects such as fire escape (egress).
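The limits of such a norm are easy to show with a small sketch. The following Python fragment checks stair dimensions against Blondel’s formula, using an assumed comfort band of 60–65 cm for the step length; the band and the example dimensions are illustrative assumptions, not quoted regulations.

# Minimal sketch: checking a stair against Blondel's rule of thumb
# (2 x riser + tread = step length). The 60-65 cm band is an assumption.
def blondel_check(riser_cm, tread_cm, step_min=60.0, step_max=65.0):
    """Return True if 2 * riser + tread falls inside the assumed band."""
    step_length = 2 * riser_cm + tread_cm
    return step_min <= step_length <= step_max

if __name__ == "__main__":
    print(blondel_check(18, 27))   # 2*18 + 27 = 63 cm: passes
    # A much steeper stair also passes (2*21 + 21 = 63 cm), which is exactly
    # why the text argues for ergonomic simulation instead of the formula.
    print(blondel_check(21, 21))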
The move of design analysis towards a new paradigm based more on simulation than on abstractions derived from legal or professional rules and norms is more evident in environmental aspects. Recent developments in areas such as scientific visualization provide advanced computational tools for achieving high detail and exactness, as well as feedback for design guidance. The integration of photorealistic and analytical representations clarifies and demystifies the designer’s insights and intuitions. Moreover, the combination of intuitive and quantitative evaluation offers a platform of effective and reliable communication with other engineers who contribute to the design of specific aspects, as well as comprehensible presentation of projected building behaviour and performance [14]. Simulation and scientific visualization also expand the possibilities of architectural control to invisible aspects, such as indoor climate, that have so far been treated summarily by normative rules [15].
Descriptive analysis provides useful insights into the causes and effects of various design problems but also poses new problems that may impede the integration of analysis results in designing. One such problem is that, in contrast to proscriptive or prescriptive analysis, it may add substantial amounts of information to the already unmanageable loads the designer must handle. A second problem lies in the coherence of the cues analysis provides for the further development of a design. Descriptive analysis fares significantly better in this respect than its proscriptive or prescriptive counterparts. Still, the extent of design aspects that need to be considered imposes a heavy intellectual burden on the designer. Transforming the emerging network of factors and relationships into design guidance can be so complicated that the designer may fall back on normative, abstract decision making.
The complexity of integrating analysis and synthesis suggests that the relationship between the two is not as direct as normative approaches have led us to believe. Integration requires interpretation and transformation capabilities that are apparent in the architect’s treatment of relatively simple problems but tend to recede into the background when complexity increases beyond familiar sizes. One solution to the integration of descriptive analyses in designing is through the addition of two interface components that extend these capabilities.
The first is a memory component that provides precedents for the design problem in hand. The precedents have a known behaviour and performance. These are combined with a plausible attribution of the factors that influenced not only this particular behaviour and performance but also the design direction taken. The memory component can be implemented as case-bases of precedent designs. These designs provide an explicit source of design information that can be matched to a design problem in terms of form, function and performance. Comparison between precedents with a known performance and a new design facilitates identification of design aspects that need to be improved, as well as of wider formal and functional consequences. Development of design case-bases is no trivial task. Transformability in the representation of cases and flexible classification in a database are critical to the identification and treatment of a design aspect. Nevertheless, the state of the art in case-based reasoning and the extensive corpus of analysed designs provide the essential building blocks [16, 17].
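As a rough indication of what such a memory component might look like, the following Python sketch retrieves the precedents closest to a new design problem from a small case base. The feature names, the normalization and the similarity measure are illustrative assumptions rather than a description of any existing case-base system.

# Minimal sketch of the memory component: a case base of precedent designs
# described by simple feature vectors, matched to a new design problem.
from math import sqrt

CASE_BASE = [
    {"name": "precedent A", "rooms": 4, "floor_area": 120.0, "daylight_score": 0.8},
    {"name": "precedent B", "rooms": 6, "floor_area": 180.0, "daylight_score": 0.6},
    {"name": "precedent C", "rooms": 4, "floor_area": 110.0, "daylight_score": 0.5},
]

def similarity(problem, case, features=("rooms", "floor_area", "daylight_score")):
    """Negative Euclidean distance over roughly normalized shared features."""
    total = 0.0
    for f in features:
        scale = max(abs(problem[f]), abs(case[f]), 1e-9)
        total += ((problem[f] - case[f]) / scale) ** 2
    return -sqrt(total)

def retrieve(problem, k=2):
    """Return the k precedents closest to the design problem in hand."""
    return sorted(CASE_BASE, key=lambda c: similarity(problem, c), reverse=True)[:k]

if __name__ == "__main__":
    new_design = {"rooms": 4, "floor_area": 115.0, "daylight_score": 0.7}
    for case in retrieve(new_design):
        print(case["name"])          # precedent A, then precedent C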
The second interface component comprises adaptive generative systems capable of guiding the exploration of design aspects. Such exploration is applicable to both precedents and new designs. The aim of these systems is to provide feedback from analysis to synthesis in a more structured and creative manner than by the juxtaposition of representations. By exploring the scope of the analysis and the applicability of the conclusions to more designs, the designer generates a coherent and consistent collection of partial solutions that explore a relevant solution space. Development of this component poses more questions than the previous one, largely because of the incompatibility of existing generative approaches. The redirection of architectural generative systems towards design exploration, including the development of alternatives and variations, aims at complementing the human designer’s capabilities. Designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. The use of analytical means such as simulation, coupled with the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages.
The transition from holistic and prescriptive generative systems to unobtrusive feedback support and background assistance presupposes re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence. For example, every rule in a shape grammar encapsulates meaningful relationships between design elements and schemes, which prove significant for e.g. the utility of shape codes in image indexing and retrieval [18]. However, local intelligence has generally remained subservient to global schemes that tended to be holistic, rigid or deterministic. Such schemes are justified by the perception of spatial representations as collections of atomic components linked to each other either by basic binary relations or by abstract global schemata. This perception appears to be still dominant in computer-aided design, despite early identification of two main issues in symbolic representation:
· which primitives should be employed and at what level [19], and
· the possibility of units (chunks, partitions, clusters) more structured than simple nodes and links or predicates and propositions [20].
A conventional spatial representation such as a map or a floor plan comprises atomic elements such as individual buildings or building components. These elements appear at an abstraction level appropriate to the scope of the representation. Depending upon the scale and purpose of a map, buildings are depicted individually or are aggregated into city blocks. Similarly, a floor plan at the scale of 1:50 depicts building components and elements that are ignored or abstracted at 1:500. Most other aspects of built form remain implicit, with the exception of those indicated as annotations by means of colouring and textual or symbolic labels, which convey information such as grouping per subsystem, material properties or accurate size. Relations between elements, such as the alignment of city blocks or the way two walls join in a corner, are normally not indicated, unless of course they form the subject of the representation, as in detail drawings.
Computer-aided design has largely adopted this structure. Drafting and modelling systems are based on the implementation structure of analogue architectural representations. They contain graphic elements representing walls, doors, windows, stairs and other building elements or components. A major but hopefully temporary irritation is that the representations are at the level of these graphic elements, i.e. lines and shapes, rather than of the represented objects. As a result, manipulation of the representation in the framework of e.g. design analysis is often unnecessarily cumbersome and time-consuming. On the other hand, academic research has considered extensively issues other than geometric information on the shape, size and position of components, notably the explicit representation of spatial primitives, spatial relationships and other entities relevant to the designer. Using formalisms such as semantic networks, frames, scripts and objects, academic research has produced associative symbolic representations which share the following features (a minimal sketch follows the list below):
· a representation consists of objects and relations between objects;
· objects are described by their type, intrinsic properties and extrinsic relations to other objects;
· properties are described by constraints on parameters;
· relations are described by networks of constraints that link objects to each other.
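The following Python sketch is one possible reading of these four features, with invented class and property names; it is an illustration under assumed conventions, not an existing CAD schema.

# Minimal sketch of an associative symbolic representation: objects with a
# type, intrinsic properties (constraints on parameters) and extrinsic
# relations expressed as constraints linking objects to each other.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DesignObject:
    """An object described by its type, intrinsic properties and relations."""
    obj_type: str                                      # e.g. "wall", "door"
    properties: dict = field(default_factory=dict)     # parameter -> value
    relations: list = field(default_factory=list)      # extrinsic relations

@dataclass
class Relation:
    """A relation described as a constraint linking two objects."""
    kind: str                                          # e.g. "hosted_by"
    source: DesignObject
    target: DesignObject
    constraint: Callable                               # predicate over the two objects

    def satisfied(self):
        return self.constraint(self.source, self.target)

if __name__ == "__main__":
    wall = DesignObject("wall", {"length": 5.0, "thickness": 0.2})
    door = DesignObject("door", {"width": 0.9})
    # The relation carries a constraint on parameters: the door must fit the wall.
    fits = Relation("hosted_by", door, wall,
                    lambda d, w: d.properties["width"] < w.properties["length"])
    door.relations.append(fits)
    print(fits.satisfied())   # True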
Associative symbolic representations have been successful in the framework of highly focused generative systems where structure and intention can be controlled. More ambitious representations have attempted to integrate all relevant aspects and entities. Their main intention has been to resolve real design problems as encountered in practice. However, in most cases large or holistic representations reach a size and exhibit a complexity that render them unmanageable for both computers and humans. Problematic maintenance and lack of predictability in the behaviour of such representations, especially following modification and augmentation, severely limit their applicability [21].
An approach to reducing complexity and improving flexibility is based on the premise that spatial design representations are multilevel coordinated structures [22]. Each representation level corresponds to a different abstraction level and possibly to different design aspects. While each level can be used as a self-sufficient representation, the coordination of all levels offers the flexibility and comprehensiveness required for tackling intricate and extensive problems. Coordination of levels is based on the correspondence of elements, i.e. the existence and invariance of the same entities on multiple levels, as in the multiscale representations developed in computer vision [23-25]. Another possibility for coordination is that, rather than defining constraints on objects as fragmentary ad hoc relations, we can aggregate constraints into coordinating devices that operate in conjunction with elements, as well as independently. Such coordinating devices are either local and centred on elements or global and abstract. Examples of local coordinating devices are found in the constraint framework that underlies the positioning of architectural elements relative to each other. Global coordinating devices are often manifested as the grids, gratings and other schemata employed in typologic studies, comparative analyses and generative systems as abstractions of the overall spatial articulation of a design.
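A toy example of coordination by the correspondence of elements might look as follows; the level names, entity identifiers and attributes are illustrative assumptions.

# Minimal sketch of a multilevel representation coordinated by element
# correspondence: the same entity id appears on several abstraction levels
# with level-specific detail.
LEVELS = {
    "urban":     {"block_3": {"kind": "city block"}},
    "building":  {"block_3": {"kind": "building", "storeys": 4},
                  "wall_17": {"kind": "external wall"}},
    "component": {"wall_17": {"kind": "external wall",
                              "layers": ["brick", "cavity", "block"]}},
}

def occurrences(entity_id):
    """Return every level on which the entity exists, for coordination checks."""
    return {level: elements[entity_id]
            for level, elements in LEVELS.items() if entity_id in elements}

if __name__ == "__main__":
    # wall_17 is invariant across the building and component levels, so a
    # change at one level can be propagated to the other.
    print(occurrences("wall_17"))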
The frequent absence of meaningful explicit relationships between elements in architectural representations does not imply a lack of knowledge on the subject. Architectural and building textbooks deal extensively with the relationships between building elements and components. The positioning of one element relative to another derives from formal, functional and constructional decisions and has consequences for the articulation and performance of the building. Textbooks provide guidelines ranging from ergonomically sound distances between chairs and tables to the correct detailing of joints in roof trusses. The frequent and faithful use of textbook examples has resulted in a corpus of architectural stereotypes. Even though stereotypes may lead the designer to repeat known solutions, they help reach levels of reasonable performance in designing and in the built environment. By obeying the underlying rules and reproducing textbook stereotypes, the designer ensures conformity with the norms of building regulations, professional codes and general empirical conclusions.
A prerequisite to the computerization of such stereotypical configurations is thorough analysis of the formal and functional patterns they integrate in a single representation. The representation of such patterns is based on the hypothesis that, once the overlapping constraint networks are untangled, we are able to distinguish between properties intrinsic to an architectural element and wider relationships which focus on specific critical elements. These relationships form local coordination devices that apply to interchangeable elements, for example to different window or door types for a particular opening.
In textbooks, aspects of a recommended configuration are usually presented separately in a proscriptive manner, by means of sub-optimal and unacceptable examples. These are annotated with the relevant relationships and usually ordered from general to specific and from simple to complex. It is assumed that the reader of the textbook makes a selective mental aggregate on the basis of the aspects that apply to the problem in hand. Even though incompatibilities between different aspects and examples are seldom addressed in textbooks, forming an aggregate representation is generally a straightforward hill-climbing process. For example, in designing a door, one starts with basic decisions relating to the door type on the basis of spatial constraints and performance criteria. Depending upon the precise type, the designer proceeds with constraints derived from adjacent elements and activities. In the case of a single inward-opening, left-hinged door of standard width, these constraints determine the position and functional properties of the door, i.e. the distance from elements behind the door, and the swing angle, orientation and direction which facilitate the projected entrance and exit requirements. These can be adjusted by other factors unrelated to the initial decision. For example, the existence of a load-bearing element in the initial place of the door may necessitate translation of the door and hence a reformulation of the initial design problem.
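A minimal sketch of this stepwise, hill-climbing refinement is given below. The wall length, clearances, the position of the load-bearing element and the greedy repair strategy are all illustrative assumptions.

# Minimal sketch: sliding a door along a wall until simple local constraints
# are satisfied. Dimensions are in centimetres and are illustrative.
WALL_LENGTH = 500
DOOR_WIDTH = 90
CORNER_CLEARANCE = 10
COLUMN_SPAN = (200, 240)     # load-bearing element the opening must avoid

def violations(x):
    """Return the constraints violated by a door whose leading edge is at x."""
    issues = []
    if x < CORNER_CLEARANCE or x + DOOR_WIDTH > WALL_LENGTH - CORNER_CLEARANCE:
        issues.append("corner clearance")
    if not (x + DOOR_WIDTH <= COLUMN_SPAN[0] or x >= COLUMN_SPAN[1]):
        issues.append("load-bearing element")
    return issues

def place_door(x, step=5, limit=200):
    """Greedily shift the door until no local constraint is violated."""
    for _ in range(limit):
        if not violations(x):
            return x
        x += step               # simple hill-climbing repair: slide along the wall
    raise ValueError("no admissible position; reformulate the problem")

if __name__ == "__main__":
    print(place_door(180))      # slides past the load-bearing element to 240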
Similarly to textbooks, drafting templates offer useful insights into the stereotypical interpretation of local coordination constraints. In templates, building elements usually appear as holes or slits. Each hole or slit is accompanied by annotations in the form of dents, notches and painted text. These facilitate the geometrical positioning of a form, as well as the geometric interpretation of spatial constraints. The configuration of forms and annotations typically represents a simplified fusion of parameters reduced to typical cases. Even though the superimposition of different patterns makes the template less legible than the more analytical textbooks, the template comes closer to the mental aggregate of the designer.
The manner in which local constraints are centred on elements, the connections between elements and their stereotypical treatment in designing suggest that mechanisms such as frames or objects would be appropriate for the representation of local coordination devices. In a frame-based representation the relationships of e.g. a door with walls and other elements of the immediate context can be described as slots and facets which link the door frame with the frames of walls, spaces and other elements. Such an implementation strategy has obvious advantages for the representation of local coordination devices, for example with respect to the interchangeability of elements by means of abstraction and inheritance. It is quite plausible that a single prototype would suffice for the representation of all kinds of doors. This could facilitate the manipulation of doors in computer-aided design, including the automated substitution of one door type with another if needed due to spatial conflicts or to a change in the designer’s preferences. Another possibility is to distinguish relationships and constraints from elements altogether. By implementing elements and relationships/constraints with separate frames or objects it is possible to resolve a number of limitations in different techniques, e.g. by adding relationships other than IS-A in object systems and generalization/specialization to the links in a semantic net [26, 27].
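One way such a frame-based strategy might be sketched is shown below: a single door prototype whose slots link it to the frames of adjacent elements, with subtypes inheriting from the prototype so that one door type can be substituted for another without disturbing the slots. All names are illustrative assumptions.

# Minimal sketch: a door prototype whose slots hold relations to the frames
# of adjacent elements; subtypes specialize the prototype and remain
# interchangeable through the shared slots.
class Frame:
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)        # slot name -> value or another Frame

class Door(Frame):
    """Prototype for all door kinds; relations to the context live in slots."""
    def __init__(self, name, hosted_by, opens_into, width=0.9):
        super().__init__(name, hosted_by=hosted_by, opens_into=opens_into, width=width)

class SlidingDoor(Door):
    swing_clearance = 0.0               # specialization: no swing needed

class HingedDoor(Door):
    swing_clearance = 0.9

def substitute(door, new_kind):
    """Swap the door type while preserving its contextual slots."""
    return new_kind(door.name, door.slots["hosted_by"], door.slots["opens_into"],
                    door.slots["width"])

if __name__ == "__main__":
    wall = Frame("wall_17")
    hall = Frame("hall")
    d1 = HingedDoor("door_3", hosted_by=wall, opens_into=hall)
    d2 = substitute(d1, SlidingDoor)    # e.g. resolving a swing conflict
    print(type(d2).__name__, d2.slots["hosted_by"].name)   # SlidingDoor wall_17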
The application of symbolic representation mechanisms also requires a visual component by which properties and relationships are expressed not only in spatial terms but also in terms of fuzziness, plasticity and interaction. Fuzzy modelling techniques provide this component without superimposing unnecessary simplifications and additional structures [28]. Relations to adjacent elements and local expressions of global coordinating devices are correlated to design decisions and actions without having to resort to constraining techniques such as geometric parametrization. Local relations to global coordinating devices are an issue of particular importance, as local devices can interpret global ones within their scope.
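As a rough indication, a fuzzy spatial relation of the kind hinted at here can be expressed as a degree of membership rather than a crisp constraint; the thresholds below are illustrative assumptions and are not taken from the cited fuzzified geometric model [28].

# Minimal sketch: a fuzzy "near" relation between two elements, returning a
# degree of membership in [0, 1] instead of a crisp parametric constraint.
def near(distance_m, fully_near=0.5, fully_far=3.0):
    """1.0 up to fully_near, falling linearly to 0.0 at fully_far (metres)."""
    if distance_m <= fully_near:
        return 1.0
    if distance_m >= fully_far:
        return 0.0
    return (fully_far - distance_m) / (fully_far - fully_near)

if __name__ == "__main__":
    print(near(0.4), near(1.75), near(4.0))   # 1.0 0.5 0.0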
Local coordinating devices have three important consequences for interactive and automated design generation. The first is the addition of intermediate levels between atomic design elements and global design ideas, principles or structures. These levels facilitate a number of design activities, including detailed analyses, conflict identification and resolution, and abstraction or specification of design decisions. The second consequence is that local coordinating devices can be instrumental for focusing and integrating interaction and generation. By propagating the consequences of an automatically taken step or an interactive change in the design, they identify the scope of each decision and indicate subsequent actions and actors. The third and probably most far-reaching consequence is that they introduce the element of autonomy in the behaviour of a design representation (or generative system). Configurations of elements and local constraints become autonomous entities with a well-defined scope and orientation. As such they contribute to the partial automation of design activities for the part that is integrated in their autonomous behaviour, as well as with respect to relations between local devices. Being transparent and controllable, they provide assistance for several design aspects at once and at practically all abstraction levels without burdening the designer with trivial problems. At the same time, they do not reduce the responsibility or impede the creativity of the designer. On the contrary, they form an intelligent, flexible background configuration that affords multilevel interaction and responds with specified, knowledgeable feedback. Such a configuration provides the means required for handling the complexity resulting from the interpretation of the built environment as a dynamic configuration of co-operating yet autonomous parts that have to be considered independently and in conjunction with each other.
References
1. Burry, M. Handcraft and machine metaphysics. in Computers in design studio teaching. H. Neuckermans and B. Geebelen, Editors. 1999, KU Leuven: Leuven.
2. Van Bruggen, C. Frank O. Gehry: Guggenheim Museum Bilbao. 1997, New York: The Solomon R. Guggenheim Foundation.
3. Eastman, C.M. ed. Spatial synthesis in computer-aided building design. 1975, Applied Science: London.
4. Steadman, J.P. Graph-theoretic representation of architectural arrangement. in The architecture of form. L.J. March, Editor. 1976, Cambridge University Press: Cambridge.
5. Steadman, J.P. Architectural morphology. 1983, London: Pion.
6. Stiny, G. Pictorial and formal aspects of shape and shape grammars. 1975, Basel: Birkhäuser.
7. Stiny, G. Introduction to shape and shape grammars. Environment and planning B, 1980. 7: p. 343-351.
8. Stiny, G. and W.J. Mitchell, The Palladian grammar. Environment and Planning B, 1978. 5: p. 5–18.
9. Hersey, G. and R. Freedman, Possible Palladian villas. 1992, Cambridge, Massachusetts: MIT Press.
10. Maver, T. Software tools for the technical evaluation of design alternatives. in CAAD Futures '87. Proceedings of the Second International Conference on Computer-aided Architectural Design Futures. T. Maver and H. Wagter, Editors. 1987, Elsevier: Amsterdam.
11. Mitossi, V. and A. Koutamanis, Parametric design of stairs. in 3rd Design and Decision Support Systems in Architecture and Urban Planning Conference. Part One: Architecture Proceedings. 1996: Eindhoven.
12. Koutamanis, A. Multilevel analysis of fire escape routes in a virtual environment. in The global design studio. M. Tan and R. Teh, Editors. 1995, Centre for Advanced Studies in Architecture, National University of Singapore: Singapore.
13. Koutamanis, A. and V. Mitossi, Simulation for analysis: Requirements from architectural design. in Full-scale modeling in the age of virtual reality. B. Martens, Editor. 1996, OKK: Vienna.
14. Koutamanis, A. Digital architectural visualization. Automation in Construction, 2000. 9(4): p. 347-360.
15. Den Hartog, J.P., A. Koutamanis, and P.G. Luscuere, Possibilities and limitations of CFD simulation for indoor climate analysis. in Design and decision support systems in architecture. Proceedings of the 5th International Conference. 2000, Eindhoven University of Technology: Eindhoven.
16. Riesbeck, C.K. and R.C. Schank, Inside case-based reasoning. 1989, Hillsdale, N.J.: Lawrence Erlbaum Associates.
17. Leake, D.B. ed. Case-based reasoning. Experience, lessons and future directions. 1996, AAAI Press: Menlo Park, California.
18. Koutamanis, A. Representations from generative systems. in Artificial Intelligence in Design '00. J.S. Gero, Editor. 2000, Kluwer: Dordrecht.
19. Brachman, R.J. and H.J. Levesque, Introduction. in Readings in knowledge representation. R.J. Brachman and H.J. Levesque, Editors. 1985, Kaufmann: Los Altos.
20. Brachman, R.J. On the epistemological status of semantic networks. in Readings in knowledge representation. R.J. Brachman and H.J. Levesque, Editors. 1985, Kaufmann: Los Altos.
21. Gauchel, J. et al. Building modeling based on concepts of autonomy. in Artificial Intelligence in Design '92, J.S. Gero, Editor. 1992, Kluwer: Dordrecht.
22. Koutamanis, A. Multilevel representation of architectural designs. in Design and the net. R. Coyne, et al. Editors. 1997, Europia Productions: Paris.
23. Marr, D. Computer vision. 1982, San Francisco: W.H. Freeman.
24. Rosenfeld, A. ed. Multiresolution image processing and analysis. 1984, Springer: Berlin.
25. Rosenfeld, A. Pyramid algorithms for efficient vision. in Vision: coding and efficiency. C. Blakemore, Editor. 1990, Cambridge University Press: Cambridge.
26. MacKellar, B. and J. Peckham, Representing objects in SORAC. in Artificial Intelligence in Design '92. J.S. Gero, Editor. 1992, Kluwer: Dordrecht.
27. Peckham, J., B. MacKellar, and M. Doherty, Data model for extensible support of explicit relationships in design databases. Journal of Very Large Data Bases, 1995. 4(2): p. 157-191.
28. Horváth, I. et al. A Fuzzified geometric model to support conceptual design: mathematical, methodological fundamentals. in International Conference on Engineering Design, ICED 99. 1999, ICED: Munich.