Deep architectures and exterior communication in generative art
TEAnO – T-Link s.r.l.
Marco M. Vernillo
Human beings formulate their thoughts through their own language. To use a sentence by Ezra Pound: "The thought hinges on word definition."
Software beings formulate their thoughts through data structures. Not through a specific expressive means, but directly through concepts and relations.
Human beings formulate their thoughts in a context, which does not require any further translation.
If software beings want to be appreciated by human beings, they are forced to translate their thoughts into one of the languages human beings are able to understand. By contrast, when a software being communicates with another software being, this unnatural translation is not justified: communication takes place directly through data structures, made uniform by suitable communication protocols.
The Generative Art perspective gives software beings the opportunity to create works according to their own nature. But if the result of such a creation must be expressed in a language human beings are able to comprehend, then this result is a sort of circus performance, not a free thought. Let's give software beings the dignity they deserve and allow them to express themselves according to their own nature: by data structures.

This work studies in depth the opportunity to separate the communication of the software 'thought' from its translation into a human language. The recent introduction of XML allows the definition of formal languages oriented to the representation of data structures. Intrinsically both data and program, XML allows, through successive executions and validations, the kind of descriptions typical of contextual grammars, making high complexity manageable.

The translation from a data structure into a human language can take place later on, and can be oriented to different alternative kinds of expression: lexical (according to national languages), graphical, musical, plastic.
The direct expression of data structures promises further communication opportunities also for human beings. One of these is the definition of a non-national language, as free as possible from lexical ambiguities, extremely precise.
Another opportunity concerns the possibility to express concepts usually hidden by their own representation. A Roman bridge, the adagio from Bartók's "Music for Strings, Percussion and Celesta" and Kafka's short story "Up in the Gallery" have something in common; a work of Generative Art, first expressed in terms of structure and then translated into an architectural, musical, or literary work, can make this commonality explicit.
Since the advent of the Internet plenty of information is moving to the web. Good, bad, interesting, wrong, copyrighted, free, false, obsolete information ... all together.
Since its beginning, the web has needed metadata: data about data. Data alone is not enough: we need to describe, in some way, its content, its authorship, its age, its authority, etc.
We have seen (and see, and will see) several attempts to organize all this information: by directories, by keywords, ... And we see that this is a huge work that no 'central metadata repository' will be able to carry out.
Just a little example, to fix the idea: we have thousands of books, articles, web pages, etc., collected in libraries, archives, stores, web sites; we want to obtain a list of all the books written by a certain author.
The old way to obtain this is: build a huge database containing all the titles with their associated authors, so we can ask it for a search.
An alternative way (and we are speaking of this one) is the following:
· every 'resource' of type 'book' must have associated with it (*in some way*) a property 'author', with a unique value
· the resource, the property name and the value must all be unique, immune from homonymy
· the triple resource / property / value is stored on any computer, accessible via the web, and made 'known' to search tools - let's call such a store a 'metadata repository'
· the search tool we use consults a 'selected' set of metadata repositories (ideally: all), and presents the result.
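The mechanics of this alternative can be sketched in a few lines of code. Everything here is invented for illustration - toy URNs, Python sets standing in for web-hosted repositories - but it shows the idea: the search tool scans a selected set of triple stores for matching property/value pairs and unions the results.

```python
def search(repositories, prop, value):
    """Consult a selected set of metadata repositories and collect every
    resource carrying the given property/value pair."""
    results = set()
    for repo in repositories:
        for resource, p, v in repo:
            if p == prop and v == value:
                results.add(resource)
    return results

# Two independent repositories, each a set of (resource, property, value)
# triples that could live on any computer on the web.
repo_a = {("urn:book:Ficciones", "author", "urn:person:Borges")}
repo_b = {("urn:book:ElAleph", "author", "urn:person:Borges"),
          ("urn:book:Hamlet", "author", "urn:person:Shakespeare")}

books = search([repo_a, repo_b], "author", "urn:person:Borges")
```

No repository needs to know about the others: the union happens only at query time, on the searcher's side.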
The first step toward this result is to 'describe' the data. Such a description must be an entity separate from the data.
Sometimes it can be embedded in the data itself: this is the case of a web page carrying some hidden metadata, usually in the <HEAD> section.
Often, however, the real power of metadata techniques lies in their ability to describe something without modifying, or even accessing, it. This is useful if we want to describe web pages written by others, but moreover it can be used to describe things not living on the web, like 'real' things, or concepts.
"A Uniform Resource Identifier (URI) is a compact string of characters for identifying an abstract or physical resource."" .
"The term 'Uniform Resource Locator' (URL) refers to the subset of URI that identify resources via a representation of their primary access mechanism (e.g., their network 'location')".
When a resource is not stored on a computer we cannot simply dereference its URL to obtain it. But we can nevertheless identify it with a URI, and hence we can describe it through that URI. In our example, this is the case for the author of our book (a *real* person), and for the book itself (a rather abstract object, having several physical instances).
So, the identity of the author and his book can be made unique using URIs instead of plain names. Something like
'http://uri.bibliotecadebabel.org/Ficciones-1' and 'http://www.familiaborges.ar/uris/JorgeLuisBorges'.
Also the property 'author' needs to be uniquely identified. A possible solution is the one proposed by the Dublin Core Element Set - a standard endorsed by a committee of bibliographic experts - which defines, among others, the property 'Creator'.
Hence, a statement expressing the authorship of a book could be something like:

resource: 'http://uri.bibliotecadebabel.org/Ficciones-1'
property: 'dc:Creator'
value: 'http://www.familiaborges.ar/uris/JorgeLuisBorges'
This association may be written embedded in the resource, or separate from it. The location on the web of this association must be known, and reachable.
Well. It can work. Now we may want to continue, providing some information about the author, like his birth date, birth place, and so on.
So, publishing metadata means: defining a dictionary for a certain domain of knowledge; ensuring this dictionary is accepted and used by a certain community; writing a document using such a dictionary; publishing it on the web; and making its existence known to some "metadata directory".
As the search runs on the web, the syntactic basis on which such a dictionary is built must be web-friendly. The language of the web has been HTML since the beginning, so one of the first attempts to store metadata was based on HTML itself, mainly through the <META> tag. HTML extensibility was tried too, as in the SHOE project.
Then came XML, the eXtensible Markup Language.
Maybe not the best language we have invented since Assembler, but born at the right time, with the right face, in the right place. It has an authoritative father - SGML - and a very famous brother - HTML. It is designed for the web, it is easy, and it feels like something already known. So it immediately became popular.
XML defines the basics for a language, but not too much. It defines the rules for the character set, the basic structure of tags, and some rules for extensibility. It defines no dictionaries, no domain-specific syntax, no semantics at all. It is not an Esperanto - that would be likely to fail - but just an alphabet plus a few orthographic rules.
On top of this, XML gives us the means to define dictionaries through XML documents called 'schemas'. Sentences, periods, documents, web sites will be built on such dictionaries. The schemas themselves will be built following a specific schema for schemas.
Documents will contain data of any type, structured according to one or more schemas. Some data will be strongly structured - namely data traveling from one computer program to another - while some will be less structured, like text documents: HTML itself has an XML counterpart - XHTML - which adds a bit of severity to the syntax.
Wide acceptance of the chosen schemas is the key to the diffusion of XML documents. So we now see the battle raging for the fatherhood of various domain-specific schemas. Even the schema for schemas is a golden fleece for the major software companies. Organizations are proposing - or trying to impose - their schemas to the world for every matter of business: order tracking, book description, financial information, computer configuration, software protocols, etc.
The non-business world is not just observing. A lot of schemas have already been defined to represent specific domains in the arts, literature, health, biology, chemistry, physics, mathematics, religion, ...
As so much information is being converted according to XML dictionaries, XML appears a good choice for metadata too. So, in place of existing proposals derived from LISP or other AI languages, some XML-based languages were introduced. The emerging one seems to be RDF (Resource Description Framework), led by the World Wide Web Consortium: it defines first of all a language-independent model, but also suggests XML as a first-choice syntax.
The main advantage of using XML is, obviously, its diffusion and wide acceptance on the web. This is also the first goal of standard metadata dictionaries, so why not use it?
As happened with HTML, metadata can be embedded in the XML documents it describes, when possible. More often, however, it will be written in separate documents that just reference their subjects. The latter is the only, or best, solution for documents written in formats other than HTML or XML, like plain text, word-processor documents, JPEG files, MIDI files, etc. This is also the case for non-web objects, like persons, printed books, geographical places, and so on.
XML also offers technical solutions for typical metadata problems. One of these is the adoption of 'namespaces' as a means to produce unique identifiers.
In the following construct, "rdf:" and "dc:" are substituted with the full paths stated with "xmlns:...", giving unicity to each tag. The URI "http://www.generativeart.com/ga2000/works/art23" identifies the document we want to describe:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://www.generativeart.com/ga2000/works/art23">
    <dc:Title>Deep architectures and exterior communication in generative art</dc:Title>
  </rdf:Description>
</rdf:RDF>
The previous example about "Ficciones" could hence be written:
<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones">
  <dc:Creator rdf:resource="www.familiaborges.ar/uri/JorgeLuisBorges" />
</rdf:Description>

Here the rdf:resource attribute is used as a pointer to another URI.
This example, using only dc:Creator, is just a spoonful of water taken from the sea of information. Everyone can imagine their preferred iceberg under this tip.
However, do not forget metadata about metadata! Its importance becomes clear when, besides 'objective' information like dc:Creator, I start to express 'opinions' about a subject. In such a case it is important that the reader be able to know the author of the metadata, in order to grant it trust or not. The following is an example, based on the invented dictionaries 'foo' and 'trust'; we assume it is written in a document named "www.somewhere.org/doc-1":
<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones">
  <foo:Quality rdf:ID="opinion-1" foo:Range="1-5" rate="5" />
</rdf:Description>
In another document, you may find:
<rdf:Description rdf:about="www.somewhere.org/doc-1#opinion-1">
  <trust:believeIn trust:perc="100" />
</rdf:Description>

This adds a comment about the first statement, asserting that in my opinion you can trust it 100%.
Signing with an electronic signature, instead of the simple plain names of the example, we can give every statement the right level of authority. And this is a possible path to a 'web of trust', in which everyone can express an opinion, but where the users of the web can judge its acceptability.
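As a purely illustrative sketch of what such a signature could look like technically - here a shared-key HMAC from the Python standard library stands in for a real public-key signature, and both the key and the serialized statement are invented:

```python
import hmac
import hashlib

def sign(statement, key):
    """Sign a serialized metadata statement with a secret key."""
    return hmac.new(key, statement.encode("utf-8"), hashlib.sha256).hexdigest()

def verify(statement, signature, key):
    """Check that the statement really comes from the key's holder
    and has not been altered."""
    return hmac.compare_digest(sign(statement, key), signature)

key = b"reviewer-secret-key"  # invented key material
stmt = 'www.somewhere.org/doc-1#opinion-1 trust:believeIn trust:perc="100"'
sig = sign(stmt, key)
```

A reader who verifies the signature knows both the author of the opinion and that the statement was not tampered with in transit; a real web of trust would of course use asymmetric keys, so that verification does not require sharing the secret.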
Honestly, declaring the author of a book is not a great conquest. But things get a bit more interesting if we imagine some more detailed descriptions.
For example, let's take a musical composition. Every piece of music can be given a certain number of attributes, depending on our actual interest. It is usually 'a part' of another piece of music, and it can also be split into smaller pieces. It has a 'tone', a tonal 'mode', a 'chord', a 'lead note', a 'bass note' ... It may be a 'subject' or a 'counter-subject', the 'A' or 'B' section of an 'AABA' song ... It can express 'fear', 'joy', 'passion' ... It can be split into several 'vertical' parts, instruments ... There are physical details, like amplitude and frequency. And so on.
Some information is mechanical, like that carried by the MIDI format; other information is editorial (Enigma, NIFF, ...), other acoustic (MP3, WAV, ...).
The possibilities for analysis of a musical piece, even a short one, are many. Some attributes are objective - say, 'lead note = C' - while others are a matter of opinion, e.g. 'warm'. Note that, in the field of subjective feeling, the author of the opinion is an equally important piece of information.
In the 'natural' world - I mean the one where people talk using 'natural' language - you may find reviews of musical pieces that try to describe them, where usually you can read: some information about the piece and its author, some information about the taste of the review's author, and a piece of prose that seldom explains the music, while often meriting a literary analysis of its own.
In other words: we lack a formal language, with defined dictionaries and semantics, to describe and comment on musical pieces. Using such a language, instead of a 'natural' one, we can hope to communicate precise information about melodic intervals, harmonic structure, forms, feelings, and so on.
Ok, the result will probably be a bit tricky, and probably not good enough for a music magazine. But it will be loadable by a software tool able to visualize it in graphical and acoustic form, to mix musical signs with diagrams, histograms and other presentation material plus, why not, prose text.
We can hence imagine a network of musical analyses, spread over servers in every part of the world, written - and signed - by musicologists, musicians, other experts, and common people. Software tools will be able to search, find, select and merge these analyses, and to visualize them in a friendly language. Picking various pieces from the web, it will be possible to hear an MP3 recording of a Bach fugue while looking at a running textual/graphical explanation of its formal structure and its harmonic progressions, the quoted comment on bars 17-20 from a Glenn Gould review, mixed with my personal comments, and so on.
If a Bach fugue can be described at such a deep and broad level, the same should obviously be possible for a musical composition created by a computer program.
With an important difference: nobody will probably be interested in spending time on such a work. The natural question will be: why should I describe a work that is already completely described by the algorithm that generated it?
And in fact, the answer should be: a possible description of that work is exactly the input of the program that generated it.
If such a musical composition can be described through a language, then it should be possible to create it using the same language.
Let's imagine a 'performer/interpreter/composer' program: it takes as input a 'more or less' detailed description of its execution, and completes the missing information with its own creativity. It reacts like a jazzman: if the theme is written note by note he will simply play it; if the melody is just drafted he will add some personal licks; if only the chords are given he will create a melody; if he reads only information about the structure and the mood, he will create everything he feels to be coherent.
Given a complete description of a Bach fugue, it should play it note by note, following the instructions about dynamics, portamento, etc. Any information about structural form, harmony or mood is not so important to the mere execution: the performer's needs lie at the 'leaf' level of the description tree.
But if we give it just a trace, made only of the melody's metric, some key notes and the harmonic structure, we leave it the freedom to complete its interpretation 'randomly'.
If we - finally - leave him (it?) just the structural information and the changes of mood, he will be required to 'compose' a fugue, following some composition rules.
Let's move to the extreme: we ask him ... "Please, play me something sad". An experienced jazzman will recall everything he has played and heard that he felt to be 'sad', and will play on those examples. Our 'experienced' program will search (on the Internet, obviously), find and merge all the descriptions somebody (possibly somebody 'trusted') wrote about something 'sad', and will play.
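The behaviour of such a 'performer/interpreter/composer' can be caricatured in a few lines: every field present in the input description is respected, every missing field is improvised. The field names and the pool of 'creative' defaults are invented for the sketch.

```python
import random

def perform(description, rng=None):
    """Return a complete description: fields given in the input are kept
    as written; missing fields are 'improvised' with a random pick from
    the program's own repertoire."""
    rng = rng or random.Random(0)
    repertoire = {
        "melody": [["C4", "E4", "G4", "A4"], ["A4", "G4", "E4", "C4"]],
        "chords": [["Cmaj7", "Am7", "Dm7", "G7"]],
        "mood":   ["sad", "joyful", "calm"],
    }
    complete = dict(description)
    for field, choices in repertoire.items():
        if field not in complete:
            complete[field] = rng.choice(choices)
    return complete

# Only the chords are given; melody and mood are improvised.
result = perform({"chords": ["Dm7", "G7", "Cmaj7"]})
```

The real program would, of course, constrain its choices by the fields that are given (a melody must fit the stated chords), which is where the composition rules come in.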
Even more: "Please, SING me something sad".
There should be, intuitively, something in common between a sad melody and its lyrics: hence the description of the lyrics should express it. On one side, there is structural information about a poem that allows us to couple it with a melody: the metric, the form (e.g. AABA), the presence of vowels or other sounds, etc. On the other side, there is information about possible interpretations, like rhetorical figures and moods, e.g.: antithesis, climax, exclamation, sadly, ironic, etc. - or, more simply: crescendo, rallentando, ...
Note that while some information is intrinsically musical, other information is more abstract, or taken from other domains, like psychology. Together, however, all of it can be used as a guide for both the music and the lyrics.
If we believe all this is possible, then it is not difficult to apply the same method to more complex domains. Following the path indicated by the couple music + words, we can continue extending the similarities to a theatre plot built on the same song, then to its choreography, its light changes, a movie built on the plot ...
While defining the terms of a dictionary, it is important to keep in mind the two complementary directions: the analytical-descriptive and the synthetical-generative. When an existing work is described by, and for, philologists, such a description should also be reusable as input to a generative program.
Studies on this idea already exist. Among others, TEAnO has already explored the inner structure of artistic compositions in previous works. The present work suggests and supports the use of a common, standard representation for their *exposed* structure.
The point is: we need a huge amount of experience to create something. Such experience must be collected and described - by human beings - in a common, standard language, in order to achieve a critical mass.
While this collection of descriptions will be an invaluable device for a better understanding of human creation, it will also be possible to feed it to software tools, toward the automatic creation of new works.
It is obviously impossible for a single author of descriptions to be exhaustive about a subject, even a limited one.
The only method to collect a critical mass of information is to allow (controlled) cooperation. Experts in different topics can then merge their points of view: opinions can differ while both remain valuable. Moreover, where an expert can be informative at a deep and authoritative level, a common person, even an uninformed one, can add a simple, spontaneous, but important detail.
One of the key methods to achieve this result is the ability to deal with atomic pieces of information, handled independently.
A little example: if I remember the birth date of the author of a book, but not his name, I should be allowed to publish the incomplete information, like here (in a semi-invented syntax):
<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones"
    xxx:birthDate="1899" />
Another statement, coming from another side of the web, could be:
<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones"
    xxx:Surname="Borges"
    xxx:Name="Jorge Luis" />
The two will be merged by the parser/presentation tool, composing a more complete description.
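A sketch of this merge, with the two fragments above rendered as plain subject/properties pairs (the representation is invented; a real tool would parse the RDF):

```python
def merge(*fragments):
    """Compose fragments published independently on the web: properties
    stated about the same subject URI accumulate into one record."""
    merged = {}
    for subject, properties in fragments:
        merged.setdefault(subject, {}).update(properties)
    return merged

# The two incomplete statements from the text, as (subject, properties) pairs.
frag1 = ("uri.bibliotecadebabel.org/Borges/Ficciones",
         {"xxx:birthDate": "1899"})
frag2 = ("uri.bibliotecadebabel.org/Borges/Ficciones",
         {"xxx:Surname": "Borges", "xxx:Name": "Jorge Luis"})

record = merge(frag1, frag2)
```

Because each fragment is atomic, neither author needed to know the other existed; the shared URI alone makes the composition possible.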
Another important tool is the availability of many small dictionaries covering specific topics. So, in this example, the "dc:" dictionary covers common bibliographic attributes, while "xxx:" deals with common registration data for a person.
This atomicity helps in the maintenance and versioning of both dictionaries and documents, and eases their diffusion and acceptance among the experts in those topics.
It is also very important to enable the reuse of the same dictionary in different disciplines. So, for example, the basic - language-level - dictionaries, like "xml:" and "rdf:", are of general use, precisely because they do not touch specific domains. The now widely accepted "dc:" dictionary is made of just 15 properties.
On the contrary, a huge dictionary trying to cover encyclopedic knowledge would be hard to learn, maintain and upgrade, and would not highlight the affinity between terms (e.g. BirthDate, BirthPlace).
Staying with musical examples, we can note that a musical composition obviously has a temporal dimension, on which musical events happen. The same temporal dimension, however, is shared with possible lyrics, a possible dance choreography, changes in scenography. It is the same 'time' axis of a theatre or movie plot. This tells us that a 'temporal dimension' dictionary, dealing only with timing details - neither with musical events, nor with poetic metric, nor with anything else - would be of great help, because it would provide a basic language for music, theatre, dance, etc., averting the risk of duplicated terms.
Again: if we want to describe a psychological state - let's say the 'sadness' suggested by a musical passage - we will want to take it from a ready-made dictionary of 'psychological states'. And we will probably prefer a dictionary written by a commission of psychologists, not by musicians.
This 'knowledge partitioning' goal, however, will not be so easy to achieve: too many parallel efforts will compete, generating obvious redundancies and incompatibilities. So our tools will have to support us against equivalences of terms, ambiguities, and so on. This will lead to the definition of dictionaries allowing statements like:
"xxx:birthPlace" -> "eq:hasSameMeaningOf" -> "yyy:placeOfBirth"
The same problem also affects the identity of the subjects. The possibility to define URIs that uniquely identify *anything* is a widely accepted idea. Obviously the same *thing* will be identified by more than one URI, more or less authoritative (e.g. "http://www.familiaborges.ar/uris/JorgeLuisBorges" and "http://uri.uriinternational.org/VIPs/BorgesJorgeLuis").
Dictionaries supporting the declaration of such equivalences will be needed, hence they will be developed.
Also, we will need tools to relate information at different levels of detail - e.g. something helping us to catch the equivalence between statements like:

<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones">
  <dc:Creator rdf:resource="www.familiaborges.ar/uri/JorgeLuisBorges" />
</rdf:Description>

and:

<rdf:Description rdf:about="uri.bibliotecadebabel.org/Borges/Ficciones">
  <dc:Creator rdf:resource="#jlb" />
</rdf:Description>
<rdf:Description rdf:ID="jlb"
    xxx:Name="Jorge Luis"
    xxx:Surname="Borges"
    xxx:BirthDate="1899"
    xxx:BirthPlace="Buenos Aires" />
Note that the choice of a common syntax is not the point of maximum importance. The real pivot is agreement on the data model behind that syntax. The advent of XML introduced a basis for the definition of several languages very similar in terms of lexical rules and overall structure. Some proposed standards, like RDF, now try to define a common basis for the underlying model.
Moving the focus to 'standard syntax' and 'standard dictionaries', instead of something like a 'knowledge base' or a 'central database', leads to an important path of evolution for the web, and hence for the availability of knowledge: everybody, not just knowledge base administrators, will be able to contribute their little help to the whole.
Every document written following an accepted formal language, referring to widely known dictionaries and URIs, and correctly exposed to the Internet, automatically becomes part of the general knowledge. Search engines and processors will do the rest.
Working for yourself and working for the general knowledge - for the community - will coincide, the only difference lying in the visibility (local computer vs. web) of the information you write down.
Signing every piece of information and supporting the rules of the emerging 'web of trust' will help readers discriminate between 'trusted' and 'untrusted' sources. It will also give authors an acknowledgment of their work.
Everybody working on artistic works, on either the analysis or the creation side, should keep an eye on the evolution of these standards. Every specialist should imagine himself working on the definition of topic-specific dictionaries. Artists and analysts should start to desire a common language, and should start to meet in order to define it.
The computing community is working heavily on these matters. But the specific knowledge of a musicologist or an architect cannot simply be replaced by a tool: they should instead move their language toward standardization, supported in this work by I.T. experts.
Technicians can support the evolution of artistic communication by developing tools for the analysis, presentation and execution of artefacts, based on domain-specific dictionaries. Even the automatic creation of artistic works will be greatly simplified and formalized by this effort.
TEAnO, among other organizations, has worked for years in this direction, developing tools and methods for the creation and analysis of artefacts, always in close cooperation with artists.
The final race: formal languages against natural languages.
We obviously cannot renounce fine prose, poetry, expression and invention, nor national languages, dialects, accents and jargons: these are distinctive qualities of human communication.
To a human listener, the explanation of a concept in natural language carries (at least) two kinds of information: the concept itself, and the qualities of the person exposing it. While we are too often forced to use both if we want to understand the concept, sometimes even this is not enough. Even talking in the same language, with the same accent, having studied at the same college, we are often required - in order to understand - to digest information from the color of the voice, the expression of the face, the pauses, etc. Often a missing comma changes the meaning of a sentence. Often we must read the same sentence again, or ask the speaker to repeat it in other words. There are cases in which the only way to make an idea clear is simply to stop talking and sketch a picture.
In this article we suggest that it would be fine, useful or - simply - necessary to be able to separate the information we want to communicate from the way we communicate it.
Plenty of philosophical concepts - possibly simple, or even obvious - are still hidden from the masses just because of the way they are exposed; on the contrary, a lot of beliefs are widely accepted despite their absurdity, because their explanation made them seem simple and easy to understand.
We are not here to argue that a formal language like RDF will become 'popular'. We simply note that a sentence like "a + a = 2a" is simpler and less ambiguous than "adding a unitary quantity to another having the same weight will certainly bring a quantity double of the original one".
When a concept can be expressed in a formal language, it should be. We are then free to translate it into natural language, or to browse it with a presentation tool.
The goal of this article is not to demonstrate the pertinence of RDF to one specific problem. Instead, we are interested in supporting the applicability of RDF - or similar metadata languages - to *any* problem, especially where domain knowledge already exists.
So we chose a problem that has already been studied from several points of view, and by means of different techniques: Propp's analysis of traditional Russian tales. Beyond Propp's work, mainly devoted to analysis, we know of some attempts to transform his observations into a set of rules for the generation of new tales. More than one algorithm has been developed to reach this goal, but they are now hidden behind an opaque glass made of a (sometimes obsolete) programming language and an internal data structure.
We try here to re-expose a possible structure for this problem, using RDF to express the basic concepts, the model, the input and the output. This gives us the hope of leaving, as an inheritance, a set of readable information, incrementable and useful for future programs, for human beings and, why not, for those old programs that - if re-engineered - were already able to generate tales.
Following Propp, a typical tale can be split into sections, each fulfilling a specific Function. The list of possible Functions is given. Each section can be reduced to a single Action carried out by certain interpreters. For each Function it is possible to configure a set of possible Actions and possible interpreters, respecting some basic rules. In previous works it was verified that the rules proposed by Propp are not enough for the generation of a tale, so they have been enriched with additional constraints.
In our attempt, the domain has been described with a dictionary (whose namespace will be locally called "pp:") in which we define both the basic concepts and the rules.
The basic concepts are:
· the Tale, as a whole
· the Function, e.g. FirstDonorFunction, HeroReaction, Fight, Rescue
· the Role, e.g. Antagonist, Victim, Hero
· the Character, e.g. Ivan, BabaJaga, Princess, King
· the Action, e.g. ToFly, ToKidnap, ToKill
· the Transition, a section of the tale, i.e. the application of an Action
The rules are expressed through properties of the basic concepts, like:
· the pp:Function 'AllowReaction' can be interpreted by a pp:subject having the pp:possibleSubjectRole = Hero, or FalseHero
· the pp:Character 'Dragon' has pp:possibleSubjectRole = Antagonist
· when it is an Antagonist, the Dragon has pp:possibleAction = ToKidnap, ToKill, ToDevour, ToFly
· a pp:Action can be applied when it is pp:impliedBy the presence in the tale of a pp:Transition having certain properties
and so on..
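To make the flavour of these rules concrete, here is a toy sketch in which a few of them are written as plain data and used to instantiate a single Transition. The concept names come from the text above; the rule fragment itself is invented for illustration, not Propp's inventory.

```python
import random

# A toy fragment of the pp: rules, written as plain data.
POSSIBLE_SUBJECT_ROLE = {"Dragon": {"Antagonist"}, "Ivan": {"Hero"}}
POSSIBLE_ACTION = {
    ("Dragon", "Antagonist"): ["ToKidnap", "ToKill", "ToDevour", "ToFly"],
}
# Roles allowed to interpret each Function.
FUNCTION_ROLES = {"Fight": {"Hero", "Antagonist"}}

def pick_transition(function, rng=None):
    """Instantiate one Transition: choose a Character whose possible
    Role fits the Function, then one of its possible Actions."""
    rng = rng or random.Random(0)
    for character, roles in POSSIBLE_SUBJECT_ROLE.items():
        role = next(iter(roles & FUNCTION_ROLES[function]), None)
        if role and (character, role) in POSSIBLE_ACTION:
            return {"function": function, "subject": character, "role": role,
                    "action": rng.choice(POSSIBLE_ACTION[(character, role)])}
    return None  # no character can interpret this Function

transition = pick_transition("Fight")
```

The point of the RDF formulation is precisely that these tables need not be hard-coded: they would be read from the dictionary, and anybody could extend them from a separate document.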
A version of the dictionary is reported in appendix A.
Then a certain number of existing tales, taken from tradition, can be analyzed (described) using this dictionary. This will lead, on one side, to a verification of Propp's rules and, on the other, to a possible further enrichment of the rule-set, based - for example - on statistical observation.
An example of a similar description is reported in appendix B.
The next step is the generation of a new description of a tale starting from the same dictionary. The input for the generation will be a simple statement: <pp:Tale rdf:ID="TheTale"/>.
A little example of tale generation is reported in appendix C.
There are some important aspects we want to underline here:
· we are not really going to generate a tale, but just its description. The tale we describe is assumed existing, due exactly to the fact that we are describing it.
· we didn't wrote a line of code in order to obtain the graphical representation of the tale, but we used instead a standard tool, availabe over the Internet. This is only one of possible result we can reach simply because we are speaking in a common language.
· for the sake of simplicity we kept together both original Propp's rules and concepts and subsequent refinements. It should be preferrable - from a theoretic point of view - to separate the Propp's view from subsequent refinements, and to separate basic concepts from rules. This would allow the generation of tales either with only the basic rules, or adding the refinements.
· anybody can refine our dictionary just by writing some statements in a different document and making it accessible to the tale generator.
While this is only a little example written to show the applicability of RDF, it offers several suggestions that point to possible future developments.
In previous works, the generation algorithm worked 'level-by-level', generating subsequent refinements in a cycle and checking at each step against the given rules in order to accept or discard the intermediate results. Let's call this technique 'breadth first'.
While this approach is still applicable, the model proposed here also allows a different technique, let's call it 'depth first': each pp:Function is expanded in order, down to the maximum possible level of detail, then the following function is chosen and expanded, constrained also by the already generated pp:Transition(s), and so on. This allows, for example, the generation of a 'pp:Answer' transition after a 'pp:Ask' one. It also helps to avoid some possible deadlocks, when for example the interpreters, if chosen at the beginning of the generation process, turn out to be unable to personify the Functions needed to complete the story.
These two techniques, as the proposed names suggest, can be mixed in a graph-search algorithm, well suited for optimization problems and widely tested over the years. Each node in the search tree represents a modification of the parent RDF document, and the final solution can be chosen by weighting the various alternative paths.
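The mixed technique can be sketched as a classic best-first graph search: each node is a partial tale, each child refines its parent by choosing one concrete action for the next Function, and weights rank the alternative paths. The Functions, actions, and weights below are invented for illustration:

```python
import heapq

# Hypothetical expansions: each pp:Function can be realized by alternative
# actions, each with an illustrative weight (lower = preferred).
EXPANSIONS = {
    "Villainy": [("ToKidnap", 2), ("ToKill", 3)],
    "Struggle": [("ToFight", 1), ("ToDevour", 4)],
    "Victory":  [("ToWin", 1)],
}
ORDER = ["Villainy", "Struggle", "Victory"]

def search():
    """Best-first search over partial tales.

    The frontier holds (accumulated weight, chosen actions) pairs; with
    non-negative weights, the first complete node popped is optimal.
    """
    frontier = [(0, [])]
    while frontier:
        w, path = heapq.heappop(frontier)
        if len(path) == len(ORDER):
            return w, path
        for action, aw in EXPANSIONS[ORDER[len(path)]]:
            heapq.heappush(frontier, (w + aw, path + [action]))
```

In the model proposed in the text, a node would carry a modified RDF document instead of a simple action list, but the search skeleton is the same.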
Two new opportunities come naturally.
The input model can now be a partial tree: not only the initial <pp:Tale rdf:ID="TheTale"/> will be given, but also some other constraints, like a final pp:Transition having pp:Function = pp:Reward, with pp:action = pp:ToMarry and agents = pp:Ivan and pp:Princess. The generative process can then apply only pp:Actions that do not contradict the preconditions (expressed as usual in the model) of the already instantiated (future) Transitions.
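This constraint check can be sketched as follows; the precondition and effect vocabulary ('alive', 'dead', and the EFFECTS table) is invented for illustration:

```python
# A final pp:Transition given in advance: Ivan marries the Princess as a
# Reward. Its (hypothetical) preconditions constrain earlier actions.
FINAL_TRANSITION = {
    "function": "Reward",
    "action": "ToMarry",
    "agents": ("Ivan", "Princess"),
    "preconditions": {("Ivan", "alive"), ("Princess", "alive")},
}

# Hypothetical effects of candidate actions on their target.
EFFECTS = {
    "ToKill":   lambda target: {(target, "dead")},
    "ToRescue": lambda target: {(target, "alive")},
}

def contradicts(effects, preconditions):
    """An effect (x, 'dead') contradicts a precondition (x, 'alive')."""
    return any((who, "alive") in preconditions and state == "dead"
               for who, state in effects)

def allowed(action, target):
    """May this action be applied without blocking the final Transition?"""
    return not contradicts(EFFECTS[action](target),
                           FINAL_TRANSITION["preconditions"])
```

So, for example, killing the Dragon remains allowed, while killing the Princess is rejected because it would contradict the instantiated ending.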
The second idea is that, since graph-search data structures are so well known, we can probably re-apply some existing algorithms, provided they are able to read an RDF model.
Imagine a tale where the Characters are King, Queen, Bishop, Knight, Rook and Pawn, each one having specific pp:possibleAction(s). And now, conversely, imagine a Russian tale in which Ivan the Black and Ivan the White must kill the opposing King, applying their specific abilities (as defined by Propp), playing a pp:Transition in turn, guided by a computer chess engine.
Well, we obviously cannot squeeze the description of a project like this into one paragraph: it is just a hint to wonder about ...
As we saw, in the version proposed here the generative program can choose a pp:Action and its agents based on some given constraints related to previously created pp:Transition(s). This will not, in general, be enough. For example, if Ivan receives the FlyingHorse he can ToFly; but if the Dragon kills the Horse in the meantime, he can't. So we need some means to express fixed 'states' during the story, not only the sequence of actions. The answer is to introduce a pp:State concept. As you can imagine, this will rapidly lead us to a model very similar to others that are well known and widely used in the representation of information systems, such as UML or Petri Nets.
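The pp:State idea can be sketched as precondition/effect sets over a mutable story state, much as in the formalisms just mentioned; the predicate names below are hypothetical:

```python
# Each action has (hypothetical) preconditions on the story state, plus
# facts it adds to or deletes from that state.
ACTIONS = {
    "GiveFlyingHorse": {"pre": set(),
                        "add": {("Ivan", "hasHorse")}, "del": set()},
    "KillHorse":       {"pre": {("Ivan", "hasHorse")},
                        "add": set(), "del": {("Ivan", "hasHorse")}},
    "ToFly":           {"pre": {("Ivan", "hasHorse")},
                        "add": set(), "del": set()},
}

def apply_action(state, name):
    """Apply an action if its preconditions hold; return the new state, else None."""
    a = ACTIONS[name]
    if not a["pre"] <= state:
        return None
    return (state | a["add"]) - a["del"]
```

With this, Ivan can ToFly only while the state records that he has the Horse; once KillHorse removes that fact, ToFly is no longer applicable - exactly the Dragon scenario described above.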
So we would be tempted to define an RDF model for, let's say, UML. The point here is that, while today we are dealing with Propp's models, other people in the I.T. community are working hard on UML models, Petri Nets, and other general-purpose models. So we will not be required to re-invent them, but just to adhere to the developments in those fields.
Even in the simple example proposed here, some concepts like pp:Action, pp:subject, pp:impliedBy etc. are so general that Propp's model is certainly not the only dictionary in which they will be defined, and surely not the most general.
RDF itself "is a language influenced by ideas from knowledge representation, e.g. semantic nets, frames, and predicate logic".
We are not going to re-think the world, but just to rewrite it.
The answer, again, lies in recognizing that software beings may have their own language, and in letting them communicate with their fellows.
Why has Propp been so widely and deeply studied? An answer could be: because it proposes a simplified view of reality, not an artificial simplification but one authored by popular culture. So it is a way to analyze a complete reality, a sort of biosphere, without having to deal with all the complexity of the real world.
Describing a tale requires the same basic concepts as describing any sequence of actions, and these can be taken from any sort of fictitious or real world: a theatre plot, a movie, a real story, the sequence of operations in a factory process, a recipe for cooking a dish, the (hypothetical) description of a procedure that led to a program crash, an office workflow, ...
Every time we compose a sequence of actions, we are telling a tale.
A 'customer care' expert system asking "did you press escape on the Option dialog window?" is composing a tale. A CAM program preparing the instructions for a 'numerical control' milling machine is composing a tale. Should we continue?
If there is a common ground in all these 'stories', and hence in all these possible 'story teller' programs, we have to find it. Human beings recognize hidden meanings in their talking by means of the deep common origin of some key words, by means of undeclared but ever-present etymologies. Software beings can do the same, and even more: we can build for them a language based on a common, slowly growing set of concepts, expressing both meanings and internal relations.
This language works by 'descriptions'. It never jumps into 'natural language' expressions. Limiting our language in this way allows continuous refinement: we should never consider a description 'complete'.
At any time a program will be able to convert it into a human-friendly language: the only limit in expressiveness will be the level of detail of our description. From there, further refinement will be very difficult: it will be just an output, for human consumption.
Staying in Propp's domain, an objection by Levi-Strauss maintains that within a given Function, the same Action applied by a King or by a Shepherd assumes a deeply different meaning, given by specific properties of the two Characters (culture, wealth, ...). So, where Propp's description of the world is insufficient, we can refine it with such properties. From that moment, even the translation into a human language will be supported by more information and will be richer in expression.
Apart from the mere translation into human language, we should note that there are different ways to tell a story: from the beginning to the end, from the end, recalling preceding facts, jumping forward and backward, ... These are all scenographic techniques well known to writers and directors.
Obviously - did you doubt it? - this scenographic activity will also be expressed in RDF. We are, in a sense, telling the story of a writer who tells a story: he starts by telling the third scene, then tells the first, then the fourth recalling the third, ... The story of the writer is in the foreground, the written story in the background.
We can find similar 'meta-tales' also in the description of the creation activity itself.
We can describe a symphony from several points of view: formal, harmonic, psychological. This will give us an idea of the music, and will probably help a generator tool in its work; as everybody can feel, a musical work is also, in a sense, a tale.
But we can also describe the 'tale' of the composer writing his work. Maybe he has a great idea for a melody, but it has a 'response' character; so he starts by composing the (let's say) 'B' section of the theme, then adds a coherent 'A' part. Often the prelude is a sort of 'thematic index' of the following movements, so it is not strange to write it last.
Software beings need a language to communicate their interior thoughts, without having to translate them into a clumsy 'natural' language.
Human beings need a formal language to communicate with software beings, and among themselves: a language able to jump over dialects and national differences, toward a minimization of semantic ambiguities.
Generative art offers us a motive: both software and human beings need a means to express their creations in a harmonized way, in order to reciprocally interpret and study each other's works.
We maintain that the interior structure of thought must be freed, allowed to emerge, avoiding unnecessary translations. This can be achieved by following the emerging technologies in software-to-software communication, presently led by XML and its applications. The concept of metadata and its representation in XML further illuminates this path.
 Simple HTML Ontology Extensions (SHOE) Project
 Encoding Dublin Core Metadata in HTML, IETF - RFC2731
 Extensible Markup Language (XML) 1.0; http://www.w3.org/TR/REC-xml
 Dublin Core Metadata Element Set; http://purl.org/dc
 Uniform Resource Identifiers (URI): Generic Syntax; Berners-Lee, Fielding, Masinter, Internet Draft Standard August, 1998; RFC2396.
 A program telling folktales - M.Maiocchi - University of Milan, Etnoteam spa, TEAnO
 Verso una teoria algebrica della telenovela - P. Ferrara - Conference "Attenzione al potenziale", Firenze, 1991
 Unified Modeling Language; see http://www.omg.org/
 C. A. Petri, "General Net Theory", Proc. Joint IBM & Newcastle upon Tyne Seminar on Computer System Design, 1976.