\documentclass{article}
\usepackage{bera}
\usepackage[top=1.0in, bottom=1.0in, left=2.0in, right=1.0in]{geometry}
\usepackage{setspace}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{url}
\usepackage{datetime}
\newdateformat{honorsthesisdate}{\monthname[\THEMONTH], \THEYEAR}
\honorsthesisdate
\begin{document}
\setcounter{secnumdepth}{4}
\setcounter{tocdepth}{4}
\title{Explorations in Algorithmic Composition: Systems of Composition and Examination of Several Original Works}
\author{Jacob M. Peck\\Candidate for Bachelor of Science Degree in Computer Science\\\space\\State University of New York, College at Oswego\\College Honors Program}
\date{\today}
\maketitle
\thispagestyle{empty}
\pagebreak
\section*{\centering Abstract}
\setcounter{page}{1}
\pagenumbering{roman}
This paper examines the concepts and history of algorithmic composition and presents several example programs written in order to engage more deeply with the material at hand. An overview of the important events in the development of algorithmic composition is given, alongside discussions of several topics important to the understanding of algorithmic composition. A collection of custom programs is presented which forms the body of my hands-on experience with algorithmic composition, along with several example pieces from each. Areas for future study are identified, and several complications and weaknesses encountered in the research process are addressed.
\pagebreak
\renewcommand{\contentsname}{\centering Table of Contents}
\tableofcontents
\pagebreak
\section*{\centering Advice to Future Honors Thesis Students}
\addcontentsline{toc}{section}{Advice to Future Honors Thesis Students}
\setcounter{page}{1}
\pagenumbering{arabic}
\begin{doublespace}
Writing this thesis was the furthest thing from easy that I could have imagined. However, and this may seem completely contradictory, it is also the smoothest piece of writing I have ever put to paper thus far. I do believe that several things contributed to that, and it is my hope that, if and when it comes time for you to write a thesis, the following tips help you.

First and foremost, try your very hardest to complete all assignments in your section of HON 350 on time and before the semester is over. The organizational schedule they offer is a serious boon to thesis writers, and any student who is able to commit to starting off on this path will be leaps and bounds ahead of their peers. This is a tall order, to be sure, as only three students (I think) in my section of HON 350 managed to complete all of the required work, out of a class of at least twenty. But hard work and determination pay off---believe me.

Second, make sure your topic thoroughly interests you. You're going to be spending the next year and a half to two years researching every last nook and cranny of the topic, finding out what makes it tick, how to make it tick in a slightly different way, how to break it, why you might want to break it, how to build it back up again, etc. If you choose a topic that you're simply not interested in, you \textit{will} grow bored of it in the process, and either end up abandoning your thesis (only to start fresh, oh no!) or trudging through the pain and producing a sub-par work that neither you nor your advisors can be proud of. Pick a topic that is close to your heart, and better yet, don't be afraid to break new ground. If you want to research the effects of global warming on declining seal populations and how this relates to the blubber industry, but are afraid because there is a lack of sources, don't be afraid to draw your own conclusions! You are a scientist, a scholar, an academic, and your voice is important and should be heard! You will thank yourself, and your advisors will thank you for providing them something interesting to think about in the mean time.

Speaking of advisors, I firmly believe that a prudent choice of advisors is key to both the quality of your thesis and your motivation to move it along. My two advisors, Professor Craig Graci and Dr. James Early, challenged me every time we met to push a little further, to seek a bit deeper, and to think, think, think! about what I was doing with regard to my research. For this I could not be more grateful, as I am truly proud of what they helped me accomplish. They provided goals and sub-goals, and sometimes even sub-sub-goals, and in general were extremely supportive of me throughout the entire process. As a side effect, I like to think I've forged a great friendship with them, and I am extremely appreciative of that as well.

The best advice I can offer, however, is this: don't give up. The thesis may seem like it is a far way away, or an impossible monumental task, but believe me, it can be completed. Not only this, but you will find yourself better off for the experience. The process is grueling, to be sure, but it is even more rewarding in the end.
\end{doublespace}
\pagebreak
\section*{\centering Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgments}
\begin{doublespace}
I would like to thank several people who contributed to this paper, either directly or indirectly.

Firstly, I would like to thank my mother for all she has done to support me in my life, academics and otherwise. Without her, none of this would have been possible.

I would like to thank my wonderful Honors thesis advisors, Professor Craig Graci and Dr. James P. Early, without whom this endeavor would surely have fallen apart. The support I have received from these two in my research and my academic career has been absolutely amazing, and I cannot imagine better advisors or mentors to have.

I would like to thank my friends and co-workers, who allowed me to bounce ideas off of them whenever I started rambling (sorry, guys!). In particular, I would like to thank Missy Hill and Ian Mumpton, both of whom contributed greatly to my creative process and for having an uncanny knack for finding bugs even if they're not technologically inclined. I would also like to thank Chrissy Brandon, Jess Tetro, Jenn Cartossa, Katie Lachut, Adam Sternberg, and Janel Sullivan for keeping my music alive. It truly means a lot to me, guys.

Lastly, I would like to dedicate this thesis to the memory of my father, Robert Charles Peck, and my grandmother, Linna Margaret Wilcox. My father, who passed away while I was writing this, and my grandmother, who passed away during the formative years of my life, both influenced me in ways the world may never know, and I have every intention of making them proud. May you two forever play checkers. This one's for you!
\end{doublespace}
\pagebreak
\section*{\centering Author's Reflections}
\addcontentsline{toc}{section}{Author's Reflections}
\begin{doublespace}
\subsection*{By motives opaque}
My journey into algorithmic composition research was a rather natural progression due to various events in my life, combined with my natural interests. I have always been interested in music, as far as I can remember. This may be partly due to my childhood fascination with anything patterned and quantifiable, but I suspect it is a bit more primal and elementary than that. Music moves me, plain and simple.

Aside from being just a listener of music, I play several instruments as a hobby, but I wouldn't say that I am particularly good with any of them, nor am I trained at all. I just pick them up and play, and record when the mood hits. However, I do believe that this ability to comprehend the intricacies of several different instruments further increases my understanding and appreciation of the musical space that each instrument occupies.

Though my favorite collection of genres of music lies squarely in the metal spectrum, I find almost any music worth listening to and understanding, and I have for quite some time. As a result, my personal music library (great times that we live in, that each individual may have a music library that belongs solely to them) contains several thousand works by a little over a thousand different artists in such diverse genres as folk, death metal, neo-classical, baroque, anti-folk, post-punk, soft rock, and even a few albums completely composed of ambient sound recordings. Music may be found anywhere. But I digress. Suffice it to say that music is my fuel.
\subsection*{The flame lights Prometheus}
The first event that led me down the path of becoming a computer scientist occurred when I was around 7 or 8, and my father's friend moved in with him for a brief period. This man was a computer fanatic. Whether or not he possessed the skills he claimed is another story altogether, but his conversations with me fascinated me. He was a genuine intellectual, or as much as one can be while still talking appreciatively to a 7-year-old. He never excluded me from conversation simply because I was too young to understand. No, he simply explained things in terms that I could understand and moved on to the next subject. Looking back on it, those conversations helped to shape my future.

Moving beyond that, around age 15 or so I was a part of an engineering program hosted by Lockheed Martin, offering a free programming course in C++. I was hooked. Over the course of ten weeks, the class touched on the basics of programming, and led up to building a Yahtzee clone. Though it was simplistic work, it cemented my desire to continue programming.

While at SUNY Oswego, I opted to take a course titled ``Cognitive Musicology'' taught by Professor Graci. It was this course that eventually led to my research on algorithmic composition, due to the \verb+MxM+ interface and \verb+Clay+ programming language that Professor Graci uses to teach this course. It opened my eyes to the abilities of computers as musical machines, and allowed me to explore my ideas in a way that I was unable to previously.
\subsection*{Atlas takes a break}
Well, that brings us to why I began this research, but it says little of what I learned and how I grew through the process.

In researching the topic and implementing various approaches to algorithmic composition, I have gained a substantial amount of mastery over certain concepts, in particular cellular automaton theory and Lindenmayer systems. These topics, though large and varied on their own, are much easier to digest when concrete examples generate themselves in the form of music. Whether or not the results were the best they could be did not matter. What I gained from the process is undeniably much more than the sum of the output generated by my programs. I gained \verb+insight+, and that is something that truly has to be experienced to be understood.

Reading Nierhaus' book (see \cite{nierhaus} in the References section for more information) truly opened my eyes to the incredible things people have done within the spaces where music and computing intersect. First let me say a small word about Nierhaus' work: this is the single most comprehensive tome on any single subject that I have ever encountered, and yet it remains an approachable read for those who are interested. As for the people detailed in the chapters of Nierhaus' history of algorithmic composition, every story and every personality is interesting. From Hiller and Isaacson to Cope to Xenakis to Miranda, it seems that every name attached to algorithmic composition has a unique approach to offer and a personal viewpoint that changes your perception of algorithmic composition as a whole. I firmly believe that it is the exploratory and experimental nature of these people that has influenced me to keep working on new ideas, even if they don't fill an immediate need.
\subsection*{The scribe uses Babbage's Difference Engine}
The technology I've worked with in crafting this thesis was an incredible experiment in and of itself.

For the Java programs (\verb+Wordsongs+, \verb+HexMusicExperiments+, \verb+Markov Machine+, \verb+LCompose+, and \verb+ALM+'s \verb+JFugueDaemon+), working with JFugue and Gervill was a huge boon. Though Gervill isn't represented in this thesis, it did allow me to create some interesting renditions of the examples provided within the text by substituting in different soundbanks. JFugue, on the other hand, was central in creating the programs, as it offered an incredibly simple, intuitive way of crafting musical building blocks with Java. Native support for working with the MIDI standard was nice as well. Learning a bit about MIDI files goes a long way, as evidenced by the chaotic and ugly scores of the \verb+Wordsongs+ examples, where I paid no attention to proper formatting, resulting in pieces that translate unreliably. Compare those scores with the scores from, say, the \verb+Markov Machine+, and you'll see a marked increase in readability; over the course of my experiments with JFugue, I became more aware of how to properly format a MIDI file.

\verb+ALM+ is a different beast altogether, being programmed in Common Lisp. \verb+ALM+ also has the distinction of being the largest project I've written, as well as the first modularized project I have undertaken. The flexibility and fluid nature of Lisp allowed me to do things I would have never been able to do with the rigid structure of strongly-typed languages like Java, while at the same time allowing me to express my thoughts in a concise, concrete manner. Lisp is by far my favorite language at the moment, and I oftentimes find myself using it to perform some simple calculation, instead of reaching for my calculator.

With this document, I have become intimately acquainted with \LaTeX, a typesetting language that I highly recommend whenever someone asks me how to make a document look professional. \LaTeX\ has been nothing but a pleasure to use; it takes care of all the nitty-gritty details for you, allowing you to get right into writing your document. Other word-processing packages oftentimes have an interface that isn't all that intuitive and gets in the way of writing. I like my word processing like I like my code editing, thank you.

Typesetting the music was an interesting aspect of this process as well. I settled on sending the MIDI files generated by JFugue through LilyPond, a music typesetting package. LilyPond generated .png (Portable Network Graphics) images of the scores, which I then imported into this document. Though a bit indirect, this works without a hitch and provides fairly decent results. There are music typesetting packages for \LaTeX, but I could not manage to get them working in my setup. LilyPond worked out of the box, so there was no reason not to use it.
\subsection*{for(;;)\{learnAndGrow();\}}
This thesis has been a tremendously rewarding experience for me. If it is never read by anyone beyond myself and my advisors, I would not mind, because the writing and research put into this has shaped me into a better writer, and a better scholar. I feel honored to have worked on this, simply because I was able to pour myself into it and receive so much in return. But this thesis is far from the end of my academic and personal growth. I know that in the grand scheme of things, I will have to constantly learn and relearn things in my life, and I couldn't be more excited for it.

Thanks for reading, and I sincerely hope you enjoy what follows.\\
\textit{$\longrightarrow$ Jake, September 2011}
\end{doublespace}
\pagebreak
\section*{\centering Thesis Body}
\addcontentsline{toc}{section}{Thesis Body}
% section 1 - introduction
\section{Introduction}
\subsection{What is algorithmic composition?}
\begin{doublespace}
The term ``algorithmic composition'' encompasses many different ideas. Gerhard Nierhaus has defined algorithmic composition as ``composing by means of formalizable methods,'' a definition which is broad enough to cover all of what could be considered algorithmic composition, and yet narrow enough to exclude more traditional methods of composition \cite{nierhaus}. Another, more modern way of defining algorithmic composition is as music made with computers and mathematics. Though this definition provides a basic idea of what algorithmic composition consists of, it fails to capture its full extent, as neither computers nor mathematics are required to compose a piece algorithmically. What algorithmic composition truly consists of, then, is wrapped within the term itself---composing music with algorithmic processes. This still leaves much to be desired, however.

There are many camps of algorithmic compositional thought, each with its own approach to the concept, and these may be divided into two major schools. The first major school of thought in the practice of algorithmic composition is that of style replication. The style replication approach is exactly what its name entails---replicating style through the use of algorithmic processes. Style replicationist approaches to algorithmic composition tend to use heavily mathematical methods to perform statistical analysis upon a corpus of original works, in order to generate (with or without human interaction) a new piece of music in the perceived style of the corpus. Several popular and successful projects have emerged from this line of thought, including David Cope's \verb+EMI+, short for ``Experiments in Musical Intelligence.'' \verb+EMI+, given the proper corpus, has been able to compose new pieces in the style of Bach, Joplin, and even Balinese gamelan music \cite{cope}. However, despite its success, the style replicationist approach to algorithmic composition has its own limitations. One common criticism of style replication is that the systems used to produce these new pieces are inflexible and allow for little human interaction \cite{valle}. Likewise, the very nature of style replication leaves little room for new and creative inspiration, due to the statistics-heavy approaches these systems generally take.

The other major school of algorithmic composition is the school of original works. This school focuses on using algorithmic processes to compose original pieces of music in a style of the composer's choosing. These approaches are typically less statistics-heavy and focus instead on original ideas held by the composer to create a unique piece of music. From these efforts, many different approaches are taken to creating an interesting piece of music that may not necessarily be aesthetically pleasing to listeners of tonal music but provides insights into how music works on a fundamental level. Several composers have made careers out of ideas from this camp of algorithmic composition, including architect-turned-composer Iannis Xenakis \cite{edwards,cope2}.
\end{doublespace}
\subsection{A brief history of algorithmic composition}
\begin{doublespace}
Algorithmic composition has an interesting history, spanning centuries, cultures, and ideas. The first recorded example of algorithmic composition comes from around 1000 A.D., with Guido d'Arezzo's automated method for creating chorale pieces out of bodies of text \cite{nierhaus,edwards}. Essentially, d'Arezzo assigned vowels to pitch classes within a key, and simply translated the vowels within the text into a chorus line to accompany the text \cite{nierhaus}. This system was designed primarily for the use of churches to create hymns easily \cite{nierhaus}.

From these humble beginnings, an entire area of study arose. In 1650, Athanasius Kircher's work ``Musurgia Universalis'' contained the outline for an algorithmic composition system titled ``Arca Musarithmica'' \cite{nierhaus}. This system utilized a number of engraved wooden sticks called \textit{syntagmas} \cite{nierhaus}. Through manipulation of these \textit{syntagmas}, composition of material in five different musical styles was possible \cite{nierhaus}. In 1661, Kircher created a device for a child called the ``mathematical organ'' which, among myriad other uses, was able to compose rudimentary music based on the principles of what would later become set theory and type theory \cite{nierhaus}.

With the eighteenth century came the rise of musical dice games. Starting with Johann Philipp Kirnberger's ``Der allezeit fertige Menuetten- und Polonaisencomponist'' (``The ever-ready minuet and polonaise composer'') in 1757, these games typically revolved around rolling a pair of six-sided dice to determine which bar to select from a table of pre-written measures \cite{nierhaus}. By 1812, at least twenty different examples of these musical dice games were in existence, from many different composers, including C. P. E. Bach, Maximilian Stadler, and even Haydn and Mozart, though the latter two cases are unconfirmed \cite{nierhaus}.

The development of the computer provided a great tool for algorithmic composition, and an explosion of efforts occurred in the following years. In 1955 and 1956, Lejaren Hiller and Leonard Isaacson created the first fully computer-composed work of music on the University of Illinois's ILLIAC computer \cite{nierhaus}. Titled the ``Illiac Suite,'' this work was divided into four distinct parts, each using a different method of composition, culminating in the fourth section, which deals with Markov analysis \cite{hiller}. In 1963, Hiller and Robert Baker created the first interactive environment for computer-assisted music composition, titled \verb+Musicomp+ \cite{nierhaus}. In 1970, F. Richard Moore's \verb+Groove+ gave computers a way to deal with time signatures and other musical time measurements \cite{nierhaus}. In the 1980s came \verb+Midi Lisp+ (an extension of the \verb+Lisp+ programming language for MIDI manipulation), \verb+Patch Work+ (a graphical interface lying over top of \verb+Lisp+ designed to ease composition efforts), \verb+Csound+ (a \verb+C+-derived music programming language), and \verb+Bol Processor+ (a system for real-time composition and improvisation) \cite{nierhaus,edwards}. The 1990s saw a fair number of algorithmic composition efforts as well, including \verb+Open Music+ (a \verb+Lisp+-like visual programming language), \verb+Common Music+ (an extension of \verb+Common Lisp+), and \verb+Symbolic Composer+ (a music programming language) \cite{nierhaus,roads}. More modern approaches include \verb+PureData+, \verb+SuperCollider+, \verb+Nyquist+, and \verb+Max/MSP+, all of which are music programming languages \cite{nierhaus}.

The above history of algorithmic composition is simply a sketch of the most salient events in the development of the area. It is very important to note that all of these systems build upon the ones that came before them, and draw from vastly differing fields such as mathematics, logic, physics, and even economics. These approaches represent a moving tide of ideas that brings algorithmic composition to the position it occupies today. In the following sections, modern approaches to algorithmic composition will be addressed, particularly with respect to four different models of algorithmic thought---cellular automata, Markov chains, Lindenmayer systems, and genetic algorithms---in order to provide a foundation for the work to be presented later on.
\end{doublespace}
\subsection{Cellular automata}
\subsubsection{What are cellular automata?}
\begin{doublespace}
Cellular automata are a subset of a field known as artificial life, which is itself a part of artificial intelligence. The goal of studying artificial life is to attempt to find solutions to problems that arise out of the interactions of discrete agents within systems made of relatively simple rules. These agents are often driven towards survival and away from death, and the ways in which they interact provide insights into the microworld in which they live, and may also provide clues to interactions within the real world \cite{coppin}.

Cellular automata are a class of artificial life simulations defined by being composed of a world of discrete cells (the agents) which are aware of their neighbors. Each cell may take one of many states representing a stage of the cell's ``life,'' so to speak, and this state, along with the states of the cell's neighbors, determines which state the cell will take in the next generation \cite{nierhaus,weisstein3}. Most cellular automata are synchronous, meaning that the entire world---each and every cell---enters the next generation at the same time. There are some variations on this basic rule, however, such as cyclic cellular automata, in which each cell is iterated in a predefined order, or each row or column is iterated simultaneously but separately from the other rows or columns \cite{nierhaus}. Another iteration strategy is the random cyclic cellular automaton, a cyclic cellular automaton whose iteration order is randomized by some source of entropy between generations. A third variation is the continuous cellular automaton, whereby every cell is \textit{continuously} iterating and the entire system never comes to a definable generation, making the state of the entire system at any given point in time difficult to determine \cite{nierhaus}.

In terms of world geometry, several possibilities are common. Most common, perhaps, is the two-dimensional grid of cells, in which each cell may be located in space by an \verb+(x,y)+ Cartesian pair \cite{weisstein3}. This topology lends itself well to ``stitching'' the world into a torus shape in three-dimensional space by attaching the left and right sides of the grid, as well as the top and bottom. In doing this, the world is continuous in all directions, but finite \cite{nierhaus}.

A one-dimensional cellular automaton is defined as a single line of cells, each having exactly two neighbors (commonly denoted ``left'' and ``right'' for simplicity). Such a world may also be ``stitched'' so that the ``rightmost'' cell is a neighbor to the ``leftmost'' cell, creating either a ring or a M\"{o}bius strip, depending on personal interpretation.

Hexagonal two-dimensional cellular automata are uncommon but practical, as will be demonstrated in the section on \verb+HexMusicExperiments+ later. These cellular automata are composed of cells placed such that each cell has exactly six neighbors, and the world may be ``stitched'' by connecting opposite edges, though for most purposes this is unnecessary given enough room within the system.

Moving beyond two dimensions, three- and four-dimensional cellular automata exist, but they are typically extensions of the two-dimensional grid in which the additional dimensions act as a history of previous generations \cite{nierhaus}.

In terms of what defines a neighbor, there are several definitions provided by various researchers. The von Neumann neighborhood (after John von Neumann, computing pioneer among many other pursuits), when referring to two-dimensional grid cellular automata, is simply defined as the four neighbors in the cardinal directions. The Moore neighborhood (after Edward F. Moore, mathematician) expands this definition to also include the diagonal neighbors. For one-dimensional cellular automata, Stephen Wolfram, a mathematician specializing in cellular automata, has defined the Wolfram neighborhood as the two cells directly adjacent to a cell, with the specification that the ``left'' cell is distinguishable from the ``right'' cell, rather than the two being lumped together. For hexagonal cellular automata, the neighborhoods are variously defined, but one common arrangement adapts the Moore neighborhood to a hexagonal grid, resulting in the six hexagonal cells directly touching a cell \cite{nierhaus}.
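
To make these mechanics concrete, the following sketch (written for this discussion, not drawn from any of the thesis programs; the class and method names are my own invention) advances a one-dimensional cellular automaton by one generation using the Wolfram neighborhood, with the world ``stitched'' into a ring. The rule-number encoding follows Wolfram's convention for elementary cellular automata.

```java
// A minimal sketch of one generation of a one-dimensional cellular
// automaton using the Wolfram neighborhood, on a ring-shaped world.
public class ElementaryCA {
    // Advance the whole world one generation under a Wolfram rule (0-255).
    // Bit (4*left + 2*self + 1*right) of the rule gives the next state.
    static int[] step(int[] cells, int rule) {
        int n = cells.length;
        int[] next = new int[n];
        for (int i = 0; i < n; i++) {
            int left  = cells[(i - 1 + n) % n];   // wrap around: ring world
            int right = cells[(i + 1) % n];
            int index = (left << 2) | (cells[i] << 1) | right;
            next[i] = (rule >> index) & 1;        // synchronous update
        }
        return next;
    }

    public static void main(String[] args) {
        int[] world = {0, 0, 0, 1, 0, 0, 0};      // a single live cell
        world = step(world, 30);                   // Wolfram's Rule 30
        StringBuilder sb = new StringBuilder();
        for (int c : world) sb.append(c);
        System.out.println(sb);                    // prints 0011100
    }
}
```

Note that every cell's next state is computed from the current generation before any cell is overwritten, which is exactly the synchronous update described above.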
\end{doublespace}
\subsubsection{Brief history of cellular automata}
\begin{doublespace}
According to research by Stephen Wolfram, the history of cellular automata as a concrete idea begins around 1951 with John von Neumann \cite{wolfram}. Interested in the idea of self-replication, von Neumann began to search for a system that could replicate itself. He initially thought such a system would require three dimensions, but after conversing with Stanislaw Ulam (an American mathematician), he later decided that self-replication was quite possible within two dimensions \cite{wolfram}. Following this conversation, between 1952 and 1953 he devised a two-dimensional cellular automaton with an astonishing twenty-nine states per cell, designed to mimic the reproductive aspects of both digital logic circuits and biological and chemical processes. With this system in place, von Neumann attempted to outline the creation of a machine within this world that would be capable of constructing a copy of itself \cite{wolfram}. His outline called for 200,000 cells, and it appears as though he did not believe that anything less complex would ever be capable of self-replication \cite{wolfram}.

Out of von Neumann's original research, the 1960s saw increased interest in the self-replication problem. Increasingly simple cellular automata were devised that were capable of self-replication and, more interestingly, universal computation \cite{wolfram}. These renewed efforts allowed researchers to draw parallels between cellular automata and computing \cite{wolfram}. By the 1970s, however, cellular automata had entered a phase of recreational study rather than serious research. In fact, it was in 1970 that perhaps the most famous cellular automaton, John H. Conway's Game of Life, was brought into existence \cite{wolfram}. Defined as a two-state, two-dimensional cellular automaton---in which a ``live'' cell is ``live'' in the next generation if it has two or three ``live'' neighbors in the current generation, a ``dead'' cell is ``live'' in the next generation if it has exactly three ``live'' neighbors in the current generation, and any other cell is ``dead'' in the next generation---Conway's Game of Life injected new interest into the study of cellular automata, though that interest remained mostly recreational \cite{wolfram}.

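
The rules described above can be stated quite compactly in code. The following sketch (again written for this discussion rather than taken from any of the thesis programs; all names are my own invention) computes one synchronous generation of Conway's Game of Life on a small grid, treating out-of-bounds positions as dead rather than stitching the world into a torus.

```java
// A minimal sketch of one synchronous generation of Conway's Game of Life
// on a small two-dimensional grid (1 = "live", 0 = "dead").
public class GameOfLife {
    // Count "live" cells among the eight surrounding (Moore) neighbors,
    // treating out-of-bounds positions as dead.
    static int liveNeighbors(int[][] g, int r, int c) {
        int count = 0;
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) continue;
                int rr = r + dr, cc = c + dc;
                if (rr >= 0 && rr < g.length && cc >= 0 && cc < g[0].length)
                    count += g[rr][cc];
            }
        return count;
    }

    static int[][] step(int[][] g) {
        int[][] next = new int[g.length][g[0].length];
        for (int r = 0; r < g.length; r++)
            for (int c = 0; c < g[0].length; c++) {
                int n = liveNeighbors(g, r, c);
                // Live with 2 or 3 live neighbors survives; dead with
                // exactly 3 live neighbors becomes live; all else is dead.
                next[r][c] = (g[r][c] == 1) ? ((n == 2 || n == 3) ? 1 : 0)
                                            : ((n == 3) ? 1 : 0);
            }
        return next;
    }

    public static void main(String[] args) {
        // A vertical "blinker": it flips between vertical and horizontal.
        int[][] world = {
            {0, 1, 0},
            {0, 1, 0},
            {0, 1, 0}
        };
        world = step(world);
        for (int[] row : world)
            System.out.println(java.util.Arrays.toString(row));
    }
}
```

Running the sketch on the vertical ``blinker'' yields its familiar horizontal counterpart after one generation, a small confirmation that the three rules above are doing their job.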
From this point onward, until around 1981 when Stephen Wolfram began studying cellular automata, the field had relatively little scholarly interest \cite{wolfram}. With the publication of Wolfram's first cellular automata paper in 1983, cellular automata received a renewed interest, and since then have been studied by increasingly more researchers \cite{wolfram}.
\end{doublespace}
\subsubsection{Brief history of cellular automata based algorithmic composition techniques}
\begin{doublespace}
Cellular automata based algorithmic composition efforts were first attempted by Peter Beyls. In 1989, he established a set of extended rules for one-dimensional cellular automata that might be usable in music generation \cite{nierhaus}. This system used data input by a user to generate multiple voices of a melody. In 1991, Beyls allowed the user to interact with a two-dimensional cellular automaton in real time to affect the musical output of the program \cite{nierhaus}. This program also included self-regulating rule systems which prevented stagnation and over-stability---if the system was too stable, it modified its rules on the fly to create a more dynamic system \cite{nierhaus}. Further, in 2003, Beyls' program \verb+CA Explorer+ utilized one-dimensional cellular automata along with Lindenmayer systems, refined through the use of a genetic algorithm, to produce musical output \cite{nierhaus}. The section on \verb+ALM+ below describes a system of my own design that combines these three algorithmic ideas in a similar way, though it was developed without knowledge of Beyls' work.

Following Beyls, Dale Millen produced \verb+Cellular Automata Music+ in 1990 which used the data output by cellular automata of varying dimensions and mapped these points to pitch and duration values to generate music \cite{nierhaus}. Millen later expanded this work, and in 2005 described a new version of the software which implemented an extended version of Wolfram's one-dimensional cellular automata rules \cite{nierhaus}.
In 1993, another cellular automata based system arose from Eduardo Reck Miranda, titled \verb+CAMUS+ (from \textit{C}ellular \textit{A}utomata \textit{MUS}ic, not to be confused with Millen's project above) \cite{nierhaus}. This system uses two different cellular automata running in parallel---Conway's Game of Life and the so-called ``Demon Cyclic Space''---to generate triads \cite{nierhaus}. In \verb+CAMUS+, Conway's Game of Life provides the pitch and duration information of the root note, while the Demon Cyclic Space provides the triadic structure through a set of rules devised by Miranda. In the section on \verb+HexMusicExperiments+ below, a simplified approach to this type of project is set forth, utilizing three parallel cellular automata of varying dimensions and rules to generate ``chords'' of loosely coupled notes.
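To make the general shape of these mappings concrete, the following Python sketch maps successive generations of a one-dimensional cellular automaton (Wolfram's rule 110) onto pitches in a C major scale. The rule choice and the count-to-scale-degree mapping are illustrative assumptions of mine, not the actual schemes used by Beyls, Millen, or Miranda.

\begin{lstlisting}[language=Python]
# Minimal sketch: map generations of a one-dimensional cellular
# automaton to pitches. Rule 110 and the C major MIDI scale are
# illustrative choices, not any published system's actual scheme.

RULE = 110
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI note numbers

def step(cells, rule=RULE):
    """Advance one generation of an elementary (1-D, two-state) CA.

    Each cell's neighborhood (left, self, right) forms a 3-bit index
    into the rule number, whose bits give the next cell state."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_melody(seed, generations):
    """Each generation contributes one note: the count of live cells,
    taken modulo the scale length, selects a pitch."""
    row, notes = list(seed), []
    for _ in range(generations):
        notes.append(SCALE[sum(row) % len(SCALE)])
        row = step(row)
    return notes

# A single live cell in a 16-cell ring yields a 16-note melody.
melody = ca_melody([0] * 7 + [1] + [0] * 8, 16)
\end{lstlisting}

Richer systems differ mainly in what they read off the automaton (single cells, clusters, whole rows) and which musical parameters they drive with it, but the skeleton---seed, iterate, map---stays the same.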
\end{doublespace}
\subsection{Markov chains}
\subsubsection{What are Markov chains?}
\begin{doublespace}
Markov chains belong to the fields of statistical analysis and probability. A Markov chain (also known as a Markov model) takes the form of a matrix giving the probability that a system in its current state at time $t$ will enter each possible state at the next time interval, $t+1$. The difference between Markov models and more cut-and-dried probability charts is that Markov models work stochastically, generating probabilities for the next state given a number of previous states. For instance, Nierhaus demonstrates a simple Markov model for weather prediction, which, based upon today's weather, provides probabilities for the weather tomorrow \cite{nierhaus}.
Markov models are classified as $n$th-order Markov models, where $n$ is the number of previous states the probabilities depend upon. The weather prediction model above is a first-order Markov model, as it depends solely upon the weather today---one previous state. Markov models of arbitrarily high order are possible, though as a rule, the higher the order of a Markov model, the more specific and inflexible its output becomes \cite{nierhaus,weisstein4}.
A subclass of Markov models is known as hidden Markov models, in which the output of the model is plainly observable, but the internal workings of the system itself are obscured. Such a system may occur in a situation where not all agents in the system are aware of the states of the others. To extend the weather prediction example above, the model may predict a thunderstorm tomorrow even given sunny skies today, because hidden information of which the user is unaware---barometric pressure, for example---informs the model. Hidden Markov models thus make obscured but informed predictions, sensible in a fuzzy-logic sort of way \cite{nierhaus}.
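A first-order Markov chain of the kind Nierhaus describes can be sketched in a few lines of Python. The transition probabilities below are illustrative assumptions of mine, not values taken from Nierhaus:

\begin{lstlisting}[language=Python]
import random

# A first-order weather model: each row of the transition matrix
# gives the probabilities for tomorrow's weather given today's.
# The numbers are illustrative, not drawn from any source.
TRANSITIONS = {
    "sunny":  {"sunny": 0.7, "cloudy": 0.2, "rainy": 0.1},
    "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
    "rainy":  {"sunny": 0.2, "cloudy": 0.4, "rainy": 0.4},
}

def next_state(state, rng):
    """Stochastically choose tomorrow's weather given today's."""
    states = list(TRANSITIONS[state])
    weights = [TRANSITIONS[state][s] for s in states]
    return rng.choices(states, weights=weights)[0]

def forecast(start, days, seed=0):
    """Walk the chain for a number of days, returning every state
    visited (a seeded generator keeps runs reproducible)."""
    rng = random.Random(seed)
    chain, state = [start], start
    for _ in range(days):
        state = next_state(state, rng)
        chain.append(state)
    return chain
\end{lstlisting}

A second-order version would simply key the transition table on pairs of previous states rather than single states---which is also why higher orders grow rigid: most multi-state histories occur only once in the training data, leaving a single possible successor.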
\end{doublespace}
\subsubsection{Brief history of Markov chains}
\begin{doublespace}
The development of Markov chains seems a bit convoluted, but by tracking papers published from the eighteenth through the twentieth centuries, a sort of mathematical lineage may be uncovered. In 1738, the mathematician Daniel Bernoulli wrote a paper on probability theory---the first on the topic to appear in Russia---planting the seeds for mathematicians there to begin exploring the area \cite{basharin}. However, the field went largely unexplored for a little over a century, until 1846, when V. Y. Bunyakovsky wrote the first Russian-language textbook on probability theory \cite{basharin}. In that same year, the first Russian dissertation on probability theory was completed by Pafnuty L. Chebyshev at Moscow University \cite{basharin}. Chebyshev went on to become Andrei Andreevich Markov's teacher and mentor \cite{basharin}.
Markov, from whom Markov chains get their name, built upon the works of several mathematicians before him, even if Markov himself was unaware of the fact. In 1900, the first edition of Markov's seminal textbook ``Calculus of Probabilities'' reached publication \cite{basharin}. With this, he established himself in a prime position to begin more serious theoretical work. By 1906, he had written (and in 1907 published) his first paper on ``simple chains,'' which consisted of only two states, \verb+0+ and \verb+1+ \cite{basharin}. This paper never used the term ``Markov chain,'' which first appeared in a 1926 paper by S. N. Bernstein \cite{basharin}. After this paper's publication, Markov's colleague and correspondent A. A. Chuprov pointed out in a letter to Markov several predecessors of the same theories at which Markov had arrived, including work by D. Bernoulli, Laplace, and the Ehrenfests, the concept of Brownian motion, Bunyakovsky's random walks, and Bachelier's study of the stock exchange \cite{basharin}. Markov, apparently unaware of these previous works, began to study them whenever he could procure the articles, and as a result began to refine and redevelop his chaining theory \cite{basharin}.
In 1913, the third edition of ``Calculus of Probabilities'' was published, containing the first real-world application of Markov's chains \cite{basharin}. Markov analyzed the probabilities of vowels following vowels and vowels following consonants in two staggeringly large works: A. S. Pushkin's poem ``Eugeny Onegin'' (consisting of 20,000 letters) and S. T. Aksakov's novel ``The Childhood of Bagrov, the Grandson'' (consisting of 100,000 letters) \cite{basharin}. This being the early twentieth century, Markov would have had to perform all of this calculation by hand, but that did not prevent him from providing this persuasive example \cite{basharin}. Markov found that in Pushkin's poem, any given letter had a probability of $0.432$ of being a vowel; through the use of his chains, he found a probability of $0.128$ that a vowel would follow another vowel, and a probability of $0.663$ that a vowel would follow a consonant \cite{basharin}. In Aksakov's work, he found a probability of $0.449$ that any given letter would be a vowel, and, through the use of his chains again, a probability of $0.552$ that a vowel would follow another vowel and a probability of $0.365$ that a vowel would follow a consonant \cite{basharin}.
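Markov's hand analysis is easy to reproduce in miniature today. The sketch below estimates, for any sample text, the probability that a vowel follows a vowel and that a vowel follows a consonant; the English pangram used here is merely a stand-in for Markov's Russian corpora.

\begin{lstlisting}[language=Python]
# Markov's letter analysis in miniature: count vowel/consonant
# transitions in a text and turn them into conditional probabilities.
# The sample text is a stand-in, not Markov's actual corpora.

VOWELS = set("aeiou")

def vowel_transition_probs(text):
    """Return (P(vowel | previous vowel), P(vowel | previous consonant)),
    estimated from adjacent letter pairs in the text."""
    letters = [c for c in text.lower() if c.isalpha()]
    after_vowel = after_cons = 0
    vowel_then_vowel = cons_then_vowel = 0
    for prev, cur in zip(letters, letters[1:]):
        if prev in VOWELS:
            after_vowel += 1
            vowel_then_vowel += cur in VOWELS
        else:
            after_cons += 1
            cons_then_vowel += cur in VOWELS
    return (vowel_then_vowel / after_vowel,
            cons_then_vowel / after_cons)

p_vv, p_cv = vowel_transition_probs(
    "the quick brown fox jumps over the lazy dog")
\end{lstlisting}

Note that, like Markov's own figures, these two conditional probabilities differ sharply from the unconditional vowel frequency---exactly the dependence between successive states that a zeroth-order model cannot capture.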
\end{doublespace}
\subsubsection{Brief history of Markov chain based algorithmic composition techniques}
\begin{doublespace}
Markov models in algorithmic composition frequently fall into the style-replicationist school. In 1950, Harry F. Olson pioneered the use of Markov chains in composition, applying Markov analysis to generate first- and second-order models of melodies by Stephen Foster. The second-order model showed a markedly greater ability to compose coherent and harmonious pieces than its first-order counterpart \cite{nierhaus}.
The 1950s held several other Markov-related algorithmic composition efforts. In 1955, Lejaren Hiller and Leonard Isaacson began work upon their ``ILLIAC Suite,'' widely regarded as the first completely computer-composed work of music \cite{nierhaus}. In the fourth part of the suite, Markov models of varying orders are utilized to generate musical structure \cite{nierhaus}. The algorithmic composer Iannis Xenakis began using Markov models in 1958, leading to the work ``Analogique A,'' in which Markov models arrange segments of differing sonic density \cite{nierhaus}. Xenakis also used Markov models in the pieces ``Analogique B'' (1958) and ``Syrmos'' (1959) \cite{nierhaus}.
Also of note is the 1957 effort of F. P. Brooks, A. L. Hopkins, P. G. Neumann, and W. V. Wright. These four used a body of thirty-seven chorale melodies (all in common time, containing no notes shorter than an eighth) to produce Markov models of varying orders (between first- and eighth-order), and subsequently utilized these models to generate compositions \cite{nierhaus}. One important result of this work is concrete evidence for the central problem of Markov models in algorithmic composition: too low an order tends towards incoherence and randomness, while too high an order runs the risk of simply rewriting the piece without offering any insights or creative differences between the source and the finished product \cite{nierhaus}.
Hidden Markov chains have seen their fair share of use in algorithmic composition as well. In 2000, Martin Hirzel and Daniela Soukup used a hidden Markov model to generate jazz improvisations from patterns entered into their program by the user. These patterns are modified, based upon prior training on a specific piece (``Forest Flower (Sunrise)'' by Charles Lloyd), through an algorithm whose workings are hidden from the user \cite{nierhaus}. In 2001, Mary Farbood and Bernd Schoner utilized hidden Markov chains to generate counterpoint against a melody entered by the user; the program processes this melody via a hidden algorithm and produces a counterpoint line as output \cite{nierhaus}.
\end{doublespace}
\subsection{Lindenmayer systems}
\subsubsection{What are Lindenmayer systems?}
\begin{doublespace}
A Lindenmayer system (or L-system) is a formal generative grammar with certain integral constraints upon its implementation \cite{nierhaus}. Defining an L-system involves defining a three-tuple of values: the alphabet (a list of symbols which acts as both the input and the output lexicon for the grammar), the transformations (a set of rewrite rules defined in the following manner: \verb++ $\rightarrow$ \verb+