In this paper I have described the construction of three reactive musical engines that provide ways to produce continuous music throughout a variety of game scenarios and the transitions between them. The introduction showed that music has the power to aid in the creation of an immersive virtual reality for the player. It also showed that scholars have noted an abruptness to some transitions within games, where music is cut short to meet a change in the game’s state. Crossfading between two pieces of music during such a transition shows that the industry has already made some effort to lessen the disruptive impact of these moments. Yet even with a crossfade, the transition can still sound abrupt (see the Final Fantasy VII demonstration video), since the musical style changes instantaneously, and the result is still a lessening of the game’s immersive effect on the player. Although the subjects of the case studies in this paper span three decades, this abrupt musical curtailment persists in modern games, including the subject of the case study in Chapter Three, Rogue Legacy. The prevalence of these mechanics in modern games (see Chapter One, The Binding of Isaac and FTL: Faster Than Light), coupled with the aesthetic judgment that they are inadequate musical and programmatic solutions given current technology, provides the justification for this study.[1] Although this study uses some scientific methodology and terminology, its successes are judged against my own aesthetic concerns about music and my experience, both as a composer of music and as a player of video games.

The branching music system discussed in Chapter One uses a musical pixellation technique to separate a continuous score of music (an archbranch) into small segments so that the program knows its location during playback. Coupling this technique with other archbranches and capillary branches allows pre-composed, visually reactive transitioning to take place within the game’s music. This builds on scripting solutions, such as iMUSE, and on some of the branching capabilities of the FMOD software, by proposing the concept of high musical resolution, where the distance between musical pixels is temporally short. Its success lies in the speed with which a high-resolution branching engine can adapt pre-composed music to changing game states. The branching music engine is potentially limited by its need for large amounts of hard drive space, which may not suit gaming platforms with small drives, such as iOS devices or other handheld systems. A further limit arises for the composer, who must produce large amounts of material for such a potential-music system; this would be a time-consuming process for any single composer. Chapter One made some attempt to address this by way of heuristic methods applied across multiple branches that may be composed similarly.
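The branching idea above can be sketched in a few lines of code. This is an illustrative sketch only, not the engine described in Chapter One: the names (Segment, archbranch labels, branch_lengths) are hypothetical, and the scheduler is reduced to a single decision made at each pixel boundary.

```python
# Hypothetical sketch of "musical pixellation": a continuous score
# (archbranch) is cut into short segments ("pixels"), and at each
# pixel boundary the engine may branch to another archbranch.
from dataclasses import dataclass

@dataclass
class Segment:
    """One musical 'pixel': a short pre-composed slice of an archbranch."""
    branch: str    # which archbranch this pixel belongs to
    index: int     # position within that branch
    beats: float   # pixel length; shorter pixels = higher resolution

def next_segment(current: Segment, target_branch: str,
                 branch_lengths: dict) -> Segment:
    """Decide what plays next. Because pixels are temporally short,
    a change of game state is answered within a beat or two."""
    if target_branch != current.branch:
        # Jump to the corresponding pixel of the target archbranch,
        # preserving musical position so the transition stays continuous.
        idx = current.index % branch_lengths[target_branch]
        return Segment(target_branch, idx, current.beats)
    # No state change: simply advance (and loop) within this branch.
    idx = (current.index + 1) % branch_lengths[current.branch]
    return Segment(current.branch, idx, current.beats)

# Example: combat breaks out mid-phrase; the very next pixel already
# belongs to the combat archbranch.
lengths = {"explore": 64, "combat": 32}
seg = Segment("explore", 40, beats=0.5)
seg = next_segment(seg, "combat", lengths)
```

The key trade-off the paragraph describes is visible here: the shorter `beats` is, the faster the engine reacts, but the more pre-composed pixels the composer must supply.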

The generative music system proposed in Chapter Two offers one method for creating static musical personalities that can be attached to locations or states within a game. The system can also blend between these personalities while generating music for the blend. This is unlike a crossfade: a blend in musical style takes place, rather than simply a blend of amplitudes. The engine applies the concept of artificially improvised musical states to the state-based medium of video games. Its success lies in the degree to which a stylistic blend can take place between two musical personalities. GMGEn can also act as a tool for creating new musical personalities which can be transitioned to, or from, within the engine. GMGEn is limited in that it can only sustain musical interest for a relatively short duration (less than two minutes) before the music becomes monotonous for the listener. Because of the way it produces musical personalities, the microcosm of the music cannot be remembered over a longer period, such as multiple sittings or playthroughs. GMGEn is therefore unable to offer the composer the opportunity to exploit motivic-attachment techniques that can increase the immersive power of the music, and therefore of the game, on the player.
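The distinction between a crossfade and a stylistic blend can be made concrete with a small sketch. This is a hypothetical illustration, not GMGEn itself: the parameter names (pitch_centre, pitch_spread, density) are stand-ins for whatever a generative engine might actually randomise.

```python
# Hypothetical sketch: a "personality" is a set of generative
# parameters, and blending interpolates the parameters themselves,
# so intermediate music is genuinely intermediate in style
# (a crossfade, by contrast, only layers two amplitudes).
import random

def blend(personality_a: dict, personality_b: dict, t: float) -> dict:
    """Interpolate every parameter: t=0 plays pure A, t=1 pure B,
    and values in between improvise in a blended style."""
    return {k: (1 - t) * personality_a[k] + t * personality_b[k]
            for k in personality_a}

def generate_note(p: dict, rng: random.Random) -> int:
    """Pick a MIDI pitch whose centre and spread come from the
    (possibly blended) personality."""
    return round(rng.gauss(p["pitch_centre"], p["pitch_spread"]))

calm    = {"pitch_centre": 60.0, "pitch_spread": 2.0, "density": 1.0}
anxious = {"pitch_centre": 72.0, "pitch_spread": 7.0, "density": 4.0}

halfway = blend(calm, anxious, 0.5)
```

Because every note is drawn afresh from distributions like these, no fixed motif ever recurs, which is precisely the memorability limitation the paragraph describes.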

The intermittent music engine combines elements from the branching and generative engines designed in Chapters One and Two respectively. Hybridisation allows the long-term memorability of the branching music system to act alongside the reactive agility of the generative engine as it artificially improvises appropriate musical personalities. Because the combination removes the weaknesses of the individual engines, the intermittency engine achieves the final degree of success this paper set out to reach: it produces fully reactive music with memorable microcosms, allowing the composer to use motivic-attachment composition techniques. Coupled with the smooth blending that the generative side of the engine provides, these compositional techniques create a more reactive dynamic music that can enhance a player’s immersion in the game.

A recent article on the Imagine Games Network (IGN) website, ‘2013: The Year of the Video Game Story’, shows how quickly innovation is affecting the video-game world.[2] The article surveys many innovative models for narrative in games released across a single year: The Stanley Parable, where the narrative is seemingly unending; Papers, Please, where your position as an immigration inspector makes you the indirect narrative arbiter; and The Last of Us, where narrative communication comes as much from the body language of the characters as from the dialogue.[3] These games demonstrate that innovation in narrative interaction is currently very rich. This paper has discussed several techniques for improving the dynamic music that accompanies such narratives. This is not to illustrate any failing of past games or their musical design, but to highlight a potential next step for the future of dynamic music in modern titles. The Final Fantasy case study was chosen both for its lasting tradition and for its impact on gaming as a whole. Games immerse new players each day, and some of these players may never have experienced the same levels of immersion from other forms of media. Music’s power to affect in these contexts gives the designers of audio engines, and the composers writing music for them, a great responsibility to further progress the dynamic nature of the score. I suggest that to whatever degree we are able to affect a listener with non-dynamic music in video games, a yet greater degree is obtainable with a more versatile dynamic music.


[1] The Binding of Isaac, PC Game, Headup Games, 2011; and FTL: Faster Than Light, PC/iPad Game, Subset Games, 2012.

[2] L O’Brien, ‘2013: The Year of the Video Game Story’, Imagine Games Network (IGN), 26th October 2013, viewed 30th October 2013,

[3] O’Brien; The Stanley Parable, PC Game, Galactic Café, 2013; Papers, Please, PC Game, Lucas Pope, 2013; and The Last of Us, PlayStation 3/PlayStation 4 Game, Naughty Dog, Inc./Virtuos Ltd., 2013.
