Ludo2017 – Friday 21st April

The second day of Ludo2017 followed the same format as the first, and I had the chance to speak to more people and listen to some even more interesting talks. I got the impression that music conferences rarely have such a friendly atmosphere, and I feel that this has something to do with the subject matter. Most of us are gamers ourselves, and alongside a vested interest in musicology there’s a lot of nostalgia and emotion surrounding the games and game genres mentioned over the three days. I noticed that many of the ‘ooh’s and ‘aah’s came when recognising a piece of music from a game beloved in our childhood, or a scene memorable for its heartbreak, an iconic reveal, or simply its comedy value. I personally felt a deep tug when I saw the aerial shot of the Imperial Palace from The Elder Scrolls IV: Oblivion, accompanied by Jeremy Soule’s famously dramatic theme.

I arrived at the conference slightly late on Friday 21st, and so unfortunately missed James Tate’s paper on the video game music canon. Some interesting points were raised in the questions, however: so-called ‘AAA’ games (pronounced ‘triple A’ and meaning games with the highest budgets and production values) have trouble becoming canonised because of the sheer number of them. For example, the Assassin’s Creed franchise is now exceptionally well known and has many titles under its name, but the individual games perhaps lose sentimental value as more sequels appear. For me personally, the classic AC games are the original Assassin’s Creed and its immediate sequels (Assassin’s Creed II, Assassin’s Creed Brotherhood, and Assassin’s Creed Revelations). After that, I believe the developers sacrificed plot and script quality for gimmicks and sensationalism. The video game business is, after all, a business, and they obviously saw a formula that worked. The controls also became simpler in Assassin’s Creed III, and after that the plot became so convoluted and strayed so far from the original game that I have trouble connecting with the characters. I miss Desmond. Certainly the AC franchise as a whole is part of the video game canon, but not the individual games themselves, and this, I believe, is partly due to the number of them.

James Cook’s paper, ‘Sonic Medievalism and Cultural Identity in Fantasy Videogame’, was particularly engaging as the fantasy genre is one I have identified with for a long time. I’ve enjoyed hours and hours of Dragon Age: Origins and The Elder Scrolls series has a special place in my heart, but James spoke about a game I haven’t played: The Witcher III. He talked about how the fantasy genre takes influence from the medieval in both sound and aesthetic, such as architecture and mythical beings. Some of the musical techniques used in the game come in the form of layering; for example, the same battle music plays whenever an enemy is encountered, but if the enemy is small only the first layer of the music is heard. I assume that more layers are added according to the difficulty of the enemy. James also highlighted some stereotypes within the fantasy genre, such as the northernmost parts of the game world being associated with hardiness and deep-rooted traditions, whereas southern countries are known for their wealth, lavishness and comparative femininity. Characters from rural areas often have West Country accents, and characters from the north will often have accents from Yorkshire. Wealthy characters, if they have a British accent, will often use the Received Pronunciation found in the home counties of England. This contrast between wealth and poverty can be seen in the cities of Oxenfurt and Novigrad: although both are situated in the north, they are two of the most affluent cities in the game, and this shows in their architecture. While the rest of the game takes influence from the medieval, Oxenfurt and Novigrad clearly represent the beginning of the Renaissance and the dawn of science, art, philosophy, music, and other intellectual pursuits. The choice to give Oxenfurt and Novigrad this Renaissance aesthetic separates them from the rest of the game, giving them a perceived ‘higher’ status. I’m interested to play this game and see these contrasts for myself.
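The layering James described is a common adaptive-audio technique, sometimes called vertical layering or vertical re-orchestration. Just to make the idea concrete, here is a minimal Python sketch of how a game might decide which stems of a battle cue to fade in; the stem names, threat levels and thresholds are entirely my own invention, not how The Witcher III or its audio middleware actually works:

```python
# Minimal sketch of vertical layering: one battle cue authored as stems,
# with more stems faded in as the threat level rises. Stem names and
# thresholds are invented for illustration.

BATTLE_STEMS = [
    ("percussion_base", 0),  # always audible once combat starts
    ("strings_low",     3),  # joins for mid-tier enemies
    ("brass_and_choir", 6),  # joins for bosses and high-threat fights
]

def active_stems(threat_level: int) -> list[str]:
    """Return the stems that should be audible at a given threat level."""
    return [name for name, threshold in BATTLE_STEMS if threat_level >= threshold]

def update_battle_mix(threat_level: int) -> None:
    """Fade each stem in or out so the mix tracks the current threat."""
    audible = set(active_stems(threat_level))
    for name, _ in BATTLE_STEMS:
        action = "fade in " if name in audible else "fade out"
        print(f"{action} {name}")

if __name__ == "__main__":
    update_battle_mix(threat_level=2)  # small enemy: base layer only
    update_battle_mix(threat_level=7)  # boss fight: all three layers
```

The point is simply that one piece of music is authored as separate stems and the mix thickens as the fight gets more dangerous, which seems to be what James was describing.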

A highlight of the Friday was listening to British game composer Rob Hubbard giving advice to aspiring composers. I’m not really a composer myself but it was interesting to listen to an expert in the field on the dos and don’ts of video game composing. He criticised composers who have no musical knowledge, saying that while you do have to be experimental with your music, you also have to make musical sense. Music theory must form the foundation of your work, otherwise you risk writing weak pieces. Arguing further for the practice of theory, Hubbard stated that having theory to back up your creativity is important for motivation, as trying to keep momentum without any knowledge is a bit like trying to get a car to its destination by rolling it down a hill with the engine off. The nature of video game music is repetitive; players hear the music time and time again, loop after loop (which is probably why so many of us have such emotional ties to certain music), so the least the composer can do is make it interesting. Again, this comes from having musical knowledge and being able to vary the key, rhythms, and tonal areas. No one wants to hear annoying video game music over and over, and annoying music that relies on terrible clichés is even worse. Immersing yourself in the video game music world is also an essential part of being a composer; by simply listening to video game music all the time you will absorb the style and rules of the trade, and the more you write, the better you will become at decision-making and the faster you will compose. It really was a privilege to hear Rob talk about his work from a realistic but encouraging standpoint.

It was another successful and enlightening day at Ludo2017, and I’ve heard many varied and fascinating perspectives, from the sensory-overload ‘brostep’ music of Major League Gaming videos and their parodies to the Elvis-obsessed Kings of Fallout: New Vegas. The third day was just as exciting and I wish I could write about it all here.

For more information about the Ludomusicology Research Group, visit ludomusicology.org, where you can sign up for email updates and find contact information. If you’re a musicologist with an interest in video game music, I’d also encourage you to have a look at the SSSMG (Society for the Study of Sound and Music in Games), a new society that aims to connect researchers and professionals in the field and further the understanding of music within video games, at www.sssmg.org.

Ludo2017 – Thursday 20th April

This year I have the privilege of attending the annual Ludomusicology conference in Bath, UK, organised by the Ludomusicology Research Group, which is run by Tim Summers, Michiel Kamp, Mark Sweeney, and Melanie Fritsch. Having referenced Summers and Fritsch in my undergraduate dissertation, I’m very pleased to be here listening to some of the most current research in the video game music field.

The day starts with a friendly registration where I get a badge and a booklet containing all the abstracts for the next three days (I also meet James Cook, a musicologist and lecturer who taught briefly at my university, and it’s nice to see someone I know, especially as this is a pretty niche field at the moment). At about half past nine the party is whisked upstairs to one of the lecture rooms, where Tim greets us all formally and introduces the first speaker. It seems I needn’t have worried about my jeans being more than a little faded, as it’s all very informal; I get the feeling that of all the fields of music this is probably one of the least snobbish.

The first speaker is Blake Troise, with a paper entitled ‘Beeper Music: The Compositional Idiolect of 1-Bit Music’. It goes a little over my head as I’ve had almost no experience with music technology (give me a blank Sibelius file and I’ll be alright), but it’s hugely interesting to hear the different methods for getting around the technical limitations of 1-bit music and creating polyphony out of something that is naturally monophonic. The rest of the first session is equally technical and I start to worry that maybe I’m not cut out for this after all, but then a topic comes up that I wrote about extensively in my undergrad dissertation: diegesis.
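Before I get to diegesis, a quick aside on that 1-bit idea, because it stuck with me. I can’t reproduce the detail of Blake’s paper, but one well-known way of faking a chord on hardware that can only switch a speaker on or off is to flip very rapidly between two square waves, so the ear blends them into two simultaneous pitches. Here is my own toy Python sketch of that interleaving idea; the frequencies, slice length and filename are arbitrary, and it isn’t code from the paper:

```python
# Toy sketch of one way to fake polyphony on a 1-bit ("beeper") speaker:
# rapidly alternate between two square-wave voices so the single on/off
# channel appears to carry both pitches. All parameters are illustrative.

import struct
import wave

SAMPLE_RATE = 44100   # samples per second
SLICE = 32            # samples per time slice before switching voices

def square_bit(freq: float, t: int) -> int:
    """1-bit square wave: 1 while the waveform is in its 'high' half."""
    period = SAMPLE_RATE / freq
    return 1 if (t % period) < (period / 2) else 0

def render(freq_a: float, freq_b: float, seconds: float) -> bytes:
    """Interleave two voices slice by slice into 16-bit PCM for a WAV file."""
    frames = bytearray()
    for t in range(int(SAMPLE_RATE * seconds)):
        # Time-division multiplexing: even slices play voice A, odd slices voice B.
        freq = freq_a if (t // SLICE) % 2 == 0 else freq_b
        sample = 12000 if square_bit(freq, t) else -12000
        frames += struct.pack("<h", sample)
    return bytes(frames)

if __name__ == "__main__":
    with wave.open("beeper_chord.wav", "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(render(440.0, 554.37, seconds=2.0))  # A4 + C#5
```

It’s crude (real beeper engines switch far faster and more cleverly than this), but it shows how a single on/off channel can seem to carry more than one voice at once.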

(Just for clarification, and also for me because I literally had to Google this before typing it as I can never remember which one is which: diegetic means music or sound that is heard in-game by the characters and is part of the action, and non-diegetic means music or sound that comes from outside the game world and is only meant to be heard by the player. A good example of this is in Fallout, where the radio in the Pip-Boy is supposed to be heard by the character, whereas the general game soundtrack is only for the benefit of the player and is not heard by the character.)

Stephen Tatlow comes to the front with his paper ‘Diegesis and the Player Voice: Communication in Fantasy Reality’, arguing that when the playable character has no voice, the voice of the player, when using a microphone, essentially becomes part of the on-screen action and, in effect, becomes the character themselves. He questions why player voice, especially in MMORPGs, is not considered diegetic sound when it is most often used to direct other players to complete quests or perform certain actions. An argument against this, he states, is that a fundamental concept of RPGs is immersion: introducing an element of the real world, for want of a better phrase, tends to break immersion and can ruin the experience. My ears prick up at this point as I think about my own experience with World of Warships: I lose many more battles than I win when playing alone, but with much more experienced friends I find myself playing better and winning a lot more. Those disembodied voices are, for all intents and purposes, part of my particular game. In this way, every player will have a different experience of a multiplayer game or MMO depending on the people they are playing with, and the method and language used for communicating can either make or break the fantasy barrier.

The keynote speaker, Kenneth McAlpine, gives an hour-long lecture on chiptune music and goes into detail about his own childhood experiences with the C64 and Spectrum, and I’m now kicking myself for not writing more things down as it was honestly so interesting I forgot to write. One thing he mentioned does stick in my head: that chiptune music is now so associated with the 1980s that it’s impossible to take it out of that environment and not have it sound retro. It got me thinking about Fallout 4 (again) and the various beeps, boops, and distortions used for the Pip-Boy sounds. I wondered why the developers chose those rather than the more sophisticated sounds I’d expect from a portable computer in the far future. I concluded that it was probably because it was more important to give the player something they could relate to, and if chiptune music is so associated with times gone by, then what better way to show the passage of time than an outdated method of composition?

Until lunch, it all goes a bit technical again, and while I’m trying my hardest I do lose track a bit. Ben Jameson gives an interesting analysis of his composition ‘Construction in Metal’, a piece for electric guitar and Guitar Hero controller which challenges the authenticity of performance for both the ‘real’ guitar and the ‘fake’ guitar, and there’s a fascinating talk by Ricardo Clement about composing music using game engines. Ricardo finishes his paper at about quarter past one, and we break for lunch.

I have to post a picture of my lunch. For real. Look at this:

[Image: ludo lunch]

I’ve never been happier. It was delicious. I have no idea what was in that pie but it was amazing.

After (a frankly incredible) lunch, it’s time for the ‘Realities and Spaces’ part of the talks, the final session of today’s conference. The last paper, given by Elizabeth Hambleton and titled ‘Levels of Reality and Artifice in The Talos Principle’, is one of my favourites of the day; Elizabeth goes into detail about the methods used in The Talos Principle to convince you of the reality of the game’s overworld. She also mentions the use of a choir whenever Elohim, the disembodied voice of God that speaks to your character wherever they go, is heard. It may be that this choir affirms the position of Elohim as the benevolent creator of all things, perhaps making him seem more trustworthy; I’m reminded of church choirs and how religion is so often accompanied by music, so hearing this choir would make it seem that everything is in order. Elizabeth shares some spoilers about the ending which I won’t reproduce here, but it looks like a fascinating game for more than just the music and I’ll be purchasing it the next time Steam has a sale.

We wrapped up the day there, and went to the next room for a small drinks reception. I left quite quickly as I wasn’t staying for the dinner, but it’s been a fascinating first day. Personally I’m quite glad that lots of the music tech papers have been put on the first day, because I’m much more interested in what’s going to come tomorrow and Saturday. It’s amazing to hear studies on video game music from so many different perspectives, and I’m sure I’ll be leaving on Saturday enlightened and with many ideas for my own personal study.