On 8-9 October, the PLAY Expo event took place in Manchester, where The Sound Architect organised and put together two full days of presentations and interviews. I had the opportunity to attend and listen to the valuable insights shared by the guest speakers and interviewees, as well as to discuss and socialise with fellow game audio professionals. Overall it was a successful event and a lovely weekend, allowing passionate people to get together and exchange knowledge. Here is my brief summary of the event.
Saturday 8 October
11:00 Presentation: Ash Read – Eve: Valkyrie
The weekend started with Ash Read, sound designer at CCP working on Eve: Valkyrie, telling us about his experience with VR audio.
We were first walked through some of the ways in which VR audio differs from ‘2D’ or ‘TV’ audio, and briefly what the ‘sonic mission’ consists of in this context. In Eve: Valkyrie specifically, a chaotic space battle environment where a lot is happening, constantly and everywhere, the role of audio includes:
- Keeping the pilot (player) informed
- Keeping the pilot (player) immersed
In a visually saturated environment, audio is a great way to maintain focus on the important gameplay elements and help the player remain alert and immersed.
VR audio also involves a far greater degree of listener movement, so techniques need to be developed to implement audio in a context where the listener’s head doesn’t stay still. One of these techniques involves HRTFs (Head Related Transfer Functions).
Put shortly, HRTFs help the listener locate where a sound is coming from and refine its 3D positioning, but they also portray more accurately the subtle modifications a sound undergoes as it travels. For instance, the distance and positioning of an object is expressed sonically not only through attenuation, but also by introducing the sound reflections of a specific environment and by creating a sense of elevation.
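As a rough illustration of the kind of cues an HRTF encodes, here is a toy sketch of interaural time and level differences plus distance attenuation. To be clear, this is a textbook approximation of my own, not CCP's implementation; real HRTFs are measured filter sets, far richer than this.

```cpp
// Toy illustration of basic binaural cues: interaural time difference (ITD),
// interaural level difference (ILD) and inverse-distance attenuation.
#include <cmath>
#include <cstdio>

struct BinauralCues {
    float leftGain, rightGain;   // interaural level difference
    float itdSeconds;            // interaural time difference
};

// azimuthRad: 0 = straight ahead, +pi/2 = hard right.
BinauralCues computeCues(float azimuthRad, float distanceMeters) {
    const float headRadius = 0.0875f;   // average head radius, metres
    const float speedOfSound = 343.0f;  // m/s

    BinauralCues c;
    // Woodworth's ITD approximation for a spherical head.
    c.itdSeconds = (headRadius / speedOfSound) *
                   (azimuthRad + std::sin(azimuthRad));

    // Crude level difference: attenuate the far ear as the source moves aside.
    float pan = std::sin(azimuthRad);                    // -1 (left) .. +1 (right)
    float dist = 1.0f / std::fmax(distanceMeters, 1.0f); // inverse-distance law
    c.leftGain  = dist * (pan > 0 ? 1.0f - 0.7f * pan : 1.0f);
    c.rightGain = dist * (pan < 0 ? 1.0f + 0.7f * pan : 1.0f);
    return c;
}

int main() {
    // A source slightly to the right, 10 m away.
    BinauralCues c = computeCues(0.5f, 10.0f);
    std::printf("L %.3f  R %.3f  ITD %.6f s\n", c.leftGain, c.rightGain, c.itdSeconds);
}
```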
We then learned how audio in VR may help reduce the motion sickness often associated with VR: it supports the visuals in compensating for the feeling of disconnect that is partly responsible for that sickness.
Since VR usually means playing with headphones on, the Valkyrie audio team decided to include some customisable audio options for the player, such as an audio enhancement slider, which helps bring focus onto important sounds.
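One way such a slider could be wired into the mix is as a simple interpolation between a neutral and a focused state. The bus names and gain values below are my own assumptions, purely to show the idea; the actual Valkyrie mixer is not public.

```cpp
// Hypothetical "audio enhancement" slider: pushes gameplay-critical sounds
// up and ducks the ambient wash as the slider moves from 0 to 1.
#include <cstdio>

struct MixSnapshot { float criticalGain; float ambienceGain; };

float lerp(float a, float b, float t) { return a + (b - a) * t; }

// slider in [0, 1]: 0 = neutral mix, 1 = maximum focus on important sounds.
MixSnapshot applyEnhancement(float slider) {
    MixSnapshot m;
    m.criticalGain = lerp(1.0f, 1.5f, slider);  // boost warnings, comms, etc.
    m.ambienceGain = lerp(1.0f, 0.6f, slider);  // duck the space-battle wash
    return m;
}

int main() {
    MixSnapshot m = applyEnhancement(0.8f);
    std::printf("critical x%.2f, ambience x%.2f\n", m.criticalGain, m.ambienceGain);
}
```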
The sound design of Valkyrie is intended to be rugged, to convey the raw energy of the game, and to be strong on detail. With that in mind, the team is constantly aiming to improve the audio alongside the game updates. For instance, they plan to breathe more life into the cockpit by focusing on its resonance and enhancing the deterioration effects.
Ash’s presentation concluded with a playback of the recently released launch trailer for PS VR, the audio for which was beautifully done by Sweet Justice Sound.
You can watch the trailer here: https://www.youtube.com/watch?v=AZNff-of63U
12:00 Presentation: Simon Gumbleton – PlayStation VR Worlds
Technical sound designer Simon Gumbleton then followed to tell us about the audio design and implementation in Sony’s PlayStation VR Worlds.
The VR Worlds game is rather like a collection of bespoke VR experiences, each presenting a different approach to player experience. Over the course of developing those various experiences, the dev and audio teams experimented, learned, and shaped their approaches, while exploring uncharted territory and encountering new challenges.
1st experience: Ocean Descent
Being the first experience they worked on, it laid the foundation for their work and allowed for experimentation and learning. The audio team developed techniques such as the Focus System, where, after an object has been in the listener's focus for a short amount of time, its details become accentuated. You could see it as a game audio implementation of the cocktail party effect.
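A minimal sketch of that dwell-then-fade behaviour might look like the following. The class, dwell threshold and fade rate are my own inventions to illustrate the concept, not Sony's code.

```cpp
// Sketch of a "focus system": once an object has stayed under the player's
// gaze for a dwell time, a detail layer fades in on top of its base sound.
#include <algorithm>
#include <cstdio>

class FocusEmitter {
    float focusedFor = 0.0f;   // seconds the object has been in focus
    float detailGain = 0.0f;   // 0..1 gain of the accentuated detail layer
public:
    void update(bool inFocus, float dt) {
        focusedFor = inFocus ? focusedFor + dt : 0.0f;
        const float dwell = 1.5f;           // assumed dwell threshold, seconds
        const float fadePerSecond = 0.5f;   // assumed fade rate
        float target = (focusedFor > dwell) ? 1.0f : 0.0f;
        if (detailGain < target)
            detailGain = std::min(target, detailGain + fadePerSecond * dt);
        else if (detailGain > target)
            detailGain = std::max(target, detailGain - fadePerSecond * dt);
    }
    float gain() const { return detailGain; }
};

int main() {
    FocusEmitter fish;
    for (int frame = 0; frame < 180; ++frame)   // 3 s at 60 fps, gaze held
        fish.update(true, 1.0f / 60.0f);
    std::printf("detail layer gain after 3 s of focus: %.2f\n", fish.gain());
}
```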
They also developed a technique for the player's breathing: breathing sounds are introduced at first and then gradually pulled out once the player has acclimatised to the environment, by which point they have become somewhat subconscious.
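Reduced to its essence, that is a gain curve over session time; the hold and fade durations below are assumptions of mine.

```cpp
// Sketch of the acclimatisation idea: breathing starts audible and slowly
// fades out as the player settles into the scene.
#include <cmath>
#include <cstdio>

// Returns breathing gain given time spent in the experience, in seconds.
float breathingGain(float timeInScene) {
    const float holdTime = 30.0f;   // assumed: full volume for the first 30 s
    const float fadeTime = 60.0f;   // then fade out over the next minute
    if (timeInScene <= holdTime) return 1.0f;
    float t = (timeInScene - holdTime) / fadeTime;
    return std::fmax(0.0f, 1.0f - t);
}

int main() {
    for (float t = 0.0f; t <= 120.0f; t += 30.0f)
        std::printf("t=%3.0f s  gain=%.2f\n", t, breathingGain(t));
}
```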
Similarly, they explored ways to implement avatar sounds, and found that while these usually reinforce the player's presence in the world, in VR there is a fine line between reinforcing and distracting. In short, any sound heard needs to be reflected by a movement actually seen in game. This means you would only hear avatar sounds tied to head movements, which have a direct impact on the visuals, as opposed to body movements, which you cannot see.
2nd experience: The London Heist
In this experience, there was more opportunity to experiment with interactive objects: designing believable audio feedback and improving the tactile, one-to-one interactions.
To do so, they implemented the sound of every interactable object in multiple layers. For instance, a drawer opening isn't recorded as one sound and then simply played back whenever that drawer is opened in game. The drawer can be interacted with in many ways, so its sounds are integrated with a combination of parameters and layers in order to play back an accurate sonic response for the type of movement generated by the player's actions.
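As an illustration of that layered approach (the layer names and thresholds here are invented for the example, not taken from the game), the mix of a drawer's components could be driven directly by how the player moves it:

```cpp
// Sketch of a multi-layer interactable: the drawer's sound is assembled at
// runtime from components whose gains track the player's actual movement,
// instead of one canned "drawer open" sample.
#include <cmath>
#include <cstdio>

struct DrawerVoice {
    float slideGain;   // continuous wood-on-wood slide, follows speed
    float rattleGain;  // contents rattling, kicks in on fast movement
    float thumpGain;   // end-stop thump, fires on abrupt stops
};

DrawerVoice mixDrawer(float velocity, float acceleration) {
    DrawerVoice v;
    float speed = std::fabs(velocity);
    v.slideGain  = std::fmin(speed / 0.5f, 1.0f);  // full slide at 0.5 m/s
    v.rattleGain = speed > 0.3f ? std::fmin((speed - 0.3f) / 0.4f, 1.0f) : 0.0f;
    v.thumpGain  = std::fabs(acceleration) > 8.0f ? 1.0f : 0.0f;  // hard stop
    return v;
}

int main() {
    // Drawer yanked open quickly and hitting its end stop.
    DrawerVoice v = mixDrawer(0.45f, -9.5f);
    std::printf("slide %.2f  rattle %.2f  thump %.2f\n",
                v.slideGain, v.rattleGain, v.thumpGain);
}
```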
Another example is the cigar smoking being driven by the player's breathing. The microphone input communicates with the game and drives the interaction with the cigar for an optimally immersive experience.
Detailed character foley also helps bring characters to life. Every detail is captured and realised, down to counting the number of rings on a character's hand and implementing their movement sounds accordingly.
Dynamic reverb gives the player information about the space and the sounds generated in it. A detailed and informative environment is created with the help of physically based reflection tails, as well as material-dependent filters, all processed at run time. It's all about making the environment feel more believable.
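Material-dependent filtering can be as simple as mapping each surface to a low-pass cutoff applied to its reflections. The materials and cutoff values below are illustrative assumptions of mine, not data from the actual game:

```cpp
// Sketch of material-dependent reflection filtering: each reflection tail is
// low-pass filtered according to the surface it bounced off.
#include <cstdio>

enum class Material { Concrete, Wood, Curtain, Glass };

// Returns a low-pass cutoff (Hz) for a reflection off this surface.
float reflectionCutoff(Material m) {
    switch (m) {
        case Material::Concrete: return 8000.0f;  // hard, bright reflections
        case Material::Glass:    return 7000.0f;
        case Material::Wood:     return 4000.0f;
        case Material::Curtain:  return 1200.0f;  // soft, heavily absorbent
    }
    return 8000.0f;
}

int main() {
    std::printf("curtain reflection LPF at %.0f Hz\n",
                reflectionCutoff(Material::Curtain));
}
```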
3rd experience: Scavengers Odyssey
This experience was developed later, so they were able to take their learnings from the previous experiences and apply them, and even push the limits further.
For instance, since this experience takes place in space and there is no real 'room' to generate a detailed reflection-based reverb, they focused on implementing the sound as if it were heard through the cockpit.
Simon also emphasised how important detail is: in VR, the player will subconsciously have very high expectations of it. This is achieved through lots of layering and many discrete audio sources within the world. Such detail inevitably brings technical challenges relating to the performance of the audio engine, which requires a lot of optimisation work.
The ambiences are implemented fully dynamically: textures are created without any loops and are constantly evolving in game.
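A common way to build a loopless, evolving ambience (and plausibly what is meant here, though this sketch is my own assumption rather than their pipeline) is to schedule short one-shot grains at randomised intervals, pitches and positions:

```cpp
// Sketch of a loopless ambience: instead of looping a long file, one-shot
// grains from a variation pool are scheduled with randomised parameters.
#include <cstdio>
#include <random>

struct GrainEvent { float startTime, pitch, pan; int sampleIndex; };

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> interval(2.0f, 6.0f);  // s between grains
    std::uniform_real_distribution<float> pitch(0.9f, 1.1f);
    std::uniform_real_distribution<float> pan(-1.0f, 1.0f);
    std::uniform_int_distribution<int> sample(0, 7);             // 8-variant pool

    float clock = 0.0f;
    for (int i = 0; i < 5; ++i) {   // schedule the next five grains
        clock += interval(rng);
        GrainEvent g{clock, pitch(rng), pan(rng), sample(rng)};
        std::printf("t=%5.2f s  sample %d  pitch %.2f  pan %+.2f\n",
                    g.startTime, g.sampleIndex, g.pitch, g.pan);
    }
}
```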
In terms of spatialisation, they tied all the SFX to the corresponding VFX within the world for optimal sync and highly accurate positioning.
They also emphasised important transitions in the environment by adding special transition emitters in critical places.
Music
As for the music, they experimented with its positioning, deciding whether it should be placed inside the world or not, and mostly proceeded with a quad-array implementation in passive environments.
They also had some opportunity to play with the unique VR ability to look up and down: in Ocean Descent, for instance, adaptive music accentuates the feeling of darkness and depth when looking down in the water versus brightness and light when looking up.
The Hub
This interactive menu is an experience in itself. It is the first space you are launched into when starting the game, and it sets up expectations for the rest. The team needed to build a sense of immersion right away, and put the same level of detail into the Hub as anywhere else in order to maintain immersion when transitioning from one experience to another.
Finally, this collection of experiences needed to remain coherent overall and stay smooth through every transition. This was accomplished through rigorous mixing, and by establishing a clear standard for loudness and dynamics applied throughout the entire game.
PlayStation VR Worlds is due to be released on 13 October 2016; you can watch the trailer here: https://www.youtube.com/watch?v=yFnciHpEOMI
13:00 Interview: Voice Actor, Alix Wilton Regan – Dragon Age, Forza, Mass Effect, LBP3
Alix Wilton Regan told us about voice acting in video games in the form of an interview led by Sam Hughes.
Thoughts were shared about career paths and about working in games versus television, along with some tips for aspiring actors.
Alix Wilton Regan has started a fundraising campaign, a charitable initiative to help refugees in Calais, check it out!
https://gogetfunding.com/play-4-calais/
14:00 Interview: Composer, David Housden – Thomas Was Alone
Another interview followed with David Housden, composer on Thomas Was Alone and Volume. It was held in a similar way, starting with some thoughts on career progression, continuing with details about his work on past and current titles, and concluding with advice on freelancing.
15:00 Presentation: Composer & Sound Designer, Matt Griffin – Unbox
Composer Matt Griffin then presented how the sound design and music for the game Unbox was implemented using FMOD.
One of the main audio goals for this entertaining game was to make the audio interactive and fun. To do so, Matt found ways to make the menus generative and sometimes reactive to timing, as with the menu music.
We were shown the FMOD project and its structure to illustrate this dynamic implementation. For the menu music, the use of transitions, quantizations and multi sound objects was key.
For the main world music, each NPC has its own layer of music, linked to a distance parameter. Other techniques were used to make the music dynamic, such as a 'challenge' music that gives the player feedback on progression and timing, and multiplayer music that doubles in tempo during a 30-second countdown.
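The distance-linked layers can be pictured as a volume curve per NPC. The thresholds in this sketch are my own, standing in for whatever curve the actual FMOD project drives with its distance parameter:

```cpp
// Sketch of distance-linked music layers: each NPC's musical layer fades up
// as the player approaches, mirroring a distance parameter on a volume curve.
#include <cstdio>

// Returns the layer volume for an NPC given its distance from the player.
float npcLayerVolume(float distance) {
    const float nearDist = 5.0f;   // fully audible inside this radius (assumed)
    const float farDist  = 25.0f;  // silent beyond this radius (assumed)
    if (distance <= nearDist) return 1.0f;
    if (distance >= farDist)  return 0.0f;
    return 1.0f - (distance - nearDist) / (farDist - nearDist);
}

int main() {
    const float distances[] = {2.0f, 10.0f, 20.0f, 30.0f};
    for (float d : distances)
        std::printf("distance %4.1f  layer volume %.2f\n", d, npcLayerVolume(d));
}
```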
In terms of sound design, the 'unbox' sound presented a challenge, as it is played very frequently throughout the game. To keep it from becoming too repetitive, it was implemented using multiple layers of multi sound objects, along with pitch randomisation on its various components and a parameter tracking how many 'unboxes' have been heard so far.
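In engine-agnostic terms, that anti-repetition recipe might look like the following; the variant counts and pitch range are assumptions for illustration, not values from the Unbox project:

```cpp
// Sketch of keeping a frequently played sound fresh: random variant and pitch
// per play, plus a play counter exposed to the audio engine as a parameter.
#include <cstdio>
#include <random>

class UnboxSound {
    int playCount = 0;
    std::mt19937 rng{std::random_device{}()};
public:
    void play() {
        std::uniform_real_distribution<float> pitch(0.95f, 1.05f);
        std::uniform_int_distribution<int> variant(0, 3);   // 4 variants (assumed)
        ++playCount;
        std::printf("unbox #%d: variant %d, pitch %.3f\n",
                    playCount, variant(rng), pitch(rng));
        // playCount would be exposed as a parameter, letting the mix evolve
        // the more 'unboxes' the player has heard.
    }
};

int main() {
    UnboxSound s;
    for (int i = 0; i < 3; ++i) s.play();
}
```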
Extensive work also went into the box impact sounds on various surfaces, taking velocity into account.
For the character sounds, a sort of indecipherable blabber, individual syllables were recorded and then assembled in game using FMOD's Scatterer sound object.
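A minimal sketch of that idea, chaining random syllables at randomised intervals, might look like this. The syllable pool and timings are invented; in the real project this sequencing is what the Scatterer module handles.

```cpp
// Sketch of syllable blabber: pre-recorded syllables are chained at runtime
// into nonsense speech, one short utterance at a time.
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> syllables = {"ba", "ko", "mee", "ta", "lu", "po"};
    std::mt19937 rng(7);
    std::uniform_int_distribution<size_t> pick(0, syllables.size() - 1);
    std::uniform_real_distribution<float> gap(0.05f, 0.15f);  // s between syllables

    float t = 0.0f;
    std::printf("blabber: ");
    for (int i = 0; i < 6; ++i) {   // one short utterance
        std::printf("%s(%.2f s) ", syllables[pick(rng)].c_str(), t);
        t += gap(rng);
    }
    std::printf("\n");
}
```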
16:00 Interactive Interview: Martin Stig Andersen – Limbo
Martin Stig Andersen, composer and sound designer on Limbo and Inside was then interviewed by Sam Hughes.
As in the previous interviews, questions about career paths came first, recounting how Martin started in instrumental composition, shifted towards electroacoustic composition (musique concrète), and later moved into experimental short films.
His work often speaks of realism and abstraction, where sound design and music combine to form one holistic soundscape.
Martin explained how he was able to improve his work on audio for Inside compared to Limbo as he was brought onto the project at a much earlier stage, and was able to tackle larger tech issues, such as the ‘death-respawn’ sequence.
More info on the death-respawn sequence in this video: http://www.gdcvault.com/play/1023731/A-Game-That-Listens-The
More details were provided about the audio implementation in Inside, for instance the way the sound of the shock wave is filtered depending on the player's current cover status, or how audio is used to communicate to the player how well they are progressing through a puzzle.
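The cover-dependent filtering could be reduced to something like the following; the cover states, cutoffs and gains are my guesses to illustrate the principle, not Playdead's actual values:

```cpp
// Sketch of cover-dependent filtering: the shock wave is low-pass filtered
// and attenuated more heavily the better the player's current cover.
#include <cstdio>

enum class Cover { Exposed, Partial, Full };

struct ShockwaveMix { float cutoffHz; float gain; };

ShockwaveMix shockwaveFor(Cover c) {
    switch (c) {
        case Cover::Exposed: return {16000.0f, 1.0f};  // full blast
        case Cover::Partial: return { 3000.0f, 0.7f};
        case Cover::Full:    return {  600.0f, 0.4f};  // muffled rumble
    }
    return {16000.0f, 1.0f};
}

int main() {
    ShockwaveMix m = shockwaveFor(Cover::Full);
    std::printf("cutoff %.0f Hz, gain %.2f\n", m.cutoffHz, m.gain);
}
```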
We also learned more about the mysterious recording techniques used for Inside involving a human skull and audio transducers.
More details here: http://www.gamasutra.com/view/news/282595/Audio_Design_Deep_Dive_Using_a_human_skull_to_create_the_sounds_of_Inside.php
17:00 Audio Panel: Adam Hay, David Housden, Matt Griffin
The first day ended with a panel featuring the above participants, who shared some thoughts on game audio in general, freelancing, and what will come next.
Sunday 9 October
11:00 Interview & Gameplay: Martin Stig Andersen – Limbo
The day started by inviting Martin Stig Andersen back to the stage; the interview covered roughly the same ground as the previous day's.
12:00 Interview: Nathan McCree, Composer & Audio Designer
At midday, the audience swelled as the composer of the first three Tomb Raider games was interviewed by Sam Hughes.
Questions about career progression were followed by some words about the score, and how Nathan came to compose a melody that he felt truly represented the character.
The composer also announced The Tomb Raider Suite, a celebration of Tomb Raider's 20th anniversary through music, in which his work will be played by a live orchestra before the end of the year.
More details here:
http://tombraider.tumblr.com/post/143228470745/pax-east-tombraider20-announcement-the-tomb
13:00 Presentation: Voice Actor, Jay Britton – Fragments of Him, Strife
Next, voice actor Jay Britton gave us a lively presentation on the work of a voice actor in video games, involving a demo of a recording session. He gave us some advice on how to get started as a voice actor in games, including:
- There is no one single path
- Start small, work your way up
- Continually improve your skills
- Network
- Get trained in videogame performance
- Get trained in motion capture and facial capture
- Consider on-screen acting
- Speak to indie devs
- Get an agent
He followed by giving advice on how to create new character voices with your own voice, along with some convincing demonstrations.
14:00 Interview: Audio Designer, Adam Hay – Everybody’s Gone To The Rapture
Sound designer Adam Hay was then interviewed about his work on both Dear Esther and Everybody’s Gone To The Rapture.
He mentioned how the narrative journey is of crucial importance in both these games, and how the sound helps the player progress through them.
16:00 Audio Panel: Simon Gumbleton, Ash Read, David Housden
Finally, the weekend ended (before giving the stage to live musicians) with a VR audio panel, which gave us some additional insight into the challenges surrounding VR audio, such as the processing power involved in sound spatialisation, and how everything has to be thought through in a slightly different way than usual.
Voilà, a very busy weekend full of interesting insights and advice. A massive thanks to The Sound Architect crew for putting this together; hopefully it can happen again next year! 🙂