I just came back from Brighton for the Develop: Brighton game dev conference. I was there only on Thursday 14 July for the Audio Day, and here are my thoughts and a brief summary.
The Audio Track was incredible, lining up wonderful speakers with so much to say!
The day started at 10 am with a short welcome and intro from John Broomhall (MC for the day), and a screening of an excerpt from the Beep movie, to be released this summer. Jory Prum was meant to give the introduction but very sadly passed away recently following a motorcycle accident.
The excerpt therefore served as a tribute, showing him in his studio talking about his sound design toys.
10.15 am – Until Dawn – Linear Learnings For Improved Interactive Nuance
The first presentation was given by Barney Pratt, Audio Director at Supermassive Games, telling us about the audio design and integration in their game Until Dawn.
We learned about branching narrative and adapting film editing techniques to cinematic interactive media, dealing with determinate vs. variable pieces of scenario.
Barney gave us some insight into how they created immersive character Foley using procedural, velocity-sensitive techniques for footsteps and surfaces, knees, elbows, wrists and more. For the determinate parts, the procedural system was overlaid with long wav files per character, giving the characters' movements a remarkably realistic feel.
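To illustrate the idea only (the sample names, surface keys and velocity tiers below are hypothetical, not Supermassive's actual system), a velocity-sensitive foley trigger might look something like this:

```python
import random

# Hypothetical sample bank, keyed by body part and surface, with
# intensity tiers selected from the contact velocity.
FOLEY_BANK = {
    ("footstep", "wood"): {"soft": ["ft_wood_soft_01", "ft_wood_soft_02"],
                           "hard": ["ft_wood_hard_01", "ft_wood_hard_02"]},
    ("knee", "carpet"):   {"soft": ["kn_carp_soft_01"],
                           "hard": ["kn_carp_hard_01"]},
}

def trigger_foley(body_part, surface, velocity, rng=random):
    """Pick an intensity tier from contact velocity (m/s), randomise
    within the tier, and scale playback gain with velocity."""
    tier = "hard" if velocity > 1.5 else "soft"   # threshold is an assumption
    candidates = FOLEY_BANK[(body_part, surface)][tier]
    gain = min(1.0, velocity / 3.0)               # clamp to unity gain
    return rng.choice(candidates), gain
```

The appeal of a system like this is that animation-driven velocities produce continuously varied foley without hand-placing every contact sound.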
He then shared a bit about their dialog mixing challenges and solutions: since a center-speaker dialog mix and full surround panning didn't offer exactly what they were looking for, they devised a 50% center-biased panning system, which seems to have been successful (we heard a convincing excerpt from the game comparing these strategies). Put simply, this 'soft panning' technique provided the realism, voyeurism and immersion required by the genre.
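The talk didn't go into the exact gain law, but one way to picture a 50% center-biased pan is as a blend between a constant-power L/R pan and a signal locked to the center speaker (the `bias` parameter and the specific formula here are my assumptions, not Supermassive's implementation):

```python
import math

def soft_pan(pan, bias=0.5):
    """Return (L, C, R) gains for a mono dialog signal.

    A fraction `bias` of the signal is locked to the center speaker;
    the remainder follows a constant-power left/right pan.
    pan: -1.0 (full left) .. +1.0 (full right).
    """
    theta = (pan + 1.0) * math.pi / 4.0   # map pan to 0..pi/2
    left, right = math.cos(theta), math.sin(theta)
    return ((1.0 - bias) * left,          # left speaker gain
            bias,                         # center speaker gain (fixed)
            (1.0 - bias) * right)         # right speaker gain
```

With `bias=0.0` this degenerates to ordinary surround panning, and with `bias=1.0` to a traditional center-locked dialog mix, so the single parameter spans the two strategies they compared.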
Finally, Barney told us about their collaboration with composer Jason Graves to achieve incredible emotional nuance, using techniques once again inspired by film editing.
For instance, they avoided stems, states and randomisation in order to preserve the cinematic quality of the game, as opposed to the techniques typically used for an open-world type of game.
The goal was to generate a visceral response with the music and sound effects. After watching a few excerpts, even in this analytic and totally non-immersive context, I can tell you: they succeeded. I jumped a few times myself and, although (or maybe because) the audio for this game is truly amazing, I will never play it, as doing so would prevent me from sleeping for weeks to come…
11.20 am – VR Audio Round Table
Then followed a round table about VR audio, featuring Barney Pratt (Supermassive Games), Matt Simmonds (nDreams) and Todd Baker (Freelance, known for Land’s End).
They discussed 3D positioning techniques, the role and place of the music, as well as HRTF & binaural audio issues. An overall interesting and instructive talk providing a well appreciated perspective on VR audio from some of the few people among us who have released a VR title.
12.20 pm – Creating New Sonics for Quantum Break
The stage then belonged to Richard Lapington, Audio Lead at Remedy Games. He revealed the complex audio system behind Quantum Break‘s Stutters – those moments during gameplay when time is broken.
The team faced several design challenges: the Stutters needed a strong sonic signature, and had to be instantly recognisable and convincing. To reach those goals, they chose to rely on the visual inspiration the concept and VFX artists were using as the driving force for the audio design.
Then, once they came up with a suitable sound prototype, they reverse-engineered it and extrapolated an aesthetic that could be built into a system.
This system turned out to be an impressive collaboration between the audio and VFX team, where VFX was driven by real time FFT analysis operated by a proprietary plugin. This, paired with real time granular synthesis, resulted in a truly holistic experience. Amazing work.
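Remedy's plugin is proprietary, but the core idea of extracting per-band spectral energy from audio to drive a visual parameter can be sketched with a naive DFT (the band count and simple averaging here are my own assumptions, purely for illustration):

```python
import cmath

def dft_band_energies(samples, num_bands=4):
    """Naive DFT over one audio frame, then average magnitude per
    frequency band -- the kind of per-band envelope a VFX system
    could map onto particle intensity, distortion amount, etc."""
    n = len(samples)
    mags = []
    for k in range(n // 2):                      # positive frequencies only
        acc = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(acc) / n)
    band = max(1, len(mags) // num_bands)
    return [sum(mags[i:i + band]) / band
            for i in range(0, band * num_bands, band)]
```

A real-time system would use an FFT (O(n log n)) and a windowed, overlapping analysis rather than this O(n²) loop, but the band-energy output is the same kind of signal.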
// lunch //
I went to take a look at the expo during lunch time and tried the Playstation VR set with the game Battlezone from Rebellion.
I only tried it for a few minutes so I can't give a full review, but I enjoyed the experience, and the game was visually impressive. Unfortunately I couldn't get a clear listen as the expo was noisy, but I had enough of a taste to understand all that could be done with audio in VR and the challenges it can pose. I'd love to give it another try…
2 pm – The Freelance Dance
The afternoon session started off with a panel featuring Kenny Young (AudBod), Todd Baker (Land’s End), Rebecca Parnell (MagicBrew), and Chris Sweetman (Sweet Justice).
They shared their respective experiences as freelancers and compared the freelance VS in-house position and lifestyle.
The moral of the story was that both have their pros and cons, but mostly they all agreed that if you want to be a freelancer, it’s a great plus to have some in-house experience first, and not start on your own right out of uni.
3 pm – Assassin's Creed Syndicate: Sonic Navigation & Identity In Victorian London
Next on was Lydia Andrew, Audio Director at Ubisoft Quebec.
She explained how they focused on the player experience through audio in Assassin's Creed Syndicate, and collaborated with composer Austin Wintory on an immersive, seamless soundtrack that gives the universe its identity.
They were careful to give a sonic identity to each borough of Victorian London, both through sound (ambiences, SFX, crowds, vehicles) and music. They researched Victorian music to suit the different boroughs and sought the advice of Professor Derek Scott to reach the highest possible historical accuracy.
It was a very detailed presentation of the techniques used to blend diegetic and non-diegetic music, given by a wonderfully spirited and inspiring Audio Director.
4.15 pm – Dialogue Masterclass – Getting The Best From Voice Actors For Games
Mark Estdale followed with a presentation on how to direct a voice acting session, and how to give the actor the best possible context to improve performance.
He shared some neat tricks, such as 'show, don't tell': use game assets to describe, give location, and respond to the actor's lines. For instance, use already-recorded dialogue to reply to the actor's lines, play background ambience, play accompanying music, and show the visual context. Even use spot effects if the intention is to create a surprise.
5.15 pm – Stay On Target – The Sound Of Star Wars: Battlefront
This talk was outstanding. Impressive. Inspiring. Brilliant way to end the day of presentations. A cherry on the cake. Cookies on the ice cream.
You could practically see the members of the audience salivating with envy as David Jegutidse described the time he spent with Ben Burtt, hearing the master talk about his tools and watching him play with them, including the ancient analog synthesizer that was used to create the sounds of R2-D2.
He and Martin Wöhrer described how they adapted the Star Wars sounds to fit this modern game.
They collaborated with Skywalker Sound and got audio stems directly from the movies, as well as a library of sound effects and additional content on request.
When designing new material, they were completely devoted to maintaining the original style and tone, and opted for organic sound design.
Among other things, this meant retro processing: playback-speed manipulation, worldising and ring modulation, just like back in the day.
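As a toy illustration of two of those retro techniques (the carrier frequency and the crude nearest-neighbour resampling are my own choices for the sketch, not what the Battlefront team actually used):

```python
import math

def ring_modulate(samples, carrier_hz, sample_rate=44100.0):
    """Classic ring modulation: multiply the input by a sine carrier,
    producing sum and difference frequencies -- the metallic, retro
    timbre of early electronic sound design."""
    return [s * math.sin(2.0 * math.pi * carrier_hz * t / sample_rate)
            for t, s in enumerate(samples)]

def vari_speed(samples, ratio):
    """Crude playback-speed change by nearest-neighbour resampling;
    ratio > 1 speeds playback up (and raises pitch), ratio < 1 slows
    it down, just like running tape at a different speed."""
    return [samples[int(i * ratio)]
            for i in range(int(len(samples) / ratio))]
```

Worldising, the third technique mentioned, is harder to sketch in code: it means re-recording a sound played back through real speakers in a real space, so the acoustics are captured organically rather than simulated.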
It was a truly inspiring talk, giving a lot to think about to anyone working with an IP and adapting sound design from existing material and/or style and tone.
The day ended with an open-mic session that called Todd Baker, Lydia Andrew, Rebecca Parnell, Chris Sweetman and Mark Estdale back to the stage to discuss the future of game audio.
Overall an incredible day where I got to meet super interesting and wonderful people, definitely looking forward to next year!! 🙂