Interviews
Improved Tech puts Greater Responsibility on Content
Barney Pratt, Audio Director at Supermassive Games, took time out of his hectic schedule to talk to Behind the Glass about his work. His recent project, House of Ashes, is the third installment of the Dark Pictures franchise. Other titles include Harry Potter, Rush of Blood, Until Dawn and of course Man of Medan and Little Hope.
Whilst working on a Harry Potter movie, Barney was receiving emails from EA asking for sound effects of key moments. Diligently obliging served him well. “When EA was under pressure from Warner Brothers to make the Harry Potter games more cinematic, I got a call from my soon-to-be audio director back then who asked if I would be interested in a dialogue designer role on the franchise. I was interviewed, got the job and never looked back.” Whilst at EA he was involved with two Harry Potters, a Trivial Pursuit, and dialogue pipeline innovation resulting in a tool that is still in use today at BioWare and DICE.
His work at Supermassive has been massively varied with different engines, platforms and audio experiences. “We have developed games for the PS Move and launch titles for PS VR called Rush of Blood and Tumble VR. I then moved into more cinematic / narrative driven projects where one of our biggest selling games Until Dawn created opportunities to develop a more cinematic approach. More recently I have been continuing cinematic work on the first three of our Dark Pictures Anthology titles, Man of Medan, Little Hope and House of Ashes. We are already working on the fourth in the Anthology with lots more planned. There is always so much going on at Supermassive and we are always looking for ways to innovate on each platform or format, so watch this space for more projects soon.”
So how does he approach a project? “A very well-known film sound designer said something to me many moons ago and I keep referencing it – ‘look for the opportunities’. As a high-level ‘elevator pitch’ it’s about information, understanding, playing to the team’s strengths, identifying areas for innovation, developing the unique project style and of course working within the proposed schedule. It’s important to identify the core of the game design as early as possible; its USP, the reason why that story is being told. We pore over the best information we have depending on the stage of development – design docs, storyboards or prototype levels – and start conversations with the game director and lead designers to grow the understanding. We look for key narrative themes, characters, locations, and wider story arcs that we need to follow to help emphasize the narrative and define the style.”
“We are making large projects, so we need to be streamlined in terms of tech and process, and since we are making these projects in relatively short timeframes we have the benefit of making cross-franchise decisions quickly – what links we might want to create between different stories or characters, what to change to keep it fresh, or what to leave exactly the same to make something familiar or identifiable for the players. The stories are so different for each of the Dark Pictures that it is my responsibility to develop a similarly unique soundscape to fully leverage these opportunities. Each title embodies different horror genres and sub-genres which in turn lend themselves to different audio expectations, so we make sure to create a completely different aesthetic for every game.”
Which project has been his most challenging? “That’s a tough one; every project has a little maze of challenges and opportunities. Our last game House of Ashes is fresh in my mind and had some key technical challenges around spatial audio – it was the first game in which we had gone ‘full spatial’: volumetric emitters, emitter and reverb portalling, diffraction, directional early reflections. It sounded amazing, perfectly suited to the caves and tunnels, but of course this added processing came at a cost to the CPU budget, and with such a busy high-action game, multiple characters and creatures, we knew it would be pushing the audio thread pretty hard.”
“To manage the increased CPU load we developed various culling and prioritisation systems, allowing us to maintain quality where it was most needed while sacrificing processing elsewhere. We added flexibility within the spatial system to fluctuate its quality, to activate and deactivate certain aspects, all with the aim of keeping maximum quality throughout while remaining seamless in terms of presentation to the player. It was a real team effort and, with the atmospheres getting so many positive reviews in the press, the team did a great job. More recently of course, like many developers, we shipped games during the pandemic and its associated lockdowns. We had to respond very quickly and the audio team really came together, introducing new tech and processes to support reviews and remote working. Testament to those innovations in practice was a NAVGTR audio award for Little Hope, a game mostly developed during the pandemic.”
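The culling and prioritisation idea Pratt describes can be sketched in a few lines. This is an illustrative toy, not Supermassive's actual system; the scoring function and all names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Voice:
    name: str
    importance: float   # designer-assigned weight, 0..1
    distance: float     # metres from the listener

def priority(v: Voice) -> float:
    # Closer, more important voices score higher.
    return v.importance / (1.0 + v.distance)

def cull(voices, budget):
    """Keep the `budget` highest-priority voices; the rest are
    virtualised or stopped to protect the audio-thread CPU budget."""
    ranked = sorted(voices, key=priority, reverse=True)
    return ranked[:budget], ranked[budget:]

voices = [
    Voice("creature_snarl", 0.9, 2.0),
    Voice("torch_crackle", 0.5, 1.0),
    Voice("distant_drip", 0.2, 30.0),
]
keep, drop = cull(voices, budget=2)  # drop holds only "distant_drip"
```

A production system would typically re-run this ranking every frame and fade voices in and out rather than cutting them, so the player never hears the budget at work.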
As for kit, we asked what his favourite is. “It’s hard to pin it down to a single piece of kit. Commercially the progression in some core tools such as Wwise, Unreal and Reaper has been great for us. In terms of internal tech we spend a lot of time refining processes and pipelines, and there are a couple of tools that automagically add huge swathes of audio into the game with precision, which of course are loved by the team. Without opening up the eternally subjective debate about headphones and speakers, you need monitoring you can trust, that translates well to many other audio end-points and that won’t cause downstream issues for the mix.”
When creating cinematic audio for games there are obviously various challenges. “The key differences between third-person cinematic and non-cinematic games are the cameras, primarily the extremities of the cameras, their position, distance, depth of field etc. Our games include follow-cam during exploration and cinematic cameras throughout sequences, gameplay, and exploration. The variety is amazing, from a close-up wide angle to a long lens voyeuristic shot and everything in between. The design and camera team have the flexibility to change these cameras at any time, which is essential to improve the scenes throughout development. Of course chasing these changes would be near impossible, so over time we have developed systems that are 99% resilient to the changes, more often actually enhancing the visual intentions with the resulting audio attenuation.”
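One common way to make attenuation resilient to camera changes is to blend the listener position between the camera and the player character, so a cut to a long lens does not suddenly push the whole scene away. This is a generic technique sketched for illustration, not necessarily the system Pratt describes; the function names and the default weight are hypothetical.

```python
import math

def blend(a, b, t):
    """Linear interpolation between two 3D points (t=0 gives a, t=1 gives b)."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def attenuation(emitter, camera_pos, player_pos, camera_weight=0.3):
    """Inverse-distance attenuation measured from a listener position
    blended between the player character and the camera. With a low
    camera_weight, a camera cut only partially shifts the soundscape."""
    listener = blend(player_pos, camera_pos, camera_weight)
    return 1.0 / (1.0 + math.dist(emitter, listener))
```

Tuning `camera_weight` per shot type (follow-cam versus long lens) is one way such a system could "enhance the visual intentions" while absorbing camera changes automatically.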
So what challenges did he have for the sounds of the creatures in House of Ashes? “They were a fantastic challenge indeed! Our character foley systems, amazing as they are, are reliant on various things, such as human form, upright bipedal movement, and not too much torso movement. We were presented with something that had added joints in the limbs, was both bipedal and quadrupedal, had wings, and was animated using combinations of mocap and keyframe – essentially completely fluid, free-moving living entities. We adapted the existing systems, but they would only do so much, and the creatures looked so good we had to find a system that would respond to each subtle nuanced twist and movement, in any direction or angle and from virtually any body part. The bodies are millennia-old, dry husks that would crunch, pop and scrape with every move. At the same time we were looking at the creatures, we were expanding other procedural foley sounds, and when we applied this new tech with the correct sounds and parameters it really brought the vampires to life, covering even the smallest movement, all procedurally.”
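A movement-driven foley trigger of the kind described – responding to any joint, in any direction – reduces, in its simplest form, to thresholding angular speed per animation frame. A minimal sketch with invented numbers; a real system would also pick the sound layer and scale its intensity from the speed:

```python
def foley_events(joint_angles, threshold=0.2, dt=1 / 30):
    """joint_angles: per-frame angle samples (radians) for one joint.
    Returns the frame indices where angular speed (rad/s) exceeds
    the threshold, i.e. where a crunch/scrape layer would fire."""
    events = []
    for i in range(1, len(joint_angles)):
        speed = abs(joint_angles[i] - joint_angles[i - 1]) / dt
        if speed > threshold:
            events.append(i)
    return events

angles = [0.0, 0.0, 0.05, 0.3, 0.31]
events = foley_events(angles)  # frames 2, 3 and 4 cross the threshold
```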
What techniques has he developed to support and maintain player immersion? “So many! Immersion is, at a high level, the conscious flow of the game – the correct emotional drivers in the music, for example – and on a deeper subconscious level it is believable and consistent characters that draw you into the story, natural-sounding procedural character foley systems responding to the environments, and natural-sounding, well-mixed dialogue that maintains some dynamic range from the performance, quiet to draw you in and loud to assault the ears. Consistency of audio direction is also key to immersion. You must have the entire team pointing roughly in the same direction, sharing assets, maintaining style, working from the same palette of sound. Any member of the team can push that direction into new areas, but it must be folded back into the team, otherwise you end up with sections of the game that just feel different, and that can easily break immersion.”
“We’re careful to smooth off any sharp edges, at least the ones we don’t want. That’s achieved through a solid surround mix pass at the end to bring everything together and ‘bed in’ any stray elements, and through specific technical advances such as the ‘50% centre bias panning’, which has been well documented since I first presented it in 2017. This originated as a solution to delivering cinematic consistency in the surround mix with choppy character sound over camera cuts, adding life to the mix while not compromising too much on gamers’ expectations of sound location in an FPS, for example.”
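As a rough illustration of the ‘50% centre bias’ idea – pulling the pan position halfway toward front-centre before applying a standard constant-power pan – consider this sketch. It is a simplified stereo reduction for illustration only, not the published surround implementation:

```python
import math

def constant_power_pan(angle):
    """angle in [-1, 1]: -1 is hard left, +1 is hard right.
    Returns (left_gain, right_gain) with constant total power."""
    theta = (angle + 1.0) * math.pi / 4.0   # map [-1, 1] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

def centre_biased_pan(angle, bias=0.5):
    # Pull the pan position toward centre by `bias` before panning,
    # so camera cuts shift the image without ever swinging hard to one side.
    return constant_power_pan(angle * (1.0 - bias))

left, right = centre_biased_pan(-1.0)   # a source fully to the left
# with bias=0.5, the right channel stays audible instead of dropping to zero
```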
When creating dialogue, it’s important to deliver a natural feel. “This starts at source – having the actors interact naturally is vital, and choosing the mics that translate best, to reduce processing, is also helpful. For the mastering, we need to constrain the dialogue within a loudness range, but we include variation within that range to allow as much of the performed dynamic range as possible to be maintained. For example, if the actor goes low in level and draws the player in, we maintain that in the performance and mix the rest of the game around that delivery.”
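The mastering approach described – constraining dialogue to a loudness window while keeping the performed dynamics – can be sketched as a soft compression of per-line loudness values. The window, ratio and loudness figures below are illustrative, not a production spec:

```python
def constrain(loudness, lo=-28.0, hi=-20.0, ratio=0.5):
    """Soft-compress per-line loudness (LUFS-like units) outside the
    [lo, hi] window; lines inside the window pass through untouched,
    so the performed dynamic range is reduced, not flattened."""
    if loudness < lo:
        return lo + (loudness - lo) * ratio
    if loudness > hi:
        return hi + (loudness - hi) * ratio
    return loudness

lines = [-34.0, -26.0, -18.0]             # whispered, normal, shouted
mastered = [constrain(x) for x in lines]  # → [-31.0, -26.0, -19.0]
```

Note the ordering survives: the whisper is still the quietest line after mastering, which is what lets the mix "draw the player in" around it.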
As game tech moves forward, improved tech puts a greater responsibility on the content, systems and processes. “Console changeover periods are always exciting. We’ve seen this before with the transition to PS4 / Xbox and it’s great, but whereas the buzz was previously around CPU and memory budgets, this time round it’s pretty much exclusively around features. With dedicated processing and super-fast SSDs, many of the constraints have been lifted and the conversations have been around how we can best leverage these opportunities with features. Content, systems and process are all vital constituents of the audio pipeline, and process is often overlooked, but in making process more dynamic for the team, you not only save time and money but you give the team more time to focus on quality, better results and more enjoyment. That said, with global semiconductor shortages combined with additional production and supply issues across the board, I am not sure that the transition to next-gen this time round will be as fast, and we will need to be very conscious of current-gen technical needs for a while longer.”
Spatial audio is a must for any project. “This takes a little bit of nurturing on current gen, but the results are amazing, and for next gen it should be a case of ‘release the beast’. 3D audio in the form of first-party or third-party products is very exciting, especially when you consider there are a lot of players on headphones, and there is a more mature attitude to 3D audio at the moment, correcting a previous sense that it was the one-stop-shop for marvellous audio – treating it with a little more restraint, using it only when it helps, or as a tool to help mix the soundscape.”