Wednesday 9 September 2009

Interactive Mixing Project

Over the summer I have been working on my final master's project, which is an investigation into interactive real-time mixing in games. The two main issues I have been looking at are:
- the potential for using film-style mix techniques in games,
- the extent to which currently available middleware tools can implement interactive mixing.

A certain degree of interactive mixing has already been done in games, even back in PS2 titles such as Scarface and MGS3, and especially more recently with the sophisticated 'HDR audio' mixing systems in DICE's 'Frostbite' engine and Radical's 'AudioBuilder'.
Yet it seems there is still a lot for games to achieve in terms of fully utilising the mix ideas and techniques that have proven effective in film sound, i.e. the use of focus, contrast and silence to express the emotions and subjectivity of the character(s) [1].
It is important and useful to develop intelligent mixing systems in games like 'HDR audio' that ensure a mix doesn't become muddy. Perhaps, though, games could also benefit from developing narrative- or event-related mix moments that make mix decisions for emotional, subjective reasons [2]...
Many great films have unique mix moments that enhance the narrative, story and dramatic effect, but I think part of the reason those moments are so effective is that they are unique within the film. A great example, as always, is the tiger scene in Apocalypse Now.
So maybe one way games can learn from this is to have mix moments that are unique to particular points in the game, so they can take advantage of the narrative, story and character development. These mixes would still be interactive because they would be triggered by the player's actions and also feed off game variables.
This could not only help to diversify the audio landscape but also add another string to the bow of game audio, helping to enhance the overall experience.
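
To make this concrete, here is a minimal sketch of what I mean by a triggered-but-interactive mix moment: fired once by a scripted story beat, but with its depth scaled by a live game variable. All of the names here are hypothetical, not taken from any particular middleware:

    # Hypothetical sketch: a one-off narrative mix moment that stays interactive.
    class MixMoment:
        def __init__(self, bus_targets_db):
            self.bus_targets_db = bus_targets_db   # full-depth bus offsets, in dB

        def trigger(self, intensity):
            # intensity 0..1, e.g. driven by player health or proximity to the threat
            return {bus: db * intensity for bus, db in self.bus_targets_db.items()}

    moment = MixMoment({"ambience": -18.0, "sfx": -9.0, "music": -24.0})
    print(moment.trigger(intensity=0.8))   # fired once, at a unique story beat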

So from this notion of working mix moments into games, I have been putting together some examples.
I first identified mix techniques used in films, then looked for scenarios in games where similar techniques could be applied. I then captured game footage with which to create a guideline mix. This involved spotting the footage and adding the different sound stems to re-create how it would sound in game. From there I could play with the mix and add automation to set out how I wanted it to sound.
I then tried to implement these mix moments in game using audio middleware tools.

Here are some of the guideline mixes I have made. For each example I have included the version without the mix moment first, to show the difference.

This first example is a scenario from Doom 3: it shows the player going into a room with enemies lurking in the darkness. The mix makes use of delay and reverb to characterise the enemy sounds through the character's POV, helping to emphasise emotions of fear and paranoia.
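
As a rough illustration of the kind of processing I mean (the names and curves are made up, not from Doom 3 or any tool), a 'fear' variable could scale how much of each enemy sound is sent to the delay and reverb:

    # Hypothetical POV treatment: a 'fear' variable scales delay/reverb sends.
    def enemy_send_levels(fear):
        fear = max(0.0, min(1.0, fear))            # clamp to 0..1
        return {
            "delay_send":  0.1 + 0.7 * fear,       # more smeared as fear rises
            "reverb_send": 0.2 + 0.6 * fear,
            "dry_level":   1.0 - 0.4 * fear,       # pull the dry signal back
        }

    print(enemy_send_levels(0.9))   # deep inside the dark room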

Mix features OFF...


Mix features ON...


----------------------------------------------
The next example is also a scenario from Doom 3: the player has to drop down into a room where, at some point, a monster might make an appearance. The mix makes use of (relative) silence leading up to the event, providing dramatic emphasis through the contrast created and again filtering the sound through the character's subjective POV.
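
The (relative) silence could be produced by slowly ducking the non-essential buses as the player closes in on the event point. A minimal sketch, assuming a simple linear ramp (all values invented):

    # Hypothetical pre-event duck: ramp the non-essential buses towards silence.
    def duck_gain_db(distance_to_event, start_dist=20.0, floor_db=-30.0):
        # 0 dB outside start_dist, reaching floor_db at the event point.
        t = max(0.0, min(1.0, 1.0 - distance_to_event / start_dist))
        return floor_db * t

    for d in (25.0, 10.0, 2.0):
        print(d, "m ->", round(duck_gain_db(d), 1), "dB on ambience/music")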

Mix features OFF...


Mix features ON...


An interesting issue raised by Simon Ashby, product director of Wwise at Audiokinetic, in Rob Bridgett's article is that differences in style of gameplay (between a cautious player and an extremely bold player, for example) will produce different results in the soundtrack. This needs to be taken into consideration when creating an interactive mix, especially if unique mix moments based around the narrative are being developed. There is a balance to be struck: some mix moments could be tweaked so that they work for nearly all types of player. It may simply be the case that the intended mix is not realised in game 100% of the time; on the flip side, this could work as an advantage, because if the player plays the game a second or third time the differences in the mix may help give a varied experience. One thing not to forget, of course, is that the player always needs to hear the required information, i.e. voice instructions, and that the mix should never become a sonic mess, unless that is the desired effect.
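
One way of handling this, sketched below, would be to arm the mix moment from a trigger volume and release it on either the scripted event or a timeout, so a bold player sprinting past doesn't leave the mix stuck in its ducked state. Again, all names are hypothetical:

    # Hypothetical robustness wrapper: release on the event OR on a timeout.
    class ArmedMixMoment:
        def __init__(self, timeout_s):
            self.timeout_s = timeout_s
            self.armed_at = None

        def arm(self, now):                  # player enters the trigger volume
            self.armed_at = now

        def update(self, now, event_fired):
            if self.armed_at is None:
                return "idle"
            if event_fired:                  # the monster appeared: restore the mix
                self.armed_at = None
                return "release_on_event"
            if now - self.armed_at > self.timeout_s:
                self.armed_at = None         # bold player ran past: restore quietly
                return "release_on_timeout"
            return "ducked"

    m = ArmedMixMoment(timeout_s=10.0)
    m.arm(now=0.0)
    print(m.update(now=12.0, event_fired=False))   # 'release_on_timeout'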

So here is the silence scenario again, but with footage of faster gameplay around the point where the mix moment occurs. The previous example showed the player exploring the dark area they had dropped into; this one shows the player running straight through into the lit room.
Hopefully it demonstrates how this mix moment would work for different styles of play.

Mix features OFF...



Mix features ON...


----------------------------------------------
Some other links to interactive mixing and subjective game audio type stuff....
Recreating Reality
The Future Of Game Audio
Game Audio Pro
Subtlety and Silence
Ducking

Wednesday 17 June 2009

Dynamic ambience in 'Prototype'

I have yet to play the game, but from what I've seen and read I can't wait!

The Gamasutra feature explaining how the sound team at Radical went about creating and implementing the ambience in Prototype describes some useful interactive real-time mixing techniques and offers insight into the workings of their proprietary audio tool, AudioBuilder.

One section that particularly interested me is how they use 18 channels of ambience simultaneously streaming from disc and dynamically mix the levels of individual tracks or groups depending on game variables: more specifically, the density of pedestrians, infected hordes and traffic building up around the player.
The system also uses these density values, along with the positions of the sounds relative to the player, to dynamically mix the ambience across the quadraphonic channel set-up, helping to provide a sense of orientation.
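
Based on the article's description, I imagine the density-driven part of the mix works something like the sketch below; the curve, cap and names are my own guesses, not AudioBuilder's actual code:

    # Hypothetical density-driven ambience mix, loosely after the article.
    def ambience_levels(pedestrians, infected, traffic, max_density=100.0):
        def level(count):                    # each group follows its crowd density
            return min(1.0, count / max_density)
        return {
            "pedestrians": level(pedestrians),
            "infected":    level(infected),
            "traffic":     level(traffic),
        }

    print(ambience_levels(pedestrians=60, infected=5, traffic=80))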

This interactive mix of the individual ambience elements seems to be entirely reactive to real-time variables. So, though I'm sure it provides an immersive ambience that feels alive, I can't help but think it would be nice to override this for events or moments in the game. For example, it could be effective to duck all ambience apart from the infected hordes at certain points in the game to achieve a dramatic, subjective moment.
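
The override I have in mind might look like this sketch: a scripted duck applied on top of whatever the density system is doing, to every ambience group except one (purely hypothetical, of course):

    # Hypothetical narrative override: duck every ambience group except one.
    def apply_override(levels, keep="infected", duck=0.15):
        return {group: (lvl if group == keep else lvl * duck)
                for group, lvl in levels.items()}

    levels = {"pedestrians": 0.6, "infected": 0.3, "traffic": 0.8}
    print(apply_override(levels))   # the hordes step forward, all else recedes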

The overall ambience is bussed to the main mixer system, which does allow the ambience bus to be controlled within the overall mix, applying filtering for cinematics and special game modes. However, it would have been pretty cool to have the control to exclude a certain element of the ambience from that filtering, giving the ability to inject that extra bit of emotional delivery when necessary.
Perhaps the question remains: would this be appropriate to the gameplay or the character and story? I'm not exactly sure, as I haven't played it, but it would be interesting to see if that extra level of flexibility in the mix could intensify the cinematic qualities of the gameplay and possibly even help provide more variation in the soundscape...

All things considered though, it's great to see a complex and intelligent mix being implemented, and also described so openly to the world!

Monday 8 December 2008

Sixense motion controller

Not sure if people have seen this before, but I thought I would post a link to it anyway as it looks to be rather good!
It's a fully 3D controller using a position- and orientation-tracking system that seems to put the Wii remote to shame.
The controller should give a far higher degree of interactivity, and the gestural controls that would be possible using Sixense are quite exciting!
If it takes off it could be used in many aspects of gaming and interactive music.
I don't reckon much to the sounds on the tech demos though, lol!

Check it...
http://sixense.com/

Wednesday 3 December 2008

Interactive Music

Interactive music is related to various terms such as adaptive, reactive and dynamic; Karen Collins uses 'dynamic music' as an umbrella term covering both interactive and adaptive music.
The main idea is to support the gameplay/action and communicate the story and emotions of the game, so that the music reacts to the narrative, the game environment and non-player characters, as well as reacting or interacting with player actions and decisions.
Doing this successfully can be very tricky, because the non-linearity of games makes the length of gameplay sections undefinable and the exact decisions a player makes unpredictable. Ideally an interactive music system will be able to adapt to the undefined events that occur in a game, as well as to varying parameters such as health and status. On top of this, the music needs to be interesting and maintain musical integrity while serving the interactive needs of the game.

I decided to look at 'No One Lives Forever 2' and try to describe and illustrate how its interactive music system works.
It seems to be mainly based around horizontal re-sequencing with some elements of varying layers. The level shown below has 3 modules for various intensities of gameplay:
  • A continual module that contains various layers but is pretty much a loop; this plays when you are in the level but not interacting with anything
  • An alert module which raises the tension when you are spotted by an enemy
  • A combat module that is triggered when you engage in a fight with an enemy
Each module probably has defined points at which it can transition into another module; when a transition is requested, it will wait until it reaches one of these points. This helps to maintain musical integrity but in turn doesn't always support the action perfectly.
There also seem to be some transition sections that only occur when a transition takes place. There is very little cross-fading between sections, as the music has been written in phrases that can switch between modules and enter transition phrases nicely.
The continual module plays until you either alert the enemy to your presence or fully engage in battle. When the enemy is alerted to your presence, the music will transition from continual to alert at the next transition point and then do one of the following:
  • If you are not physically in view of the enemy, the alert music will play for a length of time and then go back to continual.
  • If you kill the enemy before a certain period of time, the music will transition back to continual, or play a short phrase and then go back to continual.
  • If you are in view of the enemy but do not kill them before a certain time has passed, the combat music will commence.
When you are engaged in combat, the combat music will generally play until all enemies in range have been killed, then transition at the next transition point back to continual or sometimes alert. The combat music sometimes continues even though there isn't an enemy within close range; I can only guess this is down to some kind of alert parameter. A rough sketch of this logic as a state machine follows below.
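
Here is that sketch; it is my reconstruction from listening, not Monolith's actual code, and the thresholds are invented:

    # My reconstruction of the NOLF2-style module logic (guesswork, not real code).
    CONTINUAL, ALERT, COMBAT = "continual", "alert", "combat"

    class MusicSystem:
        def __init__(self):
            self.state = CONTINUAL
            self.pending = None      # requested module, waits for a transition point

        def request(self, target):
            if target != self.state:
                self.pending = target

        def on_transition_point(self):
            # Switches only land on musically sensible boundaries.
            if self.pending:
                self.state, self.pending = self.pending, None

        def update(self, spotted, in_view, alert_timer, enemies_in_range):
            if self.state == CONTINUAL and spotted:
                self.request(ALERT)
            elif self.state == ALERT:
                if in_view and alert_timer > 8.0:    # invented threshold
                    self.request(COMBAT)
                elif alert_timer > 15.0:             # enemy lost interest
                    self.request(CONTINUAL)
            elif self.state == COMBAT and enemies_in_range == 0:
                self.request(CONTINUAL)

    music = MusicSystem()
    music.update(spotted=True, in_view=False, alert_timer=0.0, enemies_in_range=1)
    music.on_transition_point()   # the switch lands on a musical boundary
    print(music.state)            # 'alert'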

Wednesday 12 November 2008

Kinetic algorithms - Skate

The kinetic algorithm I have chosen to look at is the turning mechanism in the skateboarding game 'Skate'.
When you are travelling at a fair pace in the game and use the analogue stick to turn the skater left or right, the sound of the skateboard changes slightly depending on how much you turn.
When this happens the sound seems to get louder, possibly with some change in pitch. There may also be some kind of filter going on, possibly a low-pass filter.

The amount the sound changes seems to be directly related to how much movement is applied to the analogue stick, but it also seems to be affected by how fast the skater is going.
The terrain the skater is on may also affect how the sound changes when the player turns.
If the skater is travelling under a certain speed, or the amount of turn applied is not above a certain threshold, there seems to be very little or no sound at all, so maybe the algorithm only functions when the speed and turn amounts are above certain values.
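
As a guess at the shape of the mapping, the turn amount and speed could drive volume, pitch and a low-pass cutoff roughly like this (all constants invented; this is not EA's actual algorithm):

    # Hypothetical mapping for the turning sound (all constants invented).
    def turn_sound_params(turn, speed, min_turn=0.2, min_speed=2.0):
        # turn: analogue stick deflection 0..1; speed in m/s.
        if turn < min_turn or speed < min_speed:
            return None                              # below threshold: no sound
        amount = turn * min(1.0, speed / 10.0)       # speed scales the effect
        return {
            "volume":      amount,                       # louder with harder turns
            "pitch_shift": 1.0 + 0.2 * amount,           # slight pitch rise
            "lpf_cutoff":  20000.0 - 12000.0 * amount,   # darker as wheels scrub
        }

    print(turn_sound_params(turn=0.8, speed=8.0))
    print(turn_sound_params(turn=0.1, speed=8.0))    # None: not enough turn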

The best example of this is at 7:15

Tuesday 4 November 2008

SoundSeed - A New Hope

My blog this week is not quite about a new innovative use of interactive audio; it's more about a new innovative tool for interactive audio.

SoundSeed has very recently been announced by Audiokinetic, and it certainly seems to be an innovative way to get more varied sound from very little memory usage, thus giving the ability to creatively push the limits of game audio.

SoundSeed is a family of sound generators, developed by Audiokinetic, that can generate unlimited variations of a single sample using DSP technology.
There will be different SoundSeed modules based on different types of sounds; the first to be released is Impact, which deals with impact sounds such as footsteps, sword clangs, etc.
It works as a plug-in for Wwise, although it is actually split into two parts: a stand-alone modeler and a plug-in.
This is because SoundSeed Impact works by analyzing the sound file and splitting it into two parts: a residual sound and a parametric model.

The modeler analyzes the 'resonant modes' of the sound and extracts this data in order to output a parametric model that can recreate the original sound and also be modified to create variations of it.
The modeler also outputs the residual sound, which is the original sound with all the resonant values removed.

The Wwise plug-in part of SoundSeed Impact uses the modified parametric model in conjunction with the residual sound to produce a completely different sound!
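
To make the residual + model idea concrete, here is a tiny sketch of modal synthesis in general: a bank of resonant filters re-excited by a residual, with the mode parameters left open to modification. This is just the textbook technique, not Audiokinetic's actual implementation:

    import numpy as np

    def resonator(excitation, freq_hz, decay_s, gain, sr=44100):
        # Two-pole filter that rings at freq_hz and dies away over ~decay_s.
        r = np.exp(-1.0 / (decay_s * sr))            # pole radius from decay time
        w = 2.0 * np.pi * freq_hz / sr
        b1, b2 = 2.0 * r * np.cos(w), -r * r
        y = np.zeros_like(excitation)
        for n in range(len(excitation)):
            y[n] = gain * excitation[n]
            if n >= 1:
                y[n] += b1 * y[n - 1]
            if n >= 2:
                y[n] += b2 * y[n - 2]
        return y

    def resynthesize(residual, modes, pitch_shift=1.0, damp=1.0):
        # Residual plus a bank of (modified) resonant modes = a new variation.
        out = residual.copy()
        for freq, decay, gain in modes:
            out += resonator(residual, freq * pitch_shift, decay * damp, gain)
        return out

    sr = 44100
    residual = np.random.randn(sr // 10) * 0.01      # stand-in for a real residual
    modes = [(440.0, 0.3, 0.5), (1230.0, 0.15, 0.3)] # (freq Hz, decay s, gain)
    variation = resynthesize(residual, modes, pitch_shift=1.06, damp=0.8)

Shifting the mode frequencies or damping between triggers is what would yield endless variations from the one stored residual.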

What's also exciting is that SoundSeed can control parameters of the sound based on game elements such as physics or artificial intelligence!!
It also has a function to control the quality of a sound meaning you can modify the runtime CPU usage based on the importance of a sound or the distance of the source from the listener.

Here is one of the only examples available from Audiokinetic of what SoundSeed Impact can do:



Not quite as impressive as one might have thought, but I'm sure the actual implementation will produce some great results, and the developers Realtime Worlds are using the system in their most recent game, APB.
Realtime Worlds report that they are really happy with SoundSeed Impact so far and that it is delivering on its promise of reducing the memory footprint.

Info from:
http://www.audiokinetic.com/
http://www.developmag.com/news/30708/Audiokinetic-unveils-SoundSeed

Wednesday 29 October 2008

Sound Propagation


Sound propagation is how sound waves spread from where they are emitted, and how they interact with the environment they are located in and with any connected environments that have different properties.

The propagation of a sound mainly refers to how it is reflected, refracted and attenuated by the environment and the objects around it. These elements of propagation affect the characteristics of a sound, and the better we can emulate them in computer games, the more realistic and immersive the games will become.
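
For example, the simplest of these elements, distance attenuation, is often modelled in games with an inverse-distance roll-off (a common approximation, not tied to any particular engine):

    # Inverse-distance attenuation: gain halves each time the distance doubles
    # beyond the reference distance (a common game-audio approximation).
    def distance_gain(distance, ref_dist=1.0):
        return ref_dist / max(distance, ref_dist)

    for d in (1.0, 2.0, 4.0, 8.0):
        print(d, "m ->", distance_gain(d))   # 1.0, 0.5, 0.25, 0.125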

This diagram from <http://hyperphysics.phy-astr.gsu.edu/Hbase/Sound/sprop.html> gives a basic illustration of the phenomena involved in sound wave propagation.

From studying sound propagation I have become particularly interested in the interference, refraction and diffraction of sounds in relation to occlusion, obstruction and exclusion, and in how this affects adjacent environments and the transitions between them.
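
As I understand it, the three cases are usually distinguished by which paths between source and listener are blocked: occlusion blocks both the direct path and the reflections, obstruction blocks only the direct path, and exclusion blocks only the reflections. A sketch of how a game might map those cases to filtering (the flags would come from the engine's ray casts; all values invented):

    # Hypothetical mapping from blocked paths to filtering. The two flags would
    # come from the engine's ray casts; cutoffs and gains are invented.
    def propagation_filter(direct_clear, reflections_clear):
        if not direct_clear and not reflections_clear:
            return {"case": "occluded",   "dry_lpf_hz": 800.0,  "wet_gain": 0.1}
        if not direct_clear:
            return {"case": "obstructed", "dry_lpf_hz": 1500.0, "wet_gain": 1.0}
        if not reflections_clear:
            return {"case": "excluded",   "dry_lpf_hz": None,   "wet_gain": 0.2}
        return {"case": "clear", "dry_lpf_hz": None, "wet_gain": 1.0}

    print(propagation_filter(direct_clear=False, reflections_clear=True))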


This is because it can help to provide much better gameplay in relation to:
  • Player feedback
  • Reinforcing the visuals
  • Environment/ player interaction

The overall outcome is greatly improved gameplay and immersion.