Public Speaking, etc
At the Game Connect Asia Pacific conference in 2018, Zander co-presented a talk with Jeff van Dyck: Boom Box – Audio Workflows Inspired by Constraints. Each spoke about tips and techniques developed over their respective careers, from pre-production and organisation workflows through to mixing and implementation.
GCAP Loading 2017
Leading up to Game Connect Asia Pacific, the organisers run a one-day symposium for students and aspiring game developers called GCAP Loading, at which Zander was invited to speak. He gave a talk entitled Why Does My Game Sound Like Balls?, which covers many common audio problems he has helped indie developers overcome. Anya McNaughton supplied the drawings of cute bunnies that illustrate the talk's points.
Heart Beat 2018
At the inaugural Heart Beat Symposium at Byron Bay, Zander was invited to give a talk about the sound of sex in video games, entitled Intimate Performances – Approaching the Sound Design of Sex & Romance. The presentation looked at how sex and romance sound across different genres, and how to leverage genre expectations to elicit the desired emotional response.
Gamasutra blog – Guide to Optimising Unity’s Audio Import Settings
Frustrated at the lack of in-depth documentation on the effects of Unity’s audio import settings on performance, Zander wrote a comprehensive guide to optimising these settings for different types of audio in various circumstances. Gamasutra featured the blog post, which you can read here.
GAME – The Italian Journal of Game Studies
For the Italian journal of game studies GAME, Zander wrote a paper entitled Killing-off the Cross-fade, presenting his research into "imbricate audio" – a method of creating and implementing interactive music in video games. The method has been used in several launched titles so far, including Sling Kong and Oopstacles, and forms the basis of a procedural music system for a new, unannounced project.
In short, imbricate audio can perform some musical transitions with recorded (or pre-rendered) music clips that could previously only be achieved with sequenced audio data such as MIDI or Mod/Tracker formats. You can read the paper in full here.
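To give a rough sense of the idea, here is a hypothetical sketch (not the paper's actual implementation; the segment names, durations, and helper code are invented for illustration): each music clip is pre-rendered with its natural decay tail included, and the next clip starts exactly at the previous clip's musical boundary, so the tails overlap rather than being cross-faded.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    body: float   # musical length in seconds (e.g. 4 bars at 120 BPM = 8.0)
    tail: float   # ring-out (reverb/decay) baked into the pre-rendered clip

def schedule(segments):
    """Return (name, start, end) play events.

    Each segment's end may overlap the next segment's start: the next
    segment begins at the musical boundary (end of body), while the
    previous segment's tail continues to sound underneath it.
    """
    events, t = [], 0.0
    for seg in segments:
        events.append((seg.name, t, t + seg.body + seg.tail))
        t += seg.body  # advance by the musical length only, not the tail
    return events

events = schedule([
    Segment("explore", body=8.0, tail=2.5),
    Segment("combat",  body=8.0, tail=3.0),
])
# "combat" starts at t=8.0 while the "explore" tail rings out until t=10.5,
# so the transition lands on the bar line with no cross-fade needed.
```

The overlap of the baked-in tails is what replaces the cross-fade: no gain ramps are applied, so each clip sounds exactly as it was mixed.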