Hi! My name is Wouter Meermans, and the purpose of this log is to document the work I’ve done for the “Enschede Weaving Factory” and “Tweeduuster” IMT&S projects. I’ll walk you through the process.
My main role in the project was as a technical sound designer. The plan was for me to create and implement the sounds for both projects, as well as design the sound systems to immerse the players in the experience.
The Enschede Weaving Factory project’s aim is to showcase the conditions in which people worked during the 1920s-1930s in weaving factories.
The project was passed down from last year's team, Andreas Ioannou and Kevin Nijkamp, as part of the IMT&S program, which gave our team a solid starting point: animated weaving machine models, worker models, and research. Based on their work, our team is expected to develop a VR experience for MuseumFabriek.
The final product will be featured at the museum to give visitors a historically accurate and immersive view of working conditions during that time.
The client for this project is Rob Maas, who acts as the primary point of contact between the team and the ultimate client, Edwin Plokker, curator of the MuseumFabriek.
The team consists of ten members:
Name | Team Role | Specialties | Portfolio |
---|---|---|---|
Carolina Bertoncello Machado | Team Lead Designer | Narrative Design, Level Design, Documentation | https://carolinabertoncellomachado.carbonmade.com |
Chris Peters | Artist | | https://www.artstation.com/chrispeters99 |
Faried Elawady | Team Lead Artist | Technical art, Asset creation, Project overview | https://www.artstation.com/premadness |
Jelle Boelen | Engineer | Code, Tooling & Infrastructure | https://technicjelle.com |
Julia Calis | Designer | | https://drive.google.com/drive/folders/1zKoCgDTN2XUJJPHngHBNT_iBt4VUYmEd |
Nataliia Sviridenko | Team Lead Designer | Research & Narrative Design, Sound Design, Documentation | https://sviridenkogamedesign.wixsite.com/nsviridenkogamedesig |
Senne Oogink | Artist | Character Art, Rigging, Animation | https://www.artstation.com/senneoogink1 |
Sorana Verzes | Artist | | https://soranaverzes.artstation.com |
Stephanie Extercatte | Artist | Unreal Engine, general 3D art | https://www.artstation.com/temaeya |
Wouter Meermans | Designer | Sound Design, Coding, UI/UX | https://wmeermans.netlify.app |
I joined the Tweeduuster project around week 9 of IMT&S, so I was not involved in the Empathize, Define, and Ideate phases.
So, what is “Tweeduuster”? According to LokaalGelderland (2023), “Tweeduuster” means “twilight” in the Twents dialect. Their team is working on "Tweeduuster", a project aimed at bringing to life old myths and legends from the Twente area in the Netherlands. Stories such as "The White Witches of the Tanken Hill" and "The Nun of Singraven" sit at the core of the experience we are trying to create. You play one of the last witch hunters in the Twente area. While investigating leads on the werewolf of Hengelo, you hear from the frightened owner of an old manor about a ghost nun who has been scaring people, and he pays you to get rid of the paranormal creature as fast as possible. Your goal is to discover the truth. The game will be an open world with a linear story, but the puzzles can be solved in different orders. The player faces not only a terrifying series of events but also their own internal conflict.
My specific tasks for this project were to handle the technical sound design and to record, edit, and refine the voice lines envisioned for it. I communicated with the team primarily through one of their designers, Angelina, and their engineer, Freek.
For this project, our team is following the classic design thinking method: Empathize → Define → Ideate → Prototype → Test. This approach has been extensively utilized throughout the project, allowing us to refine our ideas and processes as we progress.
Although I did contribute to the overall Empathize and Define phases, I want to emphasize that the majority of the documentation work in those stages was carried out by the other designers on the team.
Learning goal 1:
During IMT&S, I will improve my ability to design immersive sound effects and ambient sounds by creating or enhancing at least 10 unique audio elements for multiple projects. By the end of IMT&S, I’ll conduct tests to evaluate the immersive quality of my sounds, comparing them to real-world and game references, and refine my approach based on feedback and accuracy. This will enable me to convey environmental soundscapes with greater authenticity and impact. I’ll be creating sound effects for the Weaving Factory project and the Tweeduuster folklore project.
Learning goal 2:
As a technical sound designer, I will become proficient in creating and implementing sound systems in Unreal Engine using Wwise by the end of IMT&S. To do this, I will complete at least two Unreal Engine sound design projects that incorporate Wwise-driven sound systems, including ambient sounds, effects, and interactive audio. I will also demonstrate my understanding by teaching a teammate/peer how to implement sound systems in Unreal Engine. Success will be measured by my ability to confidently explain these processes, as well as by feedback from peers on my ability to explain the process.
Learning goal 3:
As a programmer and game designer, I aim to become proficient in navigating and using Unreal Engine 5, specifically the editor, blueprints, and basic scripting for the duration of IMT&S. My goal is to confidently create and implement basic features, mainly sound-related, for our project with minimal assistance from our engineers. This proficiency will enhance my ability to contribute independently to projects developed in Unreal Engine, making my skills more versatile and valuable to companies.
I adapted my learning goals halfway through the project to better align with both my personal needs and the requirements of the two projects I was working on. However, I would like to highlight one more adjustment I made during this time.
While working on the IMT&S project, I conducted several recording sessions that I believe will be incredibly valuable for my future professional career. Recording voice lines was not part of my original learning goals, but as I progressed, I realized I hadn’t dedicated enough time to working with UE5 blueprints. As a result, I shifted my focus toward recording, editing, and refining the voice-line sessions I conducted.
Week 1.2
In the first days of the project, the team came together for a meeting with Rob Maas to begin outlining what the client expected from us. He made clear that the museum wanted to provide visitors with a historically accurate experience that conveyed what it was like to work in a weaving factory during the 1920s and 1930s.
Since this was a continuation project, there was already quite a bit of work completed. We had a factory environment, weaving machines, and an animated character model. However, Rob Maas pointed out that everything still felt overly robotic; all the characters were in sync, creating an effect that resembled an orchestrated dance rather than an authentic representation of factory life.
From there, we started identifying our target audience and considering the type of interaction we wanted to provide. Most importantly, we focused on how to make the experience as historically accurate and engaging as possible.
Week 1.4
When we started to think about what we might want to make, we sat together as a team and came up with ideas. We created a total of 10 concepts, some closely related and some very different from the rest. Three concepts stood out the most:
Part of the team made a pros and cons list and presented it to the client to arrive at a final concept.
Pro-con analysis made by Faried Elawady, Nataliia Sviridenko and Carolina Bertoncello Machado.
We decided to proceed with the Points of Interest (POI) concept. In this VR experience, users experience the world as a 1930s weaving factory worker and discover micro-stories scattered around the factory. These stories are triggered at specific POIs; the player is drawn to them with sound cues, and they showcase moments such as a worker repairing a machine, workers communicating with each other in sign language, or a supervisor monitoring the workers.
The experience would rely on motion-capture animations, Dutch voice acting, and a strong focus on historical accuracy, both in the movements of the characters and the depiction of working conditions from the era. Designed to last 3–5 minutes, the experience prioritizes immersion and visual storytelling over interactive gameplay, creating a realistic atmosphere rather than a game. Users would be free to explore at their own pace, with the environment subtly reacting to their presence.
This was our initial idea for the chosen concept, but throughout the project we realized that we had greatly over-scoped it, and only part of the original idea made it into the final version. This had some impact on my personal tasks: since the micro-stories were no longer part of the final experience, the specific sound cues and the voice acting for the Weaving Factory project were no longer necessary. The version we ended up with is a VR experience where the player can teleport around to see various parts of the factory, actually a lot like the third concept.
Throughout the project (week 1.5-week 2.6)
Both the Tweeduuster project and the Weaving Factory required me to add sound effects to events and ambiences. I took a slightly different approach for the Weaving Factory than for Tweeduuster, because of what the client wanted for the project, and because one (the Weaving Factory) was in VR while the other was a regular 3D game.
For this project, the client wanted it to sound like a real weaving factory. So, as I mentioned before, I went to the factory to record the machines. Along with that, the previous team had some recordings done as well, and I combined both to create the general ambience of the weaving machines, leather bands, and other sounds, such as people coughing because of the dust and a train moving by occasionally.
I first started off with the general weaving machine and had a few approaches I wanted to try. I wanted the sound to change drastically depending on how far you were from the machines, as I had noticed this effect in the museum itself.
To achieve this, I recorded various versions in the museum, which gave me options to work with. The previous team's recordings also included several versions, which turned out to be very useful. These were some of the recordings:
They were very similar but useful for when you want to mix and match different variations.
Other sounds, like the train and coughing noises, I found online, mostly on freesound.org or in the Sonniss GDC library. I had considered recording some coughing noises myself, but found that recordings made with my own equipment sounded too direct, as if someone were coughing right in front of you, while recording from further away required turning the gain up too high, so in the end the quality would not have benefited from using my own. This project ended up not needing that many sound assets, so I was very glad I could also work on the Tweeduuster project, which had more to do. Implementation, however, did prove to be more difficult, and other technical issues put my development on hold at certain times, but I will write about this later in the document.
Weeks 1.5-2.9
I wanted to use a blend container to smoothly transition between these sounds, but this proved difficult because the samples differed so much from each other. Here is an example of roughly what that would have sounded like:
Test_Weaving machine ambience.wav
I was not happy with this overall, so I ended up making one general ambience that plays as so-called “2D” audio in the factory, audible everywhere, and giving every machine its own loop, offset from the others, so that the sound of each machine was distinguishable from the rest.
Carolina asked me to come up with a system to change the weaving machine sounds when they were being looked at. The idea was that, at some point, we could use this system to focus the player's attention on events happening in the scene. This is what that ended up sounding like.
You can hear how the focus on the individual machines increases in this recording, which was captured within the Wwise program. Although we didn’t end up using it, it still took me some time to get it right. Apologies that the volume is somewhat low; that seems to happen with screen recording.
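The prototype itself was built with blueprints and Wwise authoring, but the core of the focus idea boils down to very little logic. Here is a minimal C++ sketch of that logic, assuming a hypothetical machine actor class and an RTPC named "MachineFocus"; note that UAkGameplayStatics::SetRTPCValue's exact signature varies between Wwise integration versions:

```cpp
// Sketch (not the shipped implementation): raise a "focus" RTPC on the
// machine the player is looking at. AWeavingMachineActor and the RTPC name
// are hypothetical stand-ins.
#include "AkGameplayStatics.h"
#include "Kismet/GameplayStatics.h"
#include "WeavingMachineActor.h" // hypothetical header

void AWeavingMachineActor::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    APlayerCameraManager* Cam = UGameplayStatics::GetPlayerCameraManager(this, 0);
    if (!Cam) return;

    // How directly is the camera pointing at this machine? 1 = dead centre.
    const FVector ToMachine =
        (GetActorLocation() - Cam->GetCameraLocation()).GetSafeNormal();
    const float Dot = FVector::DotProduct(Cam->GetCameraRotation().Vector(), ToMachine);

    // Map [0.7 .. 1.0] onto an RTPC range of [0 .. 100], clamped.
    const float Focus = FMath::GetMappedRangeValueClamped(
        FVector2D(0.7f, 1.0f), FVector2D(0.f, 100.f), Dot);

    // Drive a Wwise RTPC scoped to this actor; inside Wwise the RTPC then
    // opens a filter / raises the close-up layer of this machine's sound.
    UAkGameplayStatics::SetRTPCValue(nullptr, Focus, /*InterpolationMs=*/100,
                                     this, FName("MachineFocus"));
}
```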
I created a test project in Unreal Engine to see how I could at least get sound playing from an object and check if I could easily make that sound spatialized. I imported the FBX file for the weaving machine and created a simple blueprint to get the sound running. I wanted to use Wwise in this project to have an easy space for creating mockups and testing ideas without needing to open the larger project every time.
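The test setup was a blueprint, but expressed in C++ it would look roughly like the sketch below. The actor class and the MachineLoopEvent asset are hypothetical stand-ins, and UAkComponent::PostAkEvent's parameter list differs slightly between integration versions:

```cpp
// Sketch of the test-project setup: the machine mesh plus an AkComponent,
// so the looping event is positioned (and therefore spatialized) at the
// machine itself.
#include "AkComponent.h"
#include "AkAudioEvent.h"
#include "Components/StaticMeshComponent.h"
#include "GameFramework/Actor.h"
#include "TestMachineActor.generated.h"

UCLASS()
class ATestMachineActor : public AActor
{
    GENERATED_BODY()
public:
    ATestMachineActor()
    {
        Mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
        RootComponent = Mesh;

        // The AkComponent is the Wwise "game object": events posted on it
        // inherit its world position, which gives us 3D spatialization.
        AkComponent = CreateDefaultSubobject<UAkComponent>(TEXT("Ak"));
        AkComponent->SetupAttachment(Mesh);
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (MachineLoopEvent)
        {
            AkComponent->PostAkEvent(MachineLoopEvent); // start the machine loop
        }
    }

    UPROPERTY(VisibleAnywhere) UStaticMeshComponent* Mesh;
    UPROPERTY(VisibleAnywhere) UAkComponent* AkComponent;
    UPROPERTY(EditAnywhere) UAkAudioEvent* MachineLoopEvent;
};
```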
One of the early mockups I made for the project was the ability to hear the shooters (a part of the weaving machine) separately from the rest of the machine.
I wanted to do this because, when we were in the museum, the machine was obnoxiously loud, and most of the sound blurred together into one big overall machine sound; it was somewhat painful, even. What I noticed instantly, however, was that the shooters, a specific part of the weaving machine, were louder and very distinguishable from the rest of the sound, especially their location. So I made a small blueprint script in the test project that played these sounds at fixed intervals. Aligning them with the machines would have to be done a different way, because otherwise the sounds would mismatch, and finding the exact interval would have required a lot of trial and error, so I found a different solution: in the real project I wanted to use notifies as the source that triggers the sounds. But I’ll explain that in the Unreal section.
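For reference, this is roughly what that fixed-interval mock-up looked like as code (the real version was a blueprint; ShooterTimer, ShooterEvent, and the 0.85-second interval are hypothetical):

```cpp
// Additions to the hypothetical test actor: fire the "shooter" event on a
// fixed interval. A random first delay keeps multiple machines from
// clicking in unison. ShooterTimer (FTimerHandle) and ShooterEvent
// (UAkAudioEvent*) are assumed members.
void ATestMachineActor::StartShooterLoop()
{
    const float Interval = 0.85f; // guessed period; the real one needed tuning
    GetWorldTimerManager().SetTimer(
        ShooterTimer, this, &ATestMachineActor::PlayShooter, Interval,
        /*bLoop=*/true, /*FirstDelay=*/FMath::FRandRange(0.f, Interval));
}

void ATestMachineActor::PlayShooter()
{
    if (ShooterEvent)
    {
        // Posting on the machine's AkComponent keeps the click located
        // at the machine itself rather than somewhere global.
        AkComponent->PostAkEvent(ShooterEvent);
    }
}
```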
Something I really liked about Wwise was the mixing desk. I had used it a little before, but now I wanted to test it out a bit more.
What it allows me to do is, over the network, connect this live to Unreal Engine and adjust the volumes, effects, pitch, etc., of the busses, as well as each sound individually. Since it’s hard to play multiple sounds at once without turning it into a linear piece of media that repeats, it is very handy for mixing in-game audio. You can see what is happening live with all your sounds and adjust accordingly.
This is what the finished ambience sounds like. I had hoped to connect the coughs to the animations of the MetaHumans, but the MetaHumans were only implemented at the very end of the project, so I didn’t have time to figure that out. One thing I did add last minute was the teleportation fade-out and fade-in for the sounds. When testing the VR set, I noticed the abrupt change in volume and ambience when moving, so I fixed that fairly quickly with a Wwise event. The event was set to turn down all the volume → wait 1.5 seconds → and then turn all the volume back up. This was implemented entirely in Wwise, apart from adding the callback to the event in a blueprint.
Here you can also hear the audio fade out and fade in when teleporting
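Since the fade sequence lives entirely inside the Wwise event, the blueprint side stays tiny. Here is a sketch of the equivalent logic, with a hypothetical player class and event asset; UAkGameplayStatics::PostEvent's exact signature differs between Wwise integration versions:

```cpp
// Sketch of the teleport hook: the fade itself is authored in Wwise (duck
// the ambience busses, delayed action after 1.5 s restores them), so the
// game code only posts the event before moving the player.
void AFactoryPlayer::TeleportTo(const FVector& Destination)
{
    if (TeleportFadeEvent) // hypothetical UAkAudioEvent* member
    {
        // Global/2D event: it ducks the master ambience rather than one emitter.
        UAkGameplayStatics::PostEvent(TeleportFadeEvent, this,
                                      /*CallbackMask=*/0, FOnAkPostEventCallback());
    }
    SetActorLocation(Destination); // the Wwise event swells the ambience back in
}
```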
Throughout the project (week 1.6, on and off, until week 2.9)
I only did the most basic work in Unreal Blueprints because the weaving factory project didn’t require me to create very complex systems. When more complex systems were needed, they were either handled by our engineer or by Faried. That said, I did manage to learn the basics while working alongside Faried, Daniel (the Unreal guru), and another designer who was not part of IMT&S.
I first wanted to make sure that every machine spawned by the machine-spawner blueprint would have sounds. This proved to be more complicated, since Stephanie and I were both working on the same script. It came with a few hiccups at the start, but we worked those out in the end. This was the part of the weaving machine blueprint that I worked on; it went through a few renditions, for example adding the sound manually to each weaving machine in the spawner blueprint, but this seemed to be the best and most efficient way of doing it.
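Expressed as C++ rather than a blueprint graph, the spawner idea looks roughly like this; AMachineSpawner, MachineClass, and MachineLoopEvent are hypothetical stand-ins for the real blueprint:

```cpp
// Sketch: instead of hand-placing sound on every machine, the spawner gives
// each machine it creates its own AkComponent and loop.
void AMachineSpawner::SpawnRow(int32 Count, float Spacing)
{
    for (int32 i = 0; i < Count; ++i)
    {
        const FVector Location = GetActorLocation() + FVector(i * Spacing, 0, 0);
        AWeavingMachineActor* Machine =
            GetWorld()->SpawnActor<AWeavingMachineActor>(
                MachineClass, Location, GetActorRotation());
        if (!Machine) continue;

        // Each machine becomes its own Wwise game object, so every loop is
        // positioned (and offset) independently of the 2D factory ambience.
        UAkComponent* Ak = NewObject<UAkComponent>(Machine);
        Ak->AttachToComponent(Machine->GetRootComponent(),
                              FAttachmentTransformRules::KeepRelativeTransform);
        Ak->RegisterComponent();
        Ak->PostAkEvent(MachineLoopEvent);
    }
}
```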
I had a lot of difficulty even getting the sounds to play in VR, even though I had created a VR template project. It seemed to work differently from my test project: sounds weren’t spatialized properly or weren’t playing at all. The problem was most likely that we had multiple “playable” characters in the level, or perhaps I didn’t entirely understand how the AkGameObjects were supposed to be used. I was more familiar with Unity, and to be honest, everything felt a bit simpler there, so I had to adjust my approach. For now, using “Get World Location” from the AkGameObject seemed to work.
The other approach, attaching the AkGameObject to the event, seemed to work fine in the Tweeduuster project, but that project was not in VR, so it might have something to do with that.
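The workaround, in sketch form: post the event at an explicit world position read from the component, instead of relying on the attachment. UAkGameplayStatics::PostEventAtLocation does exist, though its parameter list varies by integration version, and the surrounding names here are hypothetical:

```cpp
// Sketch of the VR workaround: take the AkComponent's world position
// ("Get World Location" in blueprint terms) and post the event there.
void AWeavingMachineActor::PlayLoopAtFixedPosition()
{
    const FVector Where = AkComponent->GetComponentLocation();
    UAkGameplayStatics::PostEventAtLocation(
        MachineLoopEvent, Where, GetActorRotation(),
        TEXT("Play_MachineLoop"), /*WorldContextObject=*/this);
}
```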
I had also wanted the shooter sounds I mentioned earlier to match up with the animation. I had done this before in Unity, where it was an easy setup; here it was a little more complicated, but I nonetheless got it to work. For this I used something called notifies: markers that can be placed on a specific frame of an animation and then referenced later, for example in a blueprint.
One issue, however, was that the animation we had for the weaving machines was around 3000 frames long, which is a ridiculous amount. I had to place the notifies on the correct frames, and that was tedious to get right.
This is an example of some of the frames. I believe the animation was so long because of the occlusion maps of some of the textures on the gears, but I was not in charge of that.
With these notifies I could call functions in the script for the sounds to play at their specific locations, like this:
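For illustration, the same notify can be sketched as a custom UAnimNotify subclass in C++ (names are hypothetical; newer engine versions add an extra event-reference parameter to Notify, and PostEvent's signature varies by Wwise version):

```cpp
// Sketch: a notify placed on the shooter frames of the machine animation
// posts the Wwise event when that frame is reached, keeping the sound
// perfectly in sync instead of relying on a guessed interval.
#include "Animation/AnimNotifies/AnimNotify.h"
#include "AkGameplayStatics.h"
#include "AkAudioEvent.h"
#include "ShooterAnimNotify.generated.h"

UCLASS()
class UShooterAnimNotify : public UAnimNotify
{
    GENERATED_BODY()
public:
    virtual void Notify(USkeletalMeshComponent* MeshComp,
                        UAnimSequenceBase* Animation) override
    {
        if (!MeshComp || !ShooterEvent) return;

        // Post on the owning actor so the click comes from *this* machine.
        UAkGameplayStatics::PostEvent(ShooterEvent, MeshComp->GetOwner(),
                                      /*CallbackMask=*/0, FOnAkPostEventCallback());
    }

    UPROPERTY(EditAnywhere) UAkAudioEvent* ShooterEvent; // assumed event asset
};
```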