Hi, I’m Carolina! In this devlog, I’ll outline my progress throughout the Enschede Weaving Factory project as a designer and co-team lead.
The Enschede Weaving Factory project’s aim was to create an exhibit for De Museumfabriek that showcased the conditions in which people worked in weaving factories during the 1920s-1930s. The project was passed down from the previous year by Andreas Ioannou, Kevin Nijkamp, and Trim Vezveja as part of the IMT&S program, providing our team with a solid starting point: animated weaving machine models, worker models, and research. Building on that, our team’s aim was to develop a VR experience for the museum. The final product will be featured at De Museumfabriek to give visitors a historically accurate and immersive view of working conditions during that time.
The client for this project was Rob Maas, who acted as the primary point of contact between the team and Edwin Plokker, curator of De Museumfabriek.
The team consisted of ten members:
| Name | Team Role | Specialties | Portfolio |
|---|---|---|---|
| Carolina Bertoncello Machado | Team Lead, Designer | Narrative Design, Level Design, Documentation | https://carolina-bertoncello-machado---game-des.webflow.io |
| Chris Peters | Artist | | https://www.artstation.com/chrispeters99 |
| Faried Elawady | Team Lead, Artist | Technical Art, Asset Creation, Project Overview | https://www.artstation.com/premadness |
| Jelle Boelen | Engineer | Code, Tooling & Infrastructure | https://technicjelle.com |
| Julia Calis | Designer | | https://juliacalis.wixsite.com/home |
| Nataliia Sviridenko | Team Lead, Designer | Research & Narrative Design, Sound Design, Documentation | https://nataliia-sviridenko.webflow.io |
| Senne Oogink | Artist | Character Art, Rigging, Animation | https://www.artstation.com/senneoogink1 |
| Sorana Verzes | Artist | Animation, Mocap, General | https://soranaverzes.artstation.com |
| Stephanie Extercatte | Artist | Unreal Engine, General 3D Art | https://www.artstation.com/temaeya |
| Wouter Meermans | Designer | Sound Design, Coding, UI/UX | https://wmeermans.netlify.app |
In the first phases of the project, we settled on the design thinking research methodology to create structure and flow right from the start. The design thinking model consists of five stages: Empathize, Define, Ideate, Prototype, and Test. We began the Empathize stage by contacting the clients, Rob Maas and Edwin Plokker, to better understand the problem statement and the desired outcome of the project. Once the project’s aim was in view, we began the Define stage, which consisted of empathy maps, managing team roles, writing a MoSCoW model, and defining the scope. Moving forward, we got the team together to ideate concepts for the final deliverable, which we then filtered, voted on, and pitched to the client to decide our next steps. This led into Prototyping, which proved challenging at certain points and caused delays in the development process. Lastly, there was the Testing stage, which consisted of a mixture of A/B testing and regular feedback sessions with the target audience.
All of these stages will be described in detail throughout this devlog.
Design thinking model.
For the first four weeks, the team spent time researching not only historical facts but also the software we would use for the project. Our engineer got to work learning Perforce as a replacement for Git, while the designers and artists alike learned how to set up and work with motion capture suits. Likewise, the entire team learned Unreal Engine 5, the engine used for the project. Additionally, certain members of the team learned Marvelous Designer and Wwise during this time.
An empathy map was created once the target audience was established based on average museum visitor ages. We determined that the typical visitor was around 15 years old, visiting either with their parents or on school trips.
Empathy Map made by Nataliia Sviridenko and me.
Julia created the following trailer to showcase the factory created during this project.
Trailer for the experience made by Julia Calis.
The final product delivered at the end of the project was a live scene of a 1920s-1930s weaving factory with workers made using MetaHuman Creator, Marvelous Designer, and motion capture suits. The factory is decorated with assets such as lockers, crates, and boxes, as well as rows of fully animated machines. According to our research, each worker was in charge of approximately four machines, and the factory was built to scale, meaning the worker-to-machine ratio shown in the project is as accurate as we could make it. The loud sounds of the machines can be heard throughout the factory, along with the occasional cough from workers. We wanted the focus of the experience to be on realistic atmosphere, with heavy fog and low lighting in the factory representing the conditions the workers endured at that time.
Once in the experience, the user’s only means of interaction is the VR headset itself. We removed controllers so as to avoid steep learning curves and long queues at the exhibit. The user can teleport between teleportation points within the factory by fixing their gaze on the point they want to teleport to for approximately 2-3 seconds.
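The gaze-to-teleport mechanic boils down to a dwell timer: accumulate time while the gaze stays on the same point, reset when it moves, and fire once a threshold is crossed. In the project this lives in Unreal Blueprints, but the logic can be sketched in plain Python (class and method names here are illustrative, not the project’s actual ones):

```python
class GazeTeleporter:
    """Tracks how long the user's gaze rests on a teleport point
    and fires a teleport once a dwell threshold is reached."""

    def __init__(self, dwell_seconds=2.5):
        self.dwell_seconds = dwell_seconds
        self._current_target = None
        self._elapsed = 0.0

    def update(self, gazed_target, dt):
        """Call once per frame with the currently gazed point (or None).
        Returns the target to teleport to, or None if the dwell
        threshold has not yet been met."""
        if gazed_target != self._current_target:
            # Gaze moved to a new point (or off any point): reset the timer.
            self._current_target = gazed_target
            self._elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self._elapsed += dt
        if self._elapsed >= self.dwell_seconds:
            # Trigger and reset, so the next jump needs a fresh gaze.
            self._elapsed = 0.0
            self._current_target = None
            return gazed_target
        return None
```

Resetting the timer whenever the gaze shifts is what prevents accidental teleports from a quick glance across the factory floor.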
Lastly, a trailer was created to showcase the project, including both scenes outside of the experience, as well as some inside.
Due to time constraints and limitations within the project, we were not able to fulfill one of the original goals we set out for ourselves: to create narratives for the users to empathize with workers and experience a day in their lives. This would have been done with the use of the “microstories” created during the project using motion capture suits. These were short (20-40 second) snippets of worker interactions that would have brought more life to the factory. Instead, we only included idle animations for the workers, and opted for a live scene, rather than small, narrative-driven stories.
An idea that sprouted from the microstories was the use of spatial audio to guide the user through the factory. This would include audio cues for when a microstory was about to begin, guiding the player’s attention in the right direction. This was not implemented in the final product; instead, we opted to use spatial audio only when looking at machines, slightly decreasing the volume of every machine except the one the user is focused on.
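The focus-ducking behavior can be summarized as a simple per-machine volume map: the gazed-at machine stays at full volume while the rest are ducked by a fixed factor. A minimal sketch, assuming a hypothetical `duck` multiplier rather than the project’s actual mix values:

```python
def machine_volumes(machine_ids, focused_id, duck=0.6):
    """Return a volume multiplier per machine: the machine the user is
    looking at stays at full volume, all others are slightly ducked
    so the focused machine stands out without silencing the factory."""
    return {m: (1.0 if m == focused_id else duck) for m in machine_ids}
```

In practice these multipliers would feed into the audio engine (Wwise, in this project) on top of the usual distance-based attenuation.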
Another quality-of-life feature that we chose not to implement is a settings menu: a scene where the user or museum staff can set teleportation variables such as the time it takes before teleportation is initiated, the fade duration during teleportation, and the teleport feedback/confirmation. This would ensure an adaptive product, letting the museum quickly change settings based on visitor feedback and preferences.
A blueprint was set up to spawn a specific character with randomized clothing pieces.
Where we would want to expand is having the blueprint spawn a randomized MetaHuman character and then fit it with random clothing from a list of our own created assets. This would add more diversity to the workers and make the factory feel more dynamic.
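The core of that expansion is just random selection over asset lists: pick a body, then pick one clothing asset per slot. A sketch of the idea in Python, with entirely made-up asset names standing in for the project’s MetaHuman and Marvelous Designer assets:

```python
import random

# Hypothetical asset identifiers, not the project's actual asset names.
BODIES = ["worker_m_01", "worker_m_02", "worker_f_01"]
CLOTHING = {
    "top": ["shirt_plain", "shirt_striped", "vest_wool"],
    "bottom": ["trousers_brown", "trousers_grey"],
    "head": ["flat_cap", "headscarf", None],  # None = bare-headed
}

def spawn_random_worker(rng=random):
    """Pick a random body, then one random clothing asset per slot."""
    return {
        "body": rng.choice(BODIES),
        **{slot: rng.choice(options) for slot, options in CLOTHING.items()},
    }
```

Passing a seeded `random.Random` instance would make a given factory population reproducible, which is handy when testing the scene.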
During the project we designed and recorded a set of microstories where characters within the scene communicated with each other and had conversations using gestures. These gestures were researched in-depth by our designers with the help of the museum, as they were unique to the workers of that time.
Our aim was to create animation blueprints where the character animations would react to other characters to create a story and interactions between NPCs within the scene. A character animation would play after the previous animation was completed. This included an open parameter of an offset to have the animation start earlier or later to create a more natural conversation where the character reacts earlier or with a delay to allow for thinking. Using this, our designers could then create stories and natural conversations between the characters.
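The offset parameter described above effectively turns a conversation into a timeline: each clip starts when the previous one ends, shifted earlier or later by its offset. A small Python sketch of that scheduling logic (the tuple format and function name are assumptions for illustration, not the animation blueprint’s interface):

```python
def schedule_conversation(clips):
    """Given (name, duration, offset) tuples, compute absolute start times.
    Each clip starts when the previous one ends, shifted by its offset:
    a negative offset makes the character react early (overlapping the
    previous animation), a positive one adds a thinking pause."""
    timeline = []
    prev_end = 0.0
    for name, duration, offset in clips:
        start = max(0.0, prev_end + offset)  # never start before the scene does
        timeline.append((name, start))
        prev_end = start + duration
    return timeline
```

With this, a designer tunes only per-clip offsets and the whole exchange stays consistent if any animation is re-recorded with a different length.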
Proposed as an extra deliverable, we wanted an addition where museum visitors could inspect models such as the weaving machines in a 3D model viewer within VR. This would allow them to get up close and personal with the machines and understand their inner workings, further educating visitors alongside the museum’s other machine exhibits.
We also initially planned on having a screen in the queue building up to the exhibits where waiting visitors could view a worker model performing the different gestures. With this, visitors could further understand the gestures and their meanings before jumping into the VR experience, allowing them to test their knowledge with the microstories.