Cafe Oto Garden Game Made with Unity – Reflection

After today’s lecture with Ingrid about learning and using FMOD, I found it hard to grasp the exact context of the reasons to use FMOD. Was the point that you encode the sound and present it to the game’s designers already processed? I should take some time by myself to experiment more, weigh the benefits and drawbacks, and decide whether I want to use this platform.
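As a first step for my own experiments, here is a minimal sketch of what the Unity side of that workflow looks like: the sound is authored and processed in FMOD Studio, and the game script only fires the finished event. This assumes the standard FMOD Unity integration, and the event path is a placeholder, not something from our project.

```csharp
using UnityEngine;

// Minimal sketch of the FMOD split: all mixing and effects live in the
// FMOD Studio project; the game script only triggers the finished event.
// Assumes the FMOD Unity integration; the event path is a placeholder.
public class FootstepTrigger : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Plays the pre-designed event at this object's position.
        FMODUnity.RuntimeManager.PlayOneShot("event:/Player/Footstep",
                                             transform.position);
    }
}
```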

I do find the middle ground between DAW and engine a strange place to edit and encode. I would rather see the visual stimulus while deciding how the audio should behave in the space, rather than working against a timeline. Likewise, when using effects I would prefer to create and apply them in a DAW.

But after playing this Cafe Oto game I have understood a little more about the design of sound in a game. Although it is a game, the premise is that it’s an instrument: you run around activating sounds, creating a loop with your character to build complex patterns. I found the coding behind how this works captivating and engaging for the interactive user. It had me wondering whether they used FMOD, and I assume so. Seeing other kinds of interactive media with coded sound has given me more context for my game and the application of my audio within it.
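I don’t know how the Cafe Oto game is actually built, but one plausible way to get that feel of triggered sounds locking into a loop is to quantise every activation to the next beat of a master clock. A rough guess at the idea in Unity C#; this is my own sketch, not their code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Guess at a loop-instrument mechanic: sounds the player activates are
// queued and only fired on the next beat of a global clock, so everything
// locks into a musical grid. Not the Cafe Oto game's actual implementation.
public class BeatQuantizedTriggers : MonoBehaviour
{
    public float bpm = 120f;
    public AudioSource[] pads;  // one AudioSource per activatable sound
    private readonly Queue<AudioSource> queued = new Queue<AudioSource>();
    private double nextBeat;

    void Start() => nextBeat = AudioSettings.dspTime + 60.0 / bpm;

    // Call this when the character runs over pad i.
    public void Activate(int i) => queued.Enqueue(pads[i]);

    void Update()
    {
        // Just before each beat, schedule queued sounds to start exactly on it.
        if (AudioSettings.dspTime >= nextBeat - 0.1)
        {
            while (queued.Count > 0)
                queued.Dequeue().PlayScheduled(nextBeat);
            nextBeat += 60.0 / bpm;
        }
    }
}
```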

Week 18 Pre-Production Follow-Up Meeting Reflection

Halfway through the lecture, we were given time to speak as groups about where our group currently stands, how we are operating, and what to plan for next week’s tutorial. As Will and I had already had a group meeting with the MA students before the lecture, we were very happy with where we stood in the project. As Jingya didn’t attend that meeting, we shared the feedback on the music and the current stance on sound effects with her.

We communicated to Jingya that although the game isn’t at the right development stage to start finalising sound effects, we need to continue recording and experimenting regardless. Will and I agreed that we shouldn’t wait for the designers to tell us what they want; rather, we should create and then receive feedback. The process of creating sounds should be active, not a matter of waiting to be told what is needed.

We all agreed that this week Will and I will create some music. Jingya will begin creating sound effects, and I will also experiment with ambience and field recordings using contact mics and my Zoom H1N.

I am still waiting on the PDF and information we asked the MA group for in the last meeting.

Whittington, W. (2007) ‘Sound Capture to Construction: Building the Lexicon of Sound Design for Star Wars’ – Reflection

The article begins by arguing that realism in sound is not the same as cinematic realism, which is governed by our expectations and perceptions of cinematic worlds rather than by real-world experience. Likewise, recording sound effects is not just recording but the construction of sounds. Sound for film isn’t only about capturing; it’s about performing, as Ben Burtt did for the lightsaber.

The article also describes sound and image construction working in unison as a process of representation and abstraction, which I agree with. It is the balance between what is seen and the idea of what it could be.

When designing Star Wars, they wanted to make sure the film and its world felt reliable, realistic and authentic. Before this, most sci-fi soundscapes used the theremin and other electronic sounds, which seemed clichéd and took the context out of films. So when designing the sound for this film, Ben Burtt decided to go against that and make something more realistic and industrial: they wanted it to sound like a “used future”.

The sound design in the first scene of Star Wars uses the laser sounds for more than just shooting: they convey a sense of space, temporality, and the depth and continuity of the environment. You can hear the laser shots you see on screen alongside others off-screen, which creates the sense of a world that is alive and exists beyond the limits of what we see.

Ben Burtt also shares his thoughts on field recording: it’s more than recording a source or a sound; it’s also the relationship between that sound and the space around it. The air has a lot to do with it. He calls it a “perfume of sound”.

Motifs are also mapped onto the picture in sound: good vs evil, love and romance, the industrial future. This allowed sounds to portray these motifs as well, with heavier, meaner-sounding vehicles using low end and harsher timbres for the dark side’s ships, and lighter, smoother sounds for the rebels.

The rebel ships were also sound-designed to feel jankier, put together like scrap, while the Empire’s sounded more high-tech and well constructed.

He also pitched the lightsabers to different tones: Darth Vader’s sits in a minor key, while Ben Kenobi’s is more of a C major chord, so when they meet to fight there is disharmony.

With R2-D2, they managed to convey emotion and communicate without speech, using sounds and bleeps, performing with sound the way a human performs with language. The tone, the speed, the pitch and the performance all made a difference in understanding R2-D2.

Overall, this article offers critical reflection on sound design’s ability to convey messages and meanings: motifs, space, geography, time, power, struggle, good and evil, almost anything to an extent. It also touches on our ability to passively relate those sounds to common contexts in our society, and on how the balance between the abstract and the real creates the buffer zone in which sound design exists.

Wednesday 23rd – Synthesiser Session / Creating Music for the Video Game

Learning synthesis and the modular was already one of my goals this term, and it fits perfectly with the music being asked for in this video game. I booked the synth lab and started practising on and learning the machines. I began on the Minilogue, which is very simple and has built-in presets to play with, and recorded long sessions of exploring these sounds into Logic, which I will refine further.

I also began watching videos on the Eurorack. I wanted to record some progressive, ambient sounds, so I watched a video explaining the terminology of modular synthesis and what everything means: VCO (voltage-controlled oscillator), VCA (voltage-controlled amplifier) and CV (control voltage), to understand the fundamentals of modular synthesis.
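To fix the terminology in my head, I also wrote out the same signal flow as a toy Unity script: a VCO (sine oscillator) feeding a VCA whose gain is swept by a looping CV ramp. This is purely my own illustration of the concepts, not anything from the video; attach it to a GameObject with an AudioSource to hear it.

```csharp
using UnityEngine;

// Toy modular patch: VCO -> VCA, with a looping CV ramp as the modulator.
// My own illustration of the terminology; the parameters are arbitrary.
public class ToyModularPatch : MonoBehaviour
{
    public float vcoFrequency = 110f; // VCO pitch in Hz
    public float cvRate = 0.25f;      // CV ramp cycles per second
    private double phase, cvPhase, sampleRate;

    void Start() => sampleRate = AudioSettings.outputSampleRate;

    // Runs on the audio thread; fills the output buffer sample by sample.
    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int i = 0; i < data.Length; i += channels)
        {
            // VCO: a plain sine wave.
            float vco = Mathf.Sin((float)(2.0 * Mathf.PI * phase));
            phase += vcoFrequency / sampleRate;
            if (phase >= 1.0) phase -= 1.0;

            // CV: a 0..1 ramp that loops, standing in for an envelope/LFO.
            float cv = (float)cvPhase;
            cvPhase += cvRate / sampleRate;
            if (cvPhase >= 1.0) cvPhase -= 1.0;

            // VCA: the CV scales the oscillator's amplitude.
            float sample = vco * cv * 0.2f; // 0.2f keeps the level safe
            for (int c = 0; c < channels; c++)
                data[i + c] = sample;
        }
    }
}
```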

I began playing around with the Eurorack, deciding what to do and how to do it. I found modular synthesis very organic and progressive with the patching. I sort of understood what was happening, but it was almost as if the audio was leading me down a path rather than the other way around. I created a few ideas and recorded for around thirty to forty minutes to get a varied range of sounds to use for composition in Ableton for the video game music.

Here is a photo of a patch I made.

Towards the end, I found this Eurorack didn’t quite offer what I wanted, and it felt like it was back to the drawing board. Perhaps instead of using synthesis as the main compositional tool, I can sample what I’ve recorded into Ableton and create an actual piece of music that resembles what I’ve been asked for.

I also discovered VCV Rack towards the end of my three-hour session in the synthesis room: a free, open-source software modular rack to play around with. I will keep using it to practise my skills and to bounce any sounds and patches I make into Ableton for my music submission.

After my session, I ended up with sounds from the Minilogue and the Eurorack modular synthesizer, as well as VCV Rack, further synthesis knowledge and a plan for what’s coming. I now need to combine what I’ve sampled from the synth bench in Ableton, perhaps record some ambient sound design effects using skills I’ve learnt in previous modules, and look deeper into sound and music for games.

Video Game Research – Florence

Florence was one of the names given to us as a reference for the gameplay, not necessarily the music, but I felt it was useful to start researching other games to understand video game music. It’s a whole different ball game from conventional music.

I’ve been listening to the soundtrack and it’s very orchestral, perhaps wrong for the style of our game, which they described as having an 80s-themed, futuristic aesthetic. I had ideas of recording my housemate Daniel playing his violin, as he is grade eight on the instrument; he also plays piano to a high level, and I would simply ask him to improvise over some drums I had made. But I feel this is perhaps the wrong direction for this design, and I need to look elsewhere.

The soundtrack’s composer is Kevin Penkin, who has scored a few anime series, and I felt his music matched the art style perfectly, as well as the game’s themes of adulthood, relationships and finding yourself in the world. It is very simple but evokes the emotions the game wants you to feel while playing. The music itself isn’t interactive, but it conveys the message of the game, which is something I really want to achieve in our group’s work. I also need to spend time on synthesisers; watching Will operate the Moog Matriarch in our last session has made me want to attempt it as well.

Second Meeting – 23rd February

Today we had our second meeting. Will and I came prepared with the sounds we had made the day before, ready to showcase them and receive feedback. At this point we had still been given very little direction on what the audio should be and were left to our own devices, so we have taken it towards a synthwave, vintage-1980s soundscape.

We showed our music and they thoroughly enjoyed it. Of all the pieces, they most enjoyed number four, number one and number seven, the last one. They said the other songs were perhaps too loud and complex for a video game, with too much going on, but they loved those three and thought they fitted level one perfectly.

Will and I agreed; I felt the same, but was happy to receive the feedback. I have the synth bench booked later this evening and plan to create a few ideas, as well as teach myself synthesis on these machines, as I am not yet entirely comfortable with the process.

Will also said he would create a few more things he had in mind at home, including some recordings of himself playing piano. We also agreed on some soundscapes to go along with the game: even though it is a 2D minigame, it can still have an ambience.

The MA games design team then explained what had happened since the last meeting. They showed us the updated interface and graphics sent over from the MA Animation and Illustration courses, and we were happy to see the design fit well with our music. They then showed a few mock-up ideas of the artwork and we discussed the plan further.

I tried to ask about their schedule, but they insisted they weren’t sure and hoped to finish a few more levels and mini-games. Their group leader explained that they initially wanted to create five levels but now feel they will only manage three for the hand-in. For our next step, Will and I felt it was important to know how many levels there will be and the motifs, emotions and events of each, for the sound design and music-making.

They gave a loose outline, as no level except the first has been finalised, but as it stands it’s:

Level 1: Language – symbol language and combining symbols. Feedback screen / operation screen. Collect elements to unlock the brain feedback section.

Level 2: Visuals, the peak. Colours?

Level 3:

Level 4: High-speed BPM, tempo.

Level 5: Listening, music & sound.

These are the current rough ideas for each level, the sound behind it, and how they wish to continue. Will and I found this valuable for our music composition, as it is very similar to what we learnt last term in our Sound for Screen module; these things overlap. We really want to use sound to convey motifs, themes and emotions rather than just have music playing. As our game isn’t very immersive (it’s not first-person, third-person or VR), it’s difficult to create the same immersion as those formats.

Because of the nature of the game, where the levels are about creating different parts of a robot’s brain as it slowly develops consciousness, we decided that the music should match what is happening throughout the game. The development of the brain and its emotions can be reflected in the soundtrack and sounds of each level: the music could start atonal and develop into more complex compositions as the character’s brain becomes more complete over the levels.
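If we do end up using FMOD (still undecided), one way to implement this would be to author the score as layers in a single music event and expose a development parameter that we push up as levels complete. A hedged sketch; the event path and parameter name are hypothetical, not from our project:

```csharp
using UnityEngine;

// Sketch: drive a hypothetical "BrainDevelopment" parameter on an FMOD
// music event, so the score moves from atonal fragments (0) towards a
// fuller, more complex composition (1) as the robot's brain develops.
public class BrainMusicController : MonoBehaviour
{
    private FMOD.Studio.EventInstance music;

    void Start()
    {
        // "event:/Music/BrainTheme" is a placeholder event path.
        music = FMODUnity.RuntimeManager.CreateInstance("event:/Music/BrainTheme");
        music.start();
    }

    // Call after each level, e.g. SetDevelopment(2, 5) once level 2 of 5 is done.
    public void SetDevelopment(int levelsComplete, int totalLevels)
    {
        music.setParameterByName("BrainDevelopment",
                                 (float)levelsComplete / totalLevels);
    }

    void OnDestroy()
    {
        music.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        music.release();
    }
}
```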

We all thought this progression was a great idea, and it has now become the premise of the game’s sound design and score. Jingya unfortunately couldn’t attend today as her train was cancelled, but we will relay the information to her. We finished the meeting on a positive note, with more influences for creating music and sounds. We were also given another music reference, the film Eternal Sunshine of the Spotless Mind, which I will watch to understand a little more. I also asked about sound effects and when the correct time to begin would be, given that we don’t yet have a developed game that needs them. Will also spoke about phonaesthetics, which I hadn’t heard of before: the study of why sounds are satisfying, and of creating sound effects that give the listener a satisfying feeling when they click or move things around. This will require further research, as will making sounds and music for games generally; I will be reading a few books to inform myself further.

Before everyone left, I asked the group to email us the new artwork, the new PDF they displayed, and a screen recording of the demo being played, so we can attach music and see whether our soundtracks work for the game. We planned to meet again next week at the same time.

Tuesday 22nd – First Production Session

I managed to get hold of Will on Monday the 21st and caught him up on the project, explaining the initial meeting and what had been said about the game. I sent him the PDF the game designers showed us and the gameplay reference, which was Florence. We then decided to meet the following day, Tuesday the 22nd.

We booked the composition room from 10am to 5pm on Tuesday and began discussing and playing around with sounds.

We set up the Moog Matriarch and Prophet-6 in the composition lab and played with the arpeggiators. We looked again at the PDF and the artwork inspiration and decided that a vintage 80s theme was essentially the idea being presented.

Will also brought his TR-08, and we MIDI-synced the instruments together so everything would be in time. We then recorded long, organic loops to showcase at the meeting the following day, Wednesday the 23rd. We ended up with a lot of tracks and lengthy performances to bounce and edit, but it served as a rough mood board of ideas.

We then selected around seven excerpts to bounce to WAV to showcase for our meeting the next day.

I have also decided to learn synthesis for my own benefit. I have booked the synth bench in its newly located space, M113, for Wednesday evening to continue creating sounds and ideas. At first I was concerned about the sounds not fitting, but perhaps the approach, as Ben Burtt took with Star Wars, is simply to make a lot of sounds and music first.

Sound Design for Virtual Reality | ZDNet – Reflection

Jacqueline B. is the founder and CEO of Q Department, a music and sound design studio running since 2003. The story of how they got into VR is that they ended up at the Sundance festival and witnessed some VR pieces, and they haven’t looked back since.

She says it’s a new medium; she has been shown VR horror films and spoke of VR as something crazy and captivating. It’s a very compelling medium for sound.

The Mars VR bus experience:

They were approached by a production company whose brief and idea was to create a new generation interested in exploring space and science, and their role was to bring this alive. In the VR bus experience, you get on a bus and, after a while, the windows become screens: the bus is transformed into a Mars vehicle.

They used spatial audio in this experience to create a deeper illusion of immersion.

Spatial audio simulates how you listen in real life, and when done properly your brain believes you are somewhere you’re not. VR with good sound is almost indistinguishable from reality; with the visuals and sound effects combined, it can create a new level of reality.
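Our project is 2D rather than VR, but for reference this is roughly all it takes to make a sound positional in plain Unity, no middleware needed. A minimal sketch with illustrative values:

```csharp
using UnityEngine;

// Illustrative settings for a positional sound in Unity. With
// spatialBlend = 1 the engine pans and attenuates the source in 3D
// relative to the AudioListener, mimicking real-world localisation.
[RequireComponent(typeof(AudioSource))]
public class SpatialSource : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                          // fully 3D (0 = flat 2D)
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural distance falloff
        source.minDistance = 1f;                           // full volume within 1 unit
        source.maxDistance = 25f;                          // fades out towards 25 units
        source.dopplerLevel = 1f;                          // pitch shifts with motion
        source.loop = true;
        source.Play();
    }
}
```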

She goes on to say that audio quality is important for immersion, and that it’s not just an audiophile concern: something I didn’t consider at first, but which now seems obvious.

Spatial sound is powerful as a medium. She speculates on a rise of spatial audio and of content made specifically for it; it will be an interesting medium to work with. This interview came out in 2019, and now I would say her ideas were correct: spatial audio is in Apple’s AirPods and pushed across Apple’s streaming platforms and operating systems, and Apple usually pioneers such features into popularity. I can imagine this becoming standard, especially with the metaverse happening; sound is a key aspect of immersion in these digital realms.

Nailing storytelling, and nailing new ways of telling stories, is the future of VR; if the content is compelling enough, everything will fall into place.

It makes me think about how I can incorporate this into the games design audio I’m working on. Even though it isn’t VR, what can I take from this to add to it?

Collins, K. (2013) ‘How Is Interacting With Sound Different to Listening to Sound?’ – Reflection

I found this theoretical excerpt about sound and interactivity in video games captivating, and I have read and reflected on a few of its points. Overall, it raises interesting questions: what makes a user interact with sound, and what types of listening occur while interacting with audio?

The article begins by stating a hypothesis: that interacting with sound is fundamentally different, in terms of our experience, from listening without interacting, and that there is a distinction between listening to sound, evoking sound already made (by pressing a button, for instance), and creating sounds (making new ones). I completely agree with this point; interacting with sound leads to a whole different experience. Interacting means you are almost a co-author of what is about to happen, whether you choose to press this button or perhaps button two. The spatialisation of the sound also brings immersion.

The excerpt also reflects on sound effects and music in an interactive context, and gives us a quote from Walter Murch:

“sound effects fall midway between music and noise.”

This was something I wanted to understand a little more, and the excerpt continues with an interview with a games designer. It describes a game with bees in which the buzzing was made in time with the music, turning the whole world into music and making the sound effects part of the song. Every ambience in the game is rhythmic: wood creaks, crickets and all the insects make a beat, and everything is localised, so it’s spatialised.

The excerpt also notes the use of pitch to convey weight and size in characters. For example, in Mario Bros., the smaller enemies make a higher, squeaking sound, in contrast to an enemy the size of Bowser, which makes a deeper noise.
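That size-to-pitch mapping is simple to reproduce: share one clip across enemies and pitch it by scale. A small sketch of the idea in Unity, my own illustration rather than anything from the excerpt:

```csharp
using UnityEngine;

// Sketch of the size/pitch trick: one shared vocal clip, pitched down
// as a character's scale goes up, so bigger enemies sound deeper.
public class SizePitchedVoice : MonoBehaviour
{
    public AudioSource voice;         // plays the shared squeak/grunt clip
    public float referenceScale = 1f; // scale at which the clip is unpitched

    public void Grunt()
    {
        // Inverse relation: double the size -> half the playback pitch.
        voice.pitch = referenceScale / transform.localScale.y;
        voice.Play();
    }
}
```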

I have found sound for the screen and sound for video games to be very similar, and the Sound for Screen module will come in very helpful when using sound and its motifs in a game world.

Games Design Document Version One (Rough)

I have completed as much of the document as possible for now. I don’t have enough information to finish the whole thing, and I’m sure that isn’t needed yet, but I filled out what I could. We don’t have a lot planned right now, and I’m unsure where Will is. At the moment they have only asked for music, so I’m going to make five short demo loops to present next week. I will update the document every week.

A Narrative & Adventure Game Combined – Mental Health / Human Body Theme Park

Dereck de Abreu Coelho.

Revision: 0.0.0

GDD Template Written by: Benjamin “HeadClot” Stanley

License 

If you use this in any of your games. Give credit in the GDD (this document) to Alec Markarian and Benjamin Stanley. We did work so you don’t have to.   

Feel free to Modify, redistribute but not sell this document.

TL;DR – Keep the credits section of this document intact and we are good and do not sell it.

Overview

What sets this project apart?

Story and Gameplay

Assets Needed

Schedule

– <Objective #1>

– <Objective #2>, <etc.>

Overview

Theme / Setting / Genre

The game’s current theme is how the brain works: educational and playful, designing the human body as a theme park to show how the body works.

Core Gameplay Mechanics Brief

The game is going to be similar to another called Florence: mini-games with an overall narrative as the player moves through different chapters of the game. Very easy, with no winning or losing.

Targeted platforms

Desktop computers (Mac/PC); mobile? (Apple/Android?)

Project Scope 

– <Game Time Scale>

Four weeks until the MA hand-in, followed by crits for us.

– <Team Size>

Mingyi Liu – Games Design & Arts

Yunke Wang – Games Design & Engineering

Ziyu Yun – Games Design & Arts

Shih Kai Chuan – Narrative Design & Arts

Anlin Liu – Games Design & Arts

Peiwei Luo – Games Design

Dereck De Abreu Coelho – Music/Sound Supervisor

Will – Sound?

Jingya – Sound Effects

Influences (Brief)

– <Influence #1, #2, #3, etc>

The game is heavily influenced by a game called Florence, which shares similar graphics and gameplay.

The Elevator Pitch

This game lets the user enjoy a laid-back experience that is educational and captivating. Guide your character on screen and transform the human body while deciding what amounts of chemicals are needed to achieve success.

Project Description (Brief):

<Two Paragraphs at least>

<No more than three paragraphs>

Project Description (Detailed)

<Four Paragraphs or more If needs be>

<No more than six paragraphs>

What sets this project apart?

– <Reason #1, #2, #3, etc.>

Core Gameplay Mechanics (Detailed)

Game mechanics determine how the player interacts, the level of complexity, and even how easy or difficult the experience is.

– <Core Gameplay Mechanic #1, #2, #3, etc. >

– <Details>

/Describe in 2 Paragraphs or less/

– <How it works>

/Describe in 2 Paragraphs or less/

Story and Gameplay

Story (Brief)

<The Summary>

Story (Detailed)

<Go into as much detail as needs be>

<Spare no detail>

<You can use Mind Mapping software to get your point across>

Gameplay (Brief)

<The Summary version of below>

Gameplay (Detailed)

<Go into as much detail as needs be>

<Spare no detail>

<Combine this with the game mechanics section above>

Assets Needed

– 2D

– Textures

– Environment Textures

– Heightmap data (If applicable)

– List the data required – Example: DEM data of the entire UK.

– Etc.

– 3D

– Characters List

– Character #1, #2, #3, etc.

– Environmental Art Lists

– Example #1, #2, #3, etc.

– Sound

Music: five tracks, one for each level

– Outside

– Scene 1

– Scene 2 

– Scene 3

– etc.

– Inside

– Scene 1

– Scene 2

– Scene 3

– etc.

– Sound List (Player)

– Character Movement Sound List

– Example 1, Example 2, etc. 

– Character Hit / Collision Sound list

– Example 1, Example 2, etc.

– Other sounds

– Example 1, Example 2, etc.

– Animation

– Environment Animations 

– Example, etc.

– Character Animations 

– Player

– Example, etc.

– NPC

– Example, etc.

– Code [optional]

– Character Scripts (Player Pawn/Player Controller)

– Ambient Scripts (Runs in the background)

– Example, etc.

– NPC Scripts

– Example, etc.

Schedule [You can add here your Trello board or similar]

– <Music>

– Time Scale

– Week 1: create demos – Week 2: receive feedback

– <Objective #2>

– Time Scale

– Milestone 1, Milestone 2, Etc.

– <Objective #3>

– Time Scale

– Milestone 1, Milestone 2, Etc.