Overview
Lyraflo is a pitch project at Carnegie Mellon's Entertainment Technology Center exploring how virtual reality and music theory can be synergized to foster musical curiosity. We define musical curiosity as a self-driven pursuit, exploration, or reflection of music. Our experiences target university students who are musically naïve but musically interested. Our goal is for our guests to emerge from the experience with exposure to a particular musical topic and a piqued interest that enables them to begin asking better musical questions about that concept and how it applies in context to their favorite songs, game scores, film soundtracks, orchestral performances, and more.

We are a team of music appreciators, lovers, and creators. Our team consists of Wizard Hsu (Programmer), Noah Kankanala (Producer and Sound Designer), Wish Kuo (Programmer), Yuji Sato (Asst. Producer and Sound Designer), and Jack Wesson (Artist and Designer). Our faculty advisors are John Dessler and Ricardo Washington.

The origins of the project trace back to April 2021, when we came together as a team bonded through a shared desire to discover how VR could be leveraged to convey music theory concepts. We were motivated to explore solutions to a pertinent unmet need within the world of musical learning: we bonded over our shared frustration at the lack of accessible and engaging music theory experiences for beginners, especially college-age individuals looking for an entry point to learn more about music in a non-academic environment. Our drive became to leverage the unique affordances of VR, such as isolation as a tactic, kinesthetic and haptic input/feedback, and spatialization, to create immersive experiences that tackled our target demographic's unmet need while simultaneously exploring the unexcavated intersection of VR and music theory.
These motivations manifested and morphed into the experience and project goals that define our project today:
Project Goal: Finding synergy between music theory and VR
Experience Goal: Fostering musical curiosity through the asking of better musical questions
Throughout the Fall 2021 semester, we created three prototypes exploring chord structures, chord progressions, and chord tonalities. Each prototype yielded a series of successes and flaws in relation to how well the experience a) conveyed the musical concept, b) leveraged the unique affordances of VR, and c) abstracted the musical concept. Our third prototype gave us a valuable opportunity to consolidate all of our learnings into a singular experience, and it stands as a capstone and testimony to all that we have learned throughout our journey navigating, charting, and discovering the crossroads between VR, music theory, and transformational design.
Prototype 1
Our first prototype explored chord structures, pitch intervals, and dissonance. We conceived the concept using a design madlib, which we perceived at the time to be helpful for quickly building a vivid mental model of what a potential Lyraflo experience could look like. The madlib was as follows:
An exploration of __music theory__ where guests __verb__ the music, with __effect__ as feedback, in the style of __game genre__.
Essentially, this was a design paradigm or formula that could be used to quickly conceptualize a prototype. Our first prototype followed the formula of:
An exploration of __pitch structures, chord structures, and dissonance__ where guests __pull__ musical notes, with __visuals and harmony__ as feedback, in the style of __a tower defense__.
In this prototype, at the moment-to-moment level, guests use a pullback mechanism to fire pitches at dissonant chords in order to resolve them. While the madlib was helpful for quickly conceptualizing an idea, it boxed in creativity and hindered experimentation by imposing a set of rigid game constraints on our experience design. We intended the game genre pillar to be a helpful tool that facilitated design by leveraging pre-existing tropes and containers. For instance, in a tower-defense game, guests are swarmed by enemies and must use a firing mechanism to destroy them. The madlib was good because it helped us identify a novel mechanic of firing pitches that felt both enriching and informative, but bad because it created expectations that placed emphasis on gameplay rather than musical conveyance. On a philosophical level, it hindered our ability to be experimental and unorthodox, because we were borrowing from conventional avenues.
The madlib lacked the framework needed to justify and determine why we were doing what we were doing, and ironically, we ended up breaking its seemingly simple rules. We were supposed to pick only one musical concept, but the final prototype ended up exploring several. This was evidence that the madlib resulted in haphazard design bounded by nebulous guidelines. It wasn't facilitating structured, careful, justified decision-making, so we ditched it and iterated toward a stronger design paradigm. More on that in a bit.
Through playtesting, we learned that the main strength of this prototype was the physical interaction and sense of embodiment that emerged from pulling back on the crossbow, and creating and firing musical notes. The real-time sonic and haptic feedback engaged guests and enabled them to develop a spatial relationship between notes, and better understand how distance and proximity influence note relationships.
We learned that the prototype explored too many domain concepts, which added complexity and confusion. A lack of focus and clarity on a singular musical topic meant that guests had to grapple with simultaneous concepts such as dissonance, pitch relationships, and chord structures all at once. We learned that chord structures were an incredibly difficult concept to convey without an understanding of rudimentary music theory. In our future prototypes, we would champion this lesson by focusing exclusively on one musical topic. We would also learn to delineate our experiences through a three-part staging system: a) an exposure stage, b) an experimentation stage, and c) an application stage. Additionally, we would go on to focus on high-level concepts such as major and minor, and progression; topics that could be understood as a sum of parts, rather than individual, granular, low-level nuts and bolts. This decision aligned with our goal of fostering musical curiosity and indirectly nudging the guest to engage with music on a more critical level outside of the experience, rather than attempting to teach them meticulous, low-level concepts in 2-3 minute play sessions.
Another shortcoming of this prototype was the confusing and complex representation of chords. The enemies or targets in the game were a combination of literal and abstract symbols: literal in that they were actual notes without proper spacings, intended to represent each unique chord structure, but abstract in the way they floated in space without a musical staff behind them. We identified this contradiction, and moving forward, made the decision to represent the musical concepts abstractly rather than literally, as this aligned with our experience goal of fostering musical curiosity. The symbols and nomenclature were not important to us; the feeling and intuition about how a musical concept felt was, and we deduced that abstraction was best for capturing and stimulating those sentiments.
Lastly, we learned about the importance of sound and visual priority. Sonically, we learned that too many sounds overwhelm guests and envelop them in chaos, resulting in confusion about where or what to focus on. If we were ever to iterate on this prototype in the future, we could add a lock-on mechanic that ducks or high-passes extraneous sounds in the scene, so that the guest only hears the particular chord they are locked onto. This would enable them to truly channel their focus onto the sound and identify where the dissonance lies. The lesson is that as an audio project, we have to be decisive about where we put sound and what sound gets priority. We would go on to apply this lesson in our third prototype by placing all of the focus on the music and less on the environment.
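The lock-on ducking idea described above was never built, but its core logic can be sketched in a few lines. This Python is purely our illustrative sketch, not project code; the source names and the -18 dB ducking amount are hypothetical.

```python
def apply_duck(gains, locked, duck_db=-18.0):
    """Return a new gain map in which every source except `locked`
    is attenuated by `duck_db` decibels (hypothetical values)."""
    factor = 10 ** (duck_db / 20)  # convert dB attenuation to a linear gain
    return {name: (g if name == locked else g * factor)
            for name, g in gains.items()}

# With the guest locked onto "chord_2", every other chord in the scene
# is ducked while the locked chord keeps its full gain:
scene = {"chord_1": 1.0, "chord_2": 1.0, "chord_3": 1.0}
ducked = apply_duck(scene, "chord_2")
```

A real implementation would smooth the gain change over time (and apply the high-pass filter mentioned above), but the priority decision itself is just this conditional.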
Visually, we learned that the aesthetics of a world greatly influence learning. In this prototype, guests were enveloped in a stimulating, entrancing world of cotton-candy pink and blue vibrance. However, we learned that the environment carried almost too much emphasis and emotional stimulation, which conflicted with the musical conveyance. Moving forward, we would combat this by honing our visual and artistic design to be more child-like, cardboard-like, and toy-box-like, creating a metaphor of being a kid again. We aimed to evoke feelings of permission and the vulnerability to learn. We found this aesthetic incredibly successful, especially in our third prototype, where the toy-box-like figurines and architecture, coupled with the sandbox, train-set-like diorama, unconsciously invited guests to lean in and inspect the music and environment on a deeper level.
Prototype 2
Our second prototype explored chord progressions, and we attempted to channel the strengths and failures of our first prototype into its design. While the brainstorming process for prototype 1 was narrow and contrived, in that we really only generated one idea, we responded by leveraging creative chaos in prototype 2's ideation as a means to externalize any and all ideas. This blue-sky approach felt like a step in the right direction: it allowed for more experimentation during ideation, while also yielding valuable source material that, if not directly applied to prototype 2, definitely influenced future prototype design and highlighted the potential of the space. During the brainstorming process, we also came up with a set of pillars or metrics that we could use to evaluate our ideas. The three categories consisted of:
Abstraction: How well does it use metaphor and abstraction?
VR: How well does it use the VR space?
Musical Conveyance: How well does it convey the musical concept?
We picked these three categories because we felt they were pivotal pillars for shaping experiences that fit both our project and experience goals. The highest-rated idea was called "Last Minute Accordion": in this concept, guests learned about the notion of chord progressions as atmosphere by playing popular progressions on an accordion-like device.
In this prototype, we took a horizontal-squeezing mechanic that leveraged the success of the first prototype (the interactivity of creating musical notes) and embedded it into a container or context that allowed the guest to directly interact with an environment in order to make chord progressions, while receiving environmental response and feedback. This prototype went through many iterations, and we struggled to frame the mechanic in a way that made sense. For a while, it felt like we had a really cool toy (the accordion) but no concrete experience that allowed the guest to learn about the notion of chord progressions as atmosphere. Ultimately, though, we were able to create an experience that sought to render harmony between environment, learning lesson, and music.
After many iterations and paper prototypes, we ended up making an experience where guests use their accordion to stretch and create chords. In the world, which feels like a surrealist Salvador Dalí painting, guests are surrounded by four pipes representing a four-bar progression. They can lock onto a pipe and blow a chord into it, which sends wind through the pipe and out a wind vent that propels a puzzle piece upward. When the progression is correctly played, the four puzzle pieces amalgamate into a symbol that bursts into the air, enveloping the world with environmental feedback. A musical cannon in the scene can be fired at any point by jumping, launching a projectile that traces the desired trajectory. The puzzle pieces floating in the air, whose positions are determined by which chord the guest implanted into each pipe, serve as dots demonstrating what was played in comparison to what is needed to solve the puzzle. The wind height of each vent is equal to the chord's scale degree: the root chord blows up 1 unit of wind, while the 7 chord blows up 7 units of wind. In the first stage of the experience, guests learn about the 1-4-5-1, a progression that exudes feelings of joy and happiness, and the four puzzle pieces on the wind vents cohere into a sun that rises into the sky when the guest plays the correct progression.
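The degree-to-height mapping described above is a simple identity function. As a sketch (ours, not the prototype's actual code; the function name is illustrative):

```python
def wind_height(scale_degree):
    """Wind units blown by a vent for a chord built on `scale_degree` (1-7).
    The mapping described in the text: degree n blows n units of wind."""
    if not 1 <= scale_degree <= 7:
        raise ValueError("scale degree must be between 1 and 7")
    return scale_degree

# The 1-4-5-1 progression from the first stage lifts its four puzzle
# pieces to heights of 1, 4, 5, and 1 units respectively:
heights = [wind_height(d) for d in (1, 4, 5, 1)]
```

The direct mapping means the visual contour of the floating pieces mirrors the harmonic contour of the progression, which is what lets the pieces serve as a trace of what was played.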
From playtesting, we learned the aesthetic of the experience was really successful. Guests loved the cardboard aesthetic of the world and the straw pipes, and that there was consistency between all of the elements in the scene. Additionally, they really enjoyed the feeling of playing the accordion.
However, we found that this experience was too difficult because it had so many interactions and game rules occurring at once, which hindered and diminished the musical conveyance. Although there was an abundance of sonic clues informing the guest of which progression they needed to play (a guitar-man docent in the scene plays the progression upon collision, and chord sounds emanate when the guest locks onto a pipe), we found that guests were still unable to complete the experience and truly discern which chord was needed for which pipe. Additionally, a conveyance design problem manifested: many of the chords shared two notes, which made it difficult for guests to discern the idiosyncratic differences between chords. On another note, because of the real-time hover feedback when stretching the accordion, guests were unable to identify any real relationship between particular chords, because they would hear all of the chords in sequence as they moved up and down the accordion from one chord to another. Lastly, because there was only one stage, meaning only one progression, it was difficult for guests to understand and learn about music as an atmosphere, because there was no comparative value. We learned from one playtester that having a comparison stage would help them better understand that the 1-4-5-1 evokes feelings of sun and brightness, because they could compare the sun and its song with the rain and the song of something melancholic such as the 6-3-4-5. This notion and desire for comparison is something we would go on to lean into heavily in our final prototype, and we found it incredibly successful for conveying a musical concept.
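The shared-note problem mentioned above is inherent to diatonic harmony: triads built on adjacent-third scale degrees overlap in two of their three tones. A small illustration (ours, not project code), using triads stacked from the C major scale:

```python
# The seven notes of the C major scale, in order.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def diatonic_triad(degree):
    """Set of note names in the stacked-thirds triad on scale degree 1-7."""
    i = degree - 1
    return {C_MAJOR[i % 7], C_MAJOR[(i + 2) % 7], C_MAJOR[(i + 4) % 7]}

# The 1 chord (C-E-G) and the 6 chord (A-C-E) share two of three notes,
# which is exactly the kind of overlap that made chords hard to tell apart:
shared = diatonic_triad(1) & diatonic_triad(6)
```

With two-thirds of their tones in common, chords like these sound closely related, so a musically naïve listener hearing them in quick succession has very little to latch onto.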
Overall, we hypothesized that the prototype would be successful because it had strong harmony between interaction, environment, and music. However, there were too many abstract parts and variables moving at once, which distracted from the focus of the experience: conveying how chord progressions can create atmosphere. The main lesson from this prototype was that "keeping the music first" is the priority, and that it is vital that the interactivity and abstraction enhance, rather than hinder, the learning concept.
Prototype 3
Our first two prototypes had been incredibly valuable because we were finding out what was and wasn't working, and we were eager to apply these learnings to our third prototype. When ideating our third prototype, we landed on an "aha!" moment as a team, which sparked intense excitement. We were playing around with different progressions when we realized how interesting it sounded when a chord in a progression was shifted from major to minor, or vice versa. Just that one shift dramatically changed the emotional tone of the music. We had landed on our prototype concept: our third prototype would explore how we could convey the concept of major and minor to musically naïve audiences. Our idea was to combine visuals with music to help the players better understand the concept of tonality.
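The shift that sparked the idea is a single-semitone change: moving a triad between major and minor raises or lowers its third while the root and fifth stay put. A minimal illustration (ours, not the prototype's code; sharps are used for all accidentals for simplicity):

```python
# Twelve pitch classes, indexed by semitone from C.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def triad(root, quality):
    """Note names of a triad; `root` is a semitone index (0 = C).
    A major triad stacks a major third (4 semitones) under a minor third;
    a minor triad flips that order, so only the middle note differs."""
    third = 4 if quality == "major" else 3
    return [NOTE_NAMES[(root + step) % 12] for step in (0, third, 7)]
```

Only the middle note changes between `triad(0, "major")` and `triad(0, "minor")`, yet that one note carries the entire emotional shift; that disproportion is what made the concept feel so conveyable.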
We began the prototype by devoting time to the paper prototyping phase. We had employed paper prototyping in our second prototype to understand how guests responded to differing amounts of control, and found it incredibly successful for gathering intel on how our target audience felt about a concept. So in this prototype, we began with multiple paper prototypes that helped us understand how people perceived major and minor. We did this by creating two variables: a collage of various images, and music composed in three variations (major, minor, and a mix of the two). We had our testers listen to the music and choose which images they thought fit each variation. By doing so, we were able to find what kind of common visual associations people made with major and minor. We iterated on this paper prototype three times until we could quite clearly understand what kind of visuals fit best with each tonality.
A massive success of this prototype was the focus on paper prototyping as a vehicle to first assess how our target audience felt about the musical topic and how they engaged with it. At the core paper level, was the prototype design achieving musical conveyance and the learning goal? We found that in this case, it was.
After getting a good idea of what kind of visuals we could work with, we began developing our prototype in VR. The basic premise was that players would be able to change the tonality of music in the experience, and doing so would cause the environment to change in a way that supplemented the player's learning. We utilized lighting, particle effects, and different 3D models to make major environments more welcoming and warm, while minor environments were darker and colder in tone. Some playtesting revealed that the visuals were at times too distracting, so we iterated by making the visual changes more gradual, to make it seem as though the changes in the music were causing the changes in visuals, rather than the reverse.
After testing with 22 musically naïve guests, some of whom had never heard of major and minor before, 20 were able to correctly understand the difference between these tonalities.
One success of this prototype was that we managed to use the VR space in a way that supported our guests in focusing on the audio, guiding them to listen for the subtle changes caused by major and minor. We found that the level of control that VR gives us over environment and engagement was crucial in directing the guest's attention to what truly matters when learning music theory concepts.
However, one failure of this prototype was that we never managed to nail down the final aspect of our experience flow. We originally planned for our experience to have three steps: exposure to the concept, experimentation with the concept, and application of the concept. We successfully created an onboarding step that exposes guests to our concept, and an experimental phase where guests can play with the concept and comprehend, both sonically and visually, how it works. We planned a final step where guests could then apply what they learned. We were passionate about the idea of creating a simple narrative where guests advance the story beats by manipulating a piece of music from major to minor. For instance, one scene establishes that the protagonist is from a village, and the guest is asked what this village is like. Picking major transcribes the song to major, and the narration says it is a sunny village where everyone works together; picking minor transcribes the song to minor, and the narration says it is a village where times are tough and food is thin, but the people have a persistent attitude and are able to work together to make ends meet. We were compelled by this prospect of using a story as a vehicle for guests to apply their learnings. We created several paper prototypes of this stage before finally moving into VR and developing an interactive storybook featuring three different beats or decisions. We received conflicting feedback on this stage: some guests really enjoyed the opportunity for the experience to be reduced to solely story and audio, where they must drive the story through musical decisions; others were disappointed at the lack of visuals and felt like they were being punished. If given more time, we would focus on improving this stage so that the experience has a clear entry point, midpoint, and exit point. Right now, the entry point and midpoint feel very strong, but the exit point could be strengthened.
Discoveries
Overall, the project has been successful because we have learned many valuable lessons about conveying musical topics in virtual reality. In summary, we learned that "keeping the music first" is crucial to a successful experience: the mechanic, interaction, or environment should never take precedence over the musical concept. Learning felt most successful when guests were given autonomy to make decisions, but limited to binary or ternary choices. The ability to audition, compare, and juxtapose sounds and music was incredibly valuable because guests could develop a mental model along the lines of "while X sounded like this, Y sounded like that." The visual aesthetics of the experience can envelop and immerse guests in a space, indirectly inviting them to be vulnerable and granting them permission to learn. We learned that, oftentimes, simpler is better. Our first two prototypes consisted of more complex interactions; the third regressed to a simple interface that yielded musical changes in only one step of interaction. We hypothesize that the fewer interactive steps needed to manipulate the music or engage with the learning concept, the better the chance of concept retention.
Future Applications
We believe there is endless possibility in this space, and that we have only discovered and explored an infinitesimal fraction of it. We urge more teams, products, and projects to explore this space in the future, so that as more findings are revealed, different paths of development can manifest. On one hand, educators could look to this space as a segment for creating educational content that is both enriching and fun. On the other hand, game designers and those in the entertainment sphere could lean into this domain to create exciting musical-gameplay experiences. We believe that our findings can be beneficial across disciplines, and that these learnings could be applied to other transformational experiences in the VR space, exploring topics such as cooking, art, or history.