Post Mortem

Amuseum is a client project whose team consisted of one artist, Bella Lin; one technical artist, Adelyn Jin; two programmers, Wizard Hsu and Hao Lu; and one designer, Jack McClain. The client for the project was Anne Fullenkamp at the MuseumLab of the Children’s Museum of Pittsburgh.

The initial ask at the start of the project was for us to create an installation for the MuseumLab inspired by the work of Rube Goldberg. The Children’s Museum had already created a full exhibit focused on Rube Goldberg machines in the part of the museum intended for the 8-and-under audience. Our installation was meant to be seen and played with by the 9-14 audience that typically visits the MuseumLab space. After their experience building Rube Goldberg machines intended for repeated use, the main addition the museum wanted in a next iteration was augmented reality interactions that could help bring certain elements to life. Many of Goldberg’s original cartoons include chain reactions that rely on animal or human helpers, are sure to make a mess, or would require intensive cleanup between runs. AR has the potential to connect back to those original ideas while still crossing over with the juicy tangibility of a physical Rube Goldberg machine.

In order to build the AR scene, we worked with the ARENA team at the Conix Research Center. This came at the suggestion of several people at the ETC, including some who regularly work on projects with the Children’s Museum. ARENA boasts some impressive networking capabilities, but at the time of our project it was still very much in development. Some of the challenges involved with this will be discussed later, but at a high level, ARENA was a great lightweight AR platform for our needs.

Our final deliverable includes the full set of props that we acquired over the course of the semester, the virtual scene within ARENA, and a thorough documentation packet to help with setup and maintenance of the machine.

Rube Goldberg’s cartoons evoke a sense of the absurd and unexpected in accomplishing their goals. This was an early constraint for us, but it ended up being more of a liberator than anything else. We did some preliminary research at the start of the project before jumping into imagining a large number of stories we could potentially tell with our machine, which gave us a context for the types of interactions to include. However, as the semester progressed, we occasionally felt as if the design we had selected had backed us into a corner in some ways. The story we were telling had a number of hard requirements that we hadn’t considered, or didn’t know to expect, just a few weeks into the semester. That sense of absurdity was something we felt we could lean on in those moments of feeling stuck. As a group, we were very adaptable and weren’t afraid to sit down and brainstorm with no boundaries before coming back to reality. We learned how to play off the aesthetic of the cartoons to navigate our way through the interactions.

One of the early facets of the project that we needed to address was the degree to which the machine would be physical vs. virtual. When working in AR, there’s always the potential that the virtual elements take priority over the reality part of the experience. Throughout the project, even up until the last several weeks, we focused on how we could increase the number of ‘handshake’ moments between the physical space and the virtual interactions. Many of our props act mostly as stationary elements, which risks them being just background for a more interesting set of AR animations. Instead, we pushed to bring our physical props to life with spatial audio, optical sensors, and servos. All of these elements work in tandem with an animated object to create richer, deeper versions of both.
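As a concrete illustration of one such handshake, the sketch below shows how a prop-mounted optical sensor could publish a trigger message over MQTT so the virtual scene can start its matching animation. This is a minimal sketch only, assuming the prop electronics can reach the same broker the scene listens to; the broker host, topic name, and GPIO pin are placeholders rather than the values used in the installation.

```python
# Minimal sketch: an optical sensor on a prop publishes a trigger message
# over MQTT so the virtual scene can start its matching animation.
# Host, topic, and GPIO pin are placeholder values for illustration.
import json
import time

import paho.mqtt.client as mqtt
import RPi.GPIO as GPIO

SENSOR_PIN = 17                      # optical sensor input (placeholder pin)
BROKER_HOST = "arena.example.org"    # MQTT broker the scene listens to (placeholder)
TOPIC = "amuseum/props/birdcage"     # per-prop topic (placeholder)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)

client = mqtt.Client()
client.connect(BROKER_HOST, 1883, 60)
client.loop_start()

try:
    while True:
        if GPIO.input(SENSOR_PIN):   # beam broken: the ball has arrived
            client.publish(TOPIC, json.dumps({"event": "triggered",
                                              "ts": time.time()}))
            time.sleep(1.0)          # debounce so one pass sends one event
        time.sleep(0.02)
finally:
    GPIO.cleanup()
    client.loop_stop()
```

On the scene side, a small script subscribed to the same topic can then kick off the corresponding animation when the event arrives, which is what turns a stationary prop into one half of a handshake moment.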

While we began prototyping early in the project with several of our final virtual elements, something that would have helped us with our final deliverable was getting our physical props earlier. One of our fears revolved around ordering the wrong piece, or something that ultimately wouldn’t fit into the interaction. This hesitancy held back our development in that we were building the virtual scene around a prototype physical setup made up of stacked chairs, tables, cushions, and any other odds and ends we could find to simulate what we were drawing up. This locked us into a pattern in which we didn’t get to fully experience several of the interactions until very late in the semester, so the reworks that followed those discoveries did not get as much time and thought as others. Also, for pieces like the servos that weren’t added until the last few weeks of the project, we didn’t give ourselves nearly enough time to prototype with them fully.

Another issue that we had to face was the classic AR challenge of occlusion. Early in the process we did not take this issue seriously for some of our central interactions, hoping that suspended disbelief would smooth over the jarring effect of an object floating on top of something it should sit behind. The big example of this was the AR bird in a physical bird cage. This interaction is a huge part of our machine and was one of the earliest we designed, meaning it got grandfathered in and overlooked as the project developed. Subsequent AR elements were designed to sit out in the open, with the expectation that they would sit on top of a physical element, and ultimately the bird interaction was updated to address the issue as well.
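For reference, one common way to patch a case like the bird cage is to register an invisible proxy mesh that matches the physical object and writes only depth, so virtual objects are clipped when they pass behind it. The sketch below publishes such a proxy as a scene object over MQTT; the broker, topic, object schema, and depth-only material flag are simplified assumptions for illustration, not ARENA’s exact wire format.

```python
# Hypothetical sketch: register an invisible "occluder" cylinder matching the
# physical bird cage so the AR bird is clipped when it passes behind the bars.
# Broker, topic, schema, and the depth-only material flag are all assumptions.
import json

import paho.mqtt.client as mqtt

BROKER_HOST = "arena.example.org"    # placeholder broker
SCENE_TOPIC = "realm/s/amuseum"      # placeholder scene topic

occluder_msg = {
    "object_id": "birdcage_occluder",
    "action": "create",
    "type": "object",
    "data": {
        "object_type": "cylinder",
        "position": {"x": 0.0, "y": 1.2, "z": -0.5},   # measured cage location
        "scale": {"x": 0.3, "y": 0.4, "z": 0.3},       # measured cage size
        # Write depth but no color, so virtual objects behind the proxy are
        # hidden without drawing anything visible over the real cage.
        "material": {"colorWrite": False},
    },
}

client = mqtt.Client()
client.connect(BROKER_HOST, 1883, 60)
client.loop_start()
info = client.publish(SCENE_TOPIC, json.dumps(occluder_msg))
info.wait_for_publish()
client.loop_stop()
client.disconnect()
```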

The final major challenge we faced involved using ARENA itself. ARENA, while certainly robust in some of its major feature sets, was still very much in development over the course of our project. On a weekly basis the platform would be down, or some part of it would not work as intended, blocking build progress on our end. The bigger concern, however, is how this will affect the client once we pass the machine off. It is unclear how stable the installation will be in the long term once we are gone and ARENA continues to update. Because of the always-online nature of the platform, which is what enables its strong networking and multi-user capabilities, it will keep updating regularly, and there is no offline option that we could use to work around this.

Following the conclusion of the semester, we are handing the machine’s parts over to the museum. While the museum is closed due to Covid-19, at least through the summer, they will be opening a pop-up experience at the Southside Works in July. Our project will be installed there initially, and when that experience closes, it will move back to the museum to be installed in the Grable Gallery as they work towards reopening.

We’re all incredibly proud of the final installation that we have built and excited to see it on location this summer. While working on a location-based project was a major challenge when done remotely, we feel that we were able to overcome these obstacles and create a rich, funny experience. We’re certain to carry forward everything we learned this semester into future projects and experiences. On a final note, we would like to extend our gratitude to the ETC for the opportunity to work on this project, and to our client at the Children’s Museum, who was an incredible resource throughout the entire process.