Week 6 – Prototype, Playtest, and Iterate

Paper Prototypes

While we were editing videos and integrating the AI chatbot and 360° videos in Unity, we ran playtests with a paper prototype in parallel. From a game designer's perspective, I would highly recommend paper prototyping at an early stage if possible.

Basically, a paper prototype is a good tool for testing the core game flow and concept, and it is a very low-cost way to find blind spots in gameplay design. Since our main experience is the conversation itself, our paper prototype was ready as soon as we finished the conversation tree. Therefore, this week we set up playtest sessions via Zoom and arranged a few interviews with medical students, with our clients kindly helping us organize a focus group of six people from diverse backgrounds.

Our Playtester: Varun

Playtest: Paper Prototype

These interviews consist of three parts: introduction, playtest, and feedback.

First of all, we briefly introduce ourselves and the vision of our project. Once the interviewee has a rough idea of our purpose, they are asked to role-play the process of seeing a pregnant patient for her standard 30-week check-in. As in the real setting, there are no scripts or hints, so they have to figure out what to say and carry the conversation to its end with our actor playing the patient. Finally, we discuss our observations and ask for their feedback.

Surprisingly, our playtesters, all second-year medical students, generally responded as we expected. They are trained to follow certain rules for asking questions and following up, such as repeating what the patient said to confirm it. Given their professional training, they mostly stuck to our designed direction and rarely asked questions outside its boundaries. In other words, in a professional training context, trainees have little intention of probing the edges or breaking the designed flow, because they are expected to demonstrate professionalism, which means following the standard. This finding gives us great confidence in our current AI chatbot solution: our target users are well trained and inclined to follow directions. To be honest, handling open-ended questions from players and responding in text or speech is difficult and tricky; even Siri and Alexa don't perform well at it yet. Handling open-ended questions by playing pre-recorded video is another story entirely!

Playtest: Voiceflow Prototype

We are glad to have this understanding of our target users. However, we still need to probe the limitations of our speech-recognition model, including noise, accents, and phrasing. This time, we would like to bring in naive guests, a group of people with varied backgrounds and voice characteristics; our ETC peers should be a good source.

For this playtest, we are going to use a Voiceflow prototype. As a component of our AI chatbot, Voiceflow processes player input and matches it to our conversation tree, and it provides a native interface for testing a pre-trained chatbot model. Therefore, we ask some students to talk to the chatbot and read its responses through this link: [Insert the link of Voiceflow Prototype]

Voiceflow playtest interface
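Conceptually, the matching step works something like the toy sketch below. This is not Voiceflow's actual NLU (which is far more sophisticated); the node structure, intent names, sample phrases, and word-overlap scoring are all simplified assumptions purely for illustration.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score(utterance: str, samples: list[str]) -> float:
    """Crude word-overlap score between an utterance and sample phrases."""
    u = tokens(utterance)
    best = 0.0
    for sample in samples:
        sw = tokens(sample)
        best = max(best, len(u & sw) / max(len(sw), 1))
    return best

def match_intent(utterance: str, node: dict) -> str:
    """Return the best-matching intent name at this conversation-tree node."""
    scored = {name: score(utterance, samples)
              for name, samples in node["intents"].items()}
    return max(scored, key=scored.get)

# A toy node from the greeting stage of our conversation tree.
greeting_node = {
    "intents": {
        "ask_name": ["what is your name", "may I have your name"],
        "ask_feeling": ["how are you feeling today", "how do you feel"],
        "plain_greeting": ["hi", "hello", "good morning"],
    }
}

print(match_intent("How are you feeling today, Mrs. Smith?", greeting_node))
```

The real system also returns a confidence score with the match, which becomes important below when the player says something outside our prepared scripts.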

Naive guests always bring surprises. For instance, people have a thousand ways of greeting: some ask for the patient's name first, some introduce themselves first, some ask how the patient is feeling today, and some just say "Hi." Also, perhaps because of accent or pronunciation, some people's words are easily misrecognized. These findings made us reflect on our flow design and think about how to improve indirect control, guiding people to say both what we want and what we can detect reliably.

Iterate Conversation Tree: Integration and Guiding Hint

Leveraging the observations from the playtests and the feedback from the interviews, we decided to improve our conversation tree to better control the training experience. First, we noticed there are moments in the conversation where we leave players too much freedom, so they take the lead and ask a variety of questions to keep the conversation going. Fortunately, when we tried to organize those possible leading questions, we found that people still tend to follow the context, so their questions usually fall into patterns. For example, when the virtual patient tells the doctor (player) that she has concerns about her blood pressure, people ask follow-up questions such as "What's your normal blood pressure?", "How frequently do you measure your blood pressure?", "Do you have any other symptoms?", or "Have you had high blood pressure before?". Therefore, we chose to consolidate the answers to these related questions into one reply, so players get the full answer to all of them at once, which removes the need to ask similar questions and keeps the conversation moving in the direction we want.
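The consolidation idea can be sketched as a simple mapping: every intent in a related cluster resolves to the same consolidated reply (and thus the same pre-recorded video). The intent names and the patient's wording below are invented for illustration, not taken from our actual script.

```python
# Several related follow-up intents about blood pressure all resolve to one
# consolidated reply, so a single pre-recorded video clip answers the whole
# cluster at once. (Wording and names are hypothetical examples.)

CONSOLIDATED_BP_REPLY = (
    "My blood pressure is usually around 120 over 80. I check it twice a "
    "week at home, I've never had high blood pressure before, and I don't "
    "have any other symptoms."
)

# Each related question maps to the same reply.
blood_pressure_node = {
    "normal_bp": CONSOLIDATED_BP_REPLY,
    "bp_frequency": CONSOLIDATED_BP_REPLY,
    "other_symptoms": CONSOLIDATED_BP_REPLY,
    "bp_history": CONSOLIDATED_BP_REPLY,
}

def reply_for(intent: str) -> str:
    """Look up the (possibly shared) reply for a matched intent."""
    return blood_pressure_node[intent]
```

Because every branch in the cluster lands on the same answer, the player hears the complete information once and has no reason to keep probing, which nudges the conversation back onto the designed path.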

Second, given the limits of the chatbot's speech recognition, we want to avoid the dead-end loop of "Sorry, I can't understand what you are saying," so we set up guiding hints, especially for the no-match and low-confidence-match cases on open-ended questions. For instance, in the greeting stage, players might say something outside our prepared scripts and trigger a low-confidence match. If the system is not confident in what it detected, it asks a clarifying question such as "I thought you're greeting me, right?" It may sound silly, but it's better than a pause or a crash.
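The fallback behavior described above can be sketched as a small decision rule. The threshold value, intent names, and prompt wording are assumptions for illustration; in practice this logic lives inside our Voiceflow conversation design rather than in standalone code.

```python
# Minimal sketch of the guiding-hint fallback: answer when confident,
# ask a clarifying question on a low-confidence match, and fall back to
# a generic retry prompt only when nothing matched at all.

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff, for illustration only

CLARIFYING_PROMPTS = {
    "greeting": "I thought you're greeting me, right?",
    "bp_question": "Are you asking about my blood pressure?",
}

GENERIC_FALLBACK = "Sorry, could you say that again?"

def respond(best_intent: str, confidence: float, replies: dict) -> str:
    """Pick a normal reply, a clarifying question, or a generic fallback."""
    if confidence >= CONFIDENCE_THRESHOLD and best_intent in replies:
        return replies[best_intent]             # confident: answer normally
    if best_intent in CLARIFYING_PROMPTS:
        return CLARIFYING_PROMPTS[best_intent]  # low confidence: confirm first
    return GENERIC_FALLBACK                     # no usable match

replies = {"greeting": "Hello, doctor. Nice to meet you."}
print(respond("greeting", 0.9, replies))  # confident match
print(respond("greeting", 0.4, replies))  # low confidence: clarifying question
```

The clarifying question keeps the player in the loop and steers them back toward a phrasing the system can recognize, instead of stalling the scene.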