Exploring ProtoPie: Designing a voice assistant prototype
ProtoPie was launched in 2017, and to this day I think it is one of the most underrated prototyping tools in existence. The range of features it provides is unmatched: you can literally design an app with motion, gesture, and voice input and output, and test it on your own phone without writing a single line of code. I personally found it to be a very useful design resource.
I recently started exploring ProtoPie, and the features that caught my attention were speech recognition and text-to-speech conversion. I decided to learn them and build a prototype for a voice assistant app. I think these features can be extremely useful for companies that provide AI-based voice chat services and products: one can save a lot of time by designing a working prototype with speech recognition before handing off to the developers, and the prototype itself can be used for testing instead of coding everything first.
I made the design in Figma and then imported it into ProtoPie. I think they're still improving this import feature, because at times there are errors when importing files, but nothing major to worry about.
Design
My main objective for this project was to start learning ProtoPie, so I didn't spend too much time designing the static screen of the voice assistant in Figma. I included the basic components: a mic button that puts the assistant into listening mode on a tap, and two utility options commonly found in voice assistant services.
I'm a big fan of different shades of blue and tend to use them in most of my designs, but I like to grab opportunities to experiment. Since this was a personal project to learn something new, I tried out different shades of orange and yellow, and I can say it looked decent enough.
For the overall design of the voice assistant, I took inspiration from a design by Johny Vino. As I said, this project was focused more on learning ProtoPie than on designing, so I didn't want to spend much time on the design part.
Prototype
I followed a tutorial from ProtoPie's official YouTube channel. You can find the tutorial video here.
I won't go into the finer details of how ProtoPie works; instead, I will briefly explain my process of making the prototype.
After importing my Figma design, I used a Start trigger, which is responsible for everything that happens when the prototype starts running. Within the Start trigger, I added a Text to Speech response, which converts whatever text you want into speech. For instance, I want to hear "Hey Pranav, is there anything I can help with?" as soon as the prototype starts running. This is essential to give the user a personal feeling upon opening the app.
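ProtoPie configures all of this visually, with no code at all, but if you're curious how the trigger/response model maps to programming concepts, here is a rough Python sketch. Every name in it (`Trigger`, `text_to_speech`) is my own illustration, not anything from ProtoPie's actual internals:

```python
# Hypothetical sketch of ProtoPie's trigger/response model in plain Python.
# ProtoPie itself needs no code; these names are illustrative only.

def text_to_speech(text):
    """Stand-in for ProtoPie's Text to Speech response."""
    return f"[speaking] {text}"

class Trigger:
    """A trigger collects responses and runs them all when it fires."""
    def __init__(self, name):
        self.name = name
        self.responses = []

    def fire(self):
        # Run every attached response, in the order it was added.
        return [response() for response in self.responses]

# The Start trigger fires as soon as the prototype launches.
start = Trigger("start")
start.responses.append(
    lambda: text_to_speech("Hey Pranav, is there anything I can help with?")
)
```

The key idea is the same in both worlds: a trigger is just an event, and responses are the actions chained to it.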
The user needs to tap the mic after the voice-over ends so that the assistant can start listening. This may not be intuitive for everyone, so I added a default text at around 23% opacity that stays visible as long as the voice-over continues.
As soon as I tap the mic button, the assistant should start listening to me, but there should be some visual feedback for that. How would I know that the app is actually listening after I've tapped the button? To solve this, I imported a Lottie animation of a sound wave and associated it with a Tap trigger on the mic button. A Playback response then starts the Lottie animation playing as soon as the button is tapped.
Once the app starts listening, I want my words to be displayed on the screen for as long as I continue to speak. For this, I use a "recognize speech" trigger and add a Text response to it.
ProtoPie allows you to set a specific response for any phrase. For example, if I say "Hey, there!", I would like to see and hear something along the lines of "Hello, nice to see you again". So, in the "recognize speech" trigger I added "hey", "hello", and "hi" to the list of phrases that act as stimuli for a response, and then added the response "Well hello there, nice to see you again" within the trigger. I set the default text's opacity to 0% and had it return to 100% after a short delay once the app finishes listening. Triggers and responses can similarly be applied to other elements, such as images, if you want an image to appear as an answer to your question.
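The phrase-matching behavior can be sketched in plain Python to show the logic the trigger implements. Again, this is a hypothetical stand-in, not ProtoPie's API; the function and variable names are mine:

```python
# Rough sketch of phrase-to-response matching, mimicking what the
# "recognize speech" trigger does. ProtoPie sets this up visually;
# all names here are illustrative only.

def recognize_speech(utterance, phrase_map, default="Sorry, I didn't catch that."):
    """Return the response whose stimulus phrases match the spoken words."""
    # Normalize: lowercase, strip simple punctuation, split into words.
    words = utterance.lower().replace(",", " ").replace("!", " ").split()
    for phrases, response in phrase_map:
        if any(phrase in words for phrase in phrases):
            return response
    return default

# Each entry: (stimulus phrases, response to display and speak)
phrase_map = [
    (("hey", "hello", "hi"), "Well hello there, nice to see you again"),
]

print(recognize_speech("Hey, there!", phrase_map))
# prints: Well hello there, nice to see you again
```

Matching whole words (rather than substrings) avoids false hits like "hi" inside "this", which mirrors how you'd list discrete stimulus phrases in the trigger.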
Similarly, any other command and response can be added for the app to recognize, and the prototype is ready to run. I designed three test cases for my prototype.
Test Cases
1. "Hey", "Hi", "Hello"
2. "How many people have recovered from coronavirus in India?" (as of 20th October 2020)
3. "Who is the owner of Aviation American Gin?"
Learnings and possible improvements
- Pretty obvious, but I learnt a lot about ProtoPie. This tool can be used in SO MANY ways to create amazing prototypes. The most important and basic thing I learnt was how to set up triggers and responses the right way to create exactly what you want; triggers and responses are the foundation of ProtoPie. Another important aspect is formulas and variables, which I'm going to learn soon.
- I also used Lottie animations in a project for the first time and got to learn how they work. Before this, I had only a vague idea of what they are, and using them in an actual project taught me a lot. My favorite part was the Lottie editor, where you can change everything about the animation's design, from colors to sizes. This project has definitely made me consider Lottie animations for my next projects.
- Since I designed this project during a hectic ongoing online semester, I didn't get enough time to refine the design. There is a LOT of room for improvement, especially in the micro-interactions, which could be made much smoother and more intuitive.
Thanks for reading this case study till the end 😀.
Do let me know your views on this. You can reach out to me via email, LinkedIn, or Twitter. You can also check out my portfolio for more interesting case studies like this.