Week 6-7: Final Project


For my final project I used the Google Teachable Machine audio classifier to build a speech hangman game. The goal of the project was to use the audio classifier to reliably detect each letter of the alphabet. The hangman game is one use for this trained model; however, there are many possible uses for an audio model trained on the alphabet.


First, I built a simple webpage with the hangman game (Figure 1). In this iteration I connected keyboard presses to the game, so guesses are made by pressing keys on the computer keyboard.
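The game logic itself is independent of the input method, which is what makes it easy to swap the keyboard for an audio model later. A minimal sketch of that logic in plain JavaScript (the function and field names here are my own, not the actual project code):

```javascript
// Minimal hangman state, sketched from the description above.
// The real project wires this to key presses (and later to the audio model).
function newGame(word, maxWrong = 6) {
  return { word: word.toLowerCase(), guessed: new Set(), wrong: 0, maxWrong };
}

function guessLetter(state, letter) {
  letter = letter.toLowerCase();
  if (state.guessed.has(letter)) return state;      // already guessed: no change
  state.guessed.add(letter);
  if (!state.word.includes(letter)) state.wrong++;  // wrong guess: draw more of the man
  return state;
}

function display(state) {
  // Reveal guessed letters, hide the rest with underscores.
  return [...state.word].map(c => (state.guessed.has(c) ? c : '_')).join(' ');
}

function isWon(state)  { return [...state.word].every(c => state.guessed.has(c)); }
function isLost(state) { return state.wrong >= state.maxWrong; }
```

Because `guessLetter` only cares about receiving a letter, the same state machine works whether that letter comes from a key press or a classifier.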

I then trained a small model on just the letters “h” and “a” and was able to integrate this model into the hangman example (Figure 2). When a wrong letter is guessed, more of the man is drawn, and the letter turns red in the on-screen alphabet to indicate that it has already been guessed.
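Wiring the audio model into the game amounts to treating the classifier's top prediction as a key press, gated by a confidence threshold so background noise does not trigger guesses. A hedged sketch of that glue code (the threshold value, label format, and noise-class name are my assumptions, not the project's actual values):

```javascript
// Convert a classifier result into a hangman guess.
// Teachable Machine audio models report {label, confidence} pairs; here I
// assume labels are single letters plus a "Background Noise" class.
function toGuess(results, threshold = 0.85) {
  if (!results || results.length === 0) return null;
  // Pick the highest-confidence prediction.
  const top = results.reduce((a, b) => (b.confidence > a.confidence ? b : a));
  if (top.confidence < threshold) return null;   // too uncertain to act on
  if (!/^[a-z]$/i.test(top.label)) return null;  // ignore the noise class
  return top.label.toLowerCase();
}
```

Returning `null` for low-confidence or non-letter results means the game simply waits for a clearer utterance instead of making a spurious guess.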

I then trained a full model on the entire alphabet.  Figure 3 shows the final example running with a fully trained alphabet model. 


While this project works relatively well for this small hangman game, the model is not perfectly trained. This project was a good example of how complex speech recognition algorithms need to be in order to be accurate. I noticed that letters like “x” and “s” were harder to train since they sound quite similar. I ended up retraining the alphabet model multiple times to get the best result for the hangman game. Hopefully, others can use this model for other implementations in the future.

Link to project: https://friends-dot-teachable-machine-pro.appspot.com/example/audioExample

Week 5: Final Project Progress

This week for my final project I experimented with building the outfit picker I proposed last week. I began to work out the UI and bring in weather data from an API. Figure 1 shows the current state of this project. The code can be found here and the example can be viewed here.

I then began to play around with another idea I had called Speech Hangman. The idea is to create a simple hangman game in which the user guesses a letter by saying it out loud. I will train the project on the entire alphabet so that the game can be played entirely by voice. Figure 2 shows the current state of this project. The code can be found here and the example can be viewed here.

In the end I decided to continue with Speech Hangman as my final project since it excited me more. Next I will improve the UI and integrate the trained audio model.

Week 4: Final Project Proposal

For my final project I would like to make a weather app that helps you pick what to wear. I will use a voice classifier to record the user saying what they are wearing. The classifier will be able to identify key words such as tee shirt, jeans, and sweater. Then the app will look up the weather at the user's location. The output will give you the weather and a verbal (or written) recommendation on how appropriate your outfit is for that weather. Another addition to the application could be to bring in the Quick, Draw! dataset from Google: in addition to a weather report, the program could draw doodles from the dataset representing the weather outside.
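The recommendation step could be as simple as rating the detected key words for warmth and comparing that against what the temperature calls for. A sketch under my own assumptions (the warmth ratings and temperature thresholds below are illustrative guesses, not part of the proposal):

```javascript
// Illustrative warmth ratings for the key words the classifier would detect.
const WARMTH = { 'tee shirt': 1, 'jeans': 2, 'sweater': 3 };

// Rough temperature-to-warmth mapping (thresholds are my own guesses).
function neededWarmth(tempF) {
  if (tempF >= 70) return 1;
  if (tempF >= 50) return 2;
  return 3;
}

// Compare the warmest detected item against what the weather calls for.
function recommend(tempF, items) {
  const have = Math.max(0, ...items.map(i => WARMTH[i] ?? 0));
  const need = neededWarmth(tempF);
  return have >= need
    ? 'Your outfit suits the weather.'
    : 'It is colder than your outfit allows; add a warmer layer.';
}
```

A real version would feed `items` from the voice classifier's labels and `tempF` from the weather API response.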

Week 3: Mood Agent

For this week's assignment I used an audio input to make a very simple game. The program contains a single ball (the “agent”) that falls according to gravity. The object of the game is to keep the agent from falling to the bottom of the canvas. When the program detects that the word “happy” was said, an upward force is applied to the agent. To prevent the ball from falling, the user must say “happy” to push the ball up and keep it afloat. Figure 1 shows an example of the game being played.
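The agent's behavior reduces to a standard per-frame velocity/position update with gravity, plus an upward impulse when the word is detected. A minimal sketch of that update step (the constants here are illustrative, not the sketch's actual values):

```javascript
// Per-frame update for the falling agent. Gravity pulls it down each frame;
// detecting "happy" applies an upward impulse. Note y grows downward on a
// canvas, so "up" is a negative velocity change.
function step(agent, heardHappy, gravity = 0.5, impulse = -10) {
  if (heardHappy) agent.vy += impulse; // push the agent up
  agent.vy += gravity;                 // gravity accelerates it down
  agent.y += agent.vy;                 // integrate position
  return agent;
}
```

In a p5.js sketch this would run inside `draw()`, with `heardHappy` set by the speech detection callback and cleared each frame.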

The motivation for the game was to improve the mood of users. It is often said that if you smile you start to feel happier, or that if you say positive things your mood improves. Thus, by saying “happy” to keep the agent afloat, the game will hopefully improve the user's mood. Of course, this is just an initial step toward a larger project on mood discovery. In the future I would like to add more behaviors to the sketch based on the different words that are detected: if negative things are said the agent will be negatively affected, and if positive things are said the agent will be positively affected.

Week 2: Writing with Sign Language

This week I experimented more with image classifiers and built regression and classification examples. In the end, I used KNN classification to build a program that detects sign language and types the sign you show onto the screen. Figure 1 shows the process of training the model. To train the model, I showed the camera a letter in sign language and then clicked the corresponding keyboard key.
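Under the hood, KNN classification just stores labeled feature vectors and, at prediction time, votes among the nearest stored examples. ml5's KNNClassifier works on feature vectors extracted from the webcam image by a pre-trained network, but the core idea can be sketched with plain arrays (this is my own toy implementation for illustration, not ml5's code):

```javascript
// Tiny k-nearest-neighbors classifier, illustrating the idea behind
// ml5's KNNClassifier. Each training example is a feature vector plus a label.
class KNN {
  constructor(k = 3) { this.k = k; this.examples = []; }

  addExample(features, label) {
    this.examples.push({ features, label });
  }

  classify(features) {
    // Euclidean distance between two equal-length feature vectors.
    const dist = (a, b) => Math.hypot(...a.map((v, i) => v - b[i]));
    const nearest = this.examples
      .map(e => ({ label: e.label, d: dist(e.features, features) }))
      .sort((a, b) => a.d - b.d)
      .slice(0, this.k);
    // Majority vote among the k nearest examples.
    const votes = {};
    for (const n of nearest) votes[n.label] = (votes[n.label] || 0) + 1;
    return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
  }
}
```

In the real project, each key press during training is an `addExample` call, and each camera frame afterward is a `classify` call that emits the predicted letter.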

Once the model was trained, I wrote the word “hello” by signing to the camera (Figure 2).

Week 1: Bedtime Routine Image Classifier

This week I experimented with the ml5.js MobileNet image classifier example. Using this pre-trained model, I made a bedtime checklist website. When you scan items on the checklist, the items are marked green (Figure 1).

I see a possibility to make this into a real app or website. Before going to bed, the user would scan all the items on their list to prove that they completed certain bedtime routine tasks. For this application to work better, it might be useful to combine a pre-trained model with an opportunity for the user to add additional training for their own use case. For instance, the pre-trained model may have many images of toothbrushes that help it detect a toothbrush in general. The user could then add images of their own toothbrush in the lighting and setting of their own bathroom. This way the model would be even more accurate for individual users.
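The checklist side of the app is simple bookkeeping: each classifier result is checked against the list, gated by a confidence threshold, and matching items are marked done. A sketch of that logic (the item names and threshold are my own, and the "turn green" step is left to the UI):

```javascript
// Build a checklist of bedtime items, all initially unchecked.
function makeChecklist(items) {
  return items.map(name => ({ name, done: false }));
}

// Mark an item done when the classifier reports its label with
// sufficient confidence; the UI would turn that item green.
function onClassification(checklist, label, confidence, threshold = 0.7) {
  if (confidence < threshold) return checklist; // ignore uncertain results
  for (const item of checklist) {
    if (item.name === label) item.done = true;
  }
  return checklist;
}

function allDone(checklist) {
  return checklist.every(item => item.done);
}
```

With a classifier wired in, `onClassification` would be called from the model's results callback once per scanned frame.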


Click here to see the code

Click here to test the example