For my final project I would like to make a weather app that helps you pick what to wear. I will use a voice classifier to record the user saying what they are wearing. The classifier will identify key words such as T-shirt, jeans, and sweater. The app will then look up the weather at the user's location. The output will give you the weather and a verbal (or written) recommendation on how appropriate your outfit is for that weather. Another addition could be to bring in the Quick, Draw! dataset from Google: in addition to a weather report, the program could draw doodles from the dataset representing the weather outside.
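The core recommendation step could be a simple lookup from detected keywords plus temperature to advice. Here is a minimal sketch of that idea; the item lists, thresholds, and function name are illustrative assumptions, not a final design:

```javascript
// Hypothetical sketch: map clothing keywords detected by the voice
// classifier, plus the current temperature, to an outfit recommendation.
// The keyword lists and temperature thresholds below are assumed values.
const WARM_ITEMS = ["sweater", "jacket", "coat"];
const LIGHT_ITEMS = ["t-shirt", "shorts", "tank top"];

function recommendOutfit(detectedItems, tempCelsius) {
  const hasWarm = detectedItems.some((item) => WARM_ITEMS.includes(item));
  const hasLight = detectedItems.some((item) => LIGHT_ITEMS.includes(item));
  if (tempCelsius < 10 && !hasWarm) {
    return "It's cold out; consider adding a sweater or jacket.";
  }
  if (tempCelsius > 25 && !hasLight) {
    return "It's warm out; lighter clothing might be more comfortable.";
  }
  return "Your outfit looks appropriate for the weather.";
}
```

For example, `recommendOutfit(["t-shirt", "jeans"], 5)` would suggest adding a warm layer, while the same outfit at 28 °C would pass.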
For this week's assignment I used an audio input to make a very simple game. The program contains a single ball (the "agent") that falls under gravity. The object of the game is to keep the agent from falling to the bottom of the canvas. When the program detects that the word "happy" was said, an upward force is applied to the agent. To keep the ball from falling, the user must repeatedly say "happy" to push it up and keep it afloat. Figure 1 shows an example of the game being played.
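The per-frame physics behind this can be sketched in a few lines. The constants and names here are illustrative, not the exact values from my sketch:

```javascript
// Minimal sketch of the agent's physics update. Each frame, gravity
// accelerates the ball downward; when the audio classifier reports
// "happy", an upward impulse is added to its velocity.
const GRAVITY = 0.5; // downward acceleration per frame (assumed value)
const BOOST = -8;    // upward impulse applied when "happy" is detected

function updateAgent(agent, heardHappy) {
  if (heardHappy) agent.vy += BOOST; // push the ball up
  agent.vy += GRAVITY;               // gravity pulls it back down
  agent.y += agent.vy;               // integrate position (y grows downward)
  return agent;
}
```

Calling this once per draw loop, with `heardHappy` set by the sound classifier's callback, produces the fall-and-boost behavior described above.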
The motivation for the game was to improve the user's mood. It is often said that if you smile you start to feel happier, or that saying positive things improves your mood. Thus, by saying "happy" to keep the agent afloat, the game will hopefully improve the user's mood. Of course, this is just an initial step toward a larger project on mood discovery. In the future I would like to add more behaviors to the sketch based on the different words that are detected: if negative things are said, the agent will be negatively affected; if positive things are said, the agent will be positively affected.
This week I experimented more with image classifiers and built regression and classification examples. In the end, I used KNN classification to build a program that detects sign language and types the sign you show on the screen. Figure 1 shows the process of training the model: I showed the camera a letter in sign language and then clicked the corresponding keyboard key.
Once the program was trained, I wrote the word "hello" by signing to the camera (Figure 2).
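Under the hood, KNN classification just stores each training example (a feature vector paired with the label I typed) and, at prediction time, lets the nearest stored examples vote. A toy version of that idea, independent of ml5 and with illustrative names:

```javascript
// Toy k-nearest-neighbor classifier illustrating the idea behind the
// KNN approach: training examples are {features, label} pairs (in the
// real app, features come from the webcam image and labels from the
// keyboard key pressed during training).
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function classify(examples, features, k = 3) {
  const votes = {};
  examples
    .map((ex) => ({ label: ex.label, d: distance(ex.features, features) }))
    .sort((p, q) => p.d - q.d)  // nearest examples first
    .slice(0, k)                // keep the k closest
    .forEach(({ label }) => { votes[label] = (votes[label] || 0) + 1; });
  // return the label with the most votes among the k neighbors
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}
```

In the real sketch the feature vectors are high-dimensional image features rather than the two-element vectors shown in this toy example, but the voting logic is the same.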
This week I experimented with the ml5.js MobileNet image classifier example. Using this pre-trained model, I made a bedtime checklist website. When you scan items on the checklist, the items are marked green (Figure 1).
I see a possibility to turn this into a real app or website. Before going to bed, the user would scan all the items on their list to prove that they completed certain bedtime routine tasks. For this application to work better, it might be useful to combine a pre-trained model with the opportunity for users to add additional training for their own use case. For instance, the pre-trained model may have many images of toothbrushes that help it detect a toothbrush in general. The user could then add images of their own toothbrush, in the lighting and setting of their own bathroom. This way the model would be even more accurate for individual users.
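The checklist-marking logic itself is simple: when the classifier's top label matches an unchecked item with enough confidence, that item is marked complete (and shown green in the sketch). A minimal version, where the item names, threshold, and function name are assumptions for illustration:

```javascript
// Sketch of the checklist update: given the classifier's top label and
// its confidence, mark the matching unchecked item as done. The 0.7
// confidence threshold is an assumed value, not tuned.
function markChecklist(checklist, label, confidence, threshold = 0.7) {
  return checklist.map((item) =>
    !item.done && item.name === label && confidence >= threshold
      ? { ...item, done: true } // mark complete (rendered green)
      : item
  );
}
```

In the browser sketch, this would run inside the classifier's result callback, with the rendering code coloring each `done` item green.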
Click here to see the code
Click here to test the example