In addition to Nature of Code, I am also taking the Machine Learning for the Web class at ITP. Conveniently, this week both classes covered Teachable Machine and methods like regression and KNN classification. So this week I spent some time going through the Teachable Machine tutorials and examples and built my own image classifiers. Ultimately, I built a program that lets you type a word on the screen using sign language. Figure 1 shows the process of training this program.
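As a rough illustration of what KNN classification is doing under the hood, here is a minimal sketch in plain JavaScript. It assumes each training example has already been reduced to a feature vector (in the real project, a feature extractor would produce these from webcam images); the function name and data shapes are my own, not the actual program's code.

```javascript
// Minimal k-nearest-neighbors classifier sketch.
// examples: array of { features: number[], label: string }
// query: number[] feature vector to classify
// k: how many nearest neighbors get a vote
function knnClassify(examples, query, k) {
  // Score every stored example by squared Euclidean distance to the query.
  const scored = examples.map(({ features, label }) => {
    let dist = 0;
    for (let i = 0; i < features.length; i++) {
      const d = features[i] - query[i];
      dist += d * d;
    }
    return { label, dist };
  });

  // Sort by distance and let the k closest examples vote by label.
  scored.sort((a, b) => a.dist - b.dist);
  const votes = {};
  for (const { label } of scored.slice(0, k)) {
    votes[label] = (votes[label] || 0) + 1;
  }

  // Return the label with the most votes among the neighbors.
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}
```

With hand-sign images, the "features" would come from a pretrained network rather than raw pixels, but the voting logic is the same: new signs are labeled by whichever trained letter they most resemble.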
Figure 2 shows the program after it has been trained. The user signs the letters and writes the word “hello.”
Overall, this program works well; however, I wonder how well it would perform if I saved a model trained on my hand and someone else then tried to use the program without adding any new images to the model. Ideally, after the program is trained, a user could sign without ever having to touch the keyboard. The way the program is built now, when the machine detects a letter, it is printed below the canvas, and a letter is only added to the text on the canvas when the user presses the Enter key. Perhaps in future iterations, an interaction could be added that tells the program when to add a letter without using the keyboard. With more experimentation, I can see how this project has the potential to type more intelligently using sign language. Perhaps the machine could also speak aloud what the user is signing.
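The detect-then-confirm flow described above can be sketched as a tiny piece of state: the classifier's latest prediction is held as a pending letter, and only an explicit confirm (currently the Enter key) commits it to the on-canvas text. The class and method names here are hypothetical, not the program's actual code.

```javascript
// Sketch of the confirm-on-Enter interaction described above.
class SignTyper {
  constructor() {
    this.pending = null; // letter most recently detected by the classifier
    this.text = "";      // committed text shown on the canvas
  }

  // Called whenever the classifier emits a prediction; the letter is
  // only displayed below the canvas, never committed automatically.
  onPrediction(letter) {
    this.pending = letter;
  }

  // Called on Enter (or, in a future iteration, some keyboard-free
  // gesture): commit the pending letter to the canvas text.
  confirm() {
    if (this.pending !== null) {
      this.text += this.pending;
      this.pending = null;
    }
  }
}
```

Framing it this way makes the future direction concrete: replacing the keyboard would just mean calling `confirm()` from a different trigger, such as a dedicated "commit" sign or a pause in signing.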