Balin Fleming, Rouzbeh Modarresi-Yazdi, Akash Banerjee, Ethan Farber, Lauren Keyes, Abdullateef Shodunke
American Sign Language (ASL) is the first language of more than 250,000 people in the US and Canada. Despite this large user population, automatic translation of ASL is not yet widespread, in part because of the difficulty of obtaining high-quality image data for translation.
The goal of our project is to achieve high-quality translation of ASL by using publicly available data and convolutional neural networks to accurately classify images of signs. In the future, we hope to extend recognition to video capture of ASL.
We train on a large dataset of about 26,500 images. Our project has far-reaching applications in making communication easier for multiple stakeholders.
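The classification approach described above can be sketched as a small convolutional network that maps a sign image to letter scores. This is a minimal illustrative sketch, not the project's actual architecture: the layer sizes, the 64x64 input resolution, and the 26-class (A-Z) output are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class ASLClassifier(nn.Module):
    """Hypothetical CNN for classifying still images of ASL letter signs.

    The two-conv-layer design and 26 output classes are illustrative
    assumptions, not the architecture used in the project.
    """

    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB in, 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # one score per letter class

model = ASLClassifier()
logits = model(torch.randn(1, 3, 64, 64))  # one synthetic 64x64 RGB image
print(logits.shape)  # torch.Size([1, 26])
```

A network like this would be trained with a standard cross-entropy loss over the labeled images; the ~26,500-image dataset mentioned above would be split into training and held-out evaluation sets.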