ASL Detection with TensorFlow
- Arturo Arriaga

- Dec 21, 2021
- 1 min read

Motivation
This project resulted from an interest in combining gesture recognition and computer vision. It was completed by a three-person team for a course on Wearable Technology and Computer Vision at the Harvard Extension School.
Members: John Vallente, Arturo Arriaga, Hunter Carty
Two members of the group (Arturo and Hunter) know American Sign Language, and all three members wanted to explore the fundamentals of computer vision. After considering several variations of the project, we ultimately settled on a system that would meet the following group interests:
- Address the needs of American Sign Language users, roughly 500K in the US, and of the more than 70 million signed-language users worldwide (across many variations of signed languages).
- Explore the fundamentals of computer vision.
- Implement object detection and gesture recognition in real time.
Objective
Train a neural network to recognize, in real time, 5 distinct signs used in American Sign Language and display the results to the user.
System Requirements
- Recognize the 5 trained signs with above 50% accuracy.
- Detect, in real time, the sign the user presents.
- Detect when the user has changed signs.
- Detect 2-word sign combinations.
Tools and Frameworks:
- TensorFlow
- Keras - a deep learning API; we used its Sequential model (a model sketch appears in the Data Model section below).
- MediaPipe - holistic detection of hand, face, and pose landmarks (see the keypoint-extraction sketch after this list).
- OpenCV
- Windows or macOS
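
The sketch below shows how MediaPipe Holistic keypoints can be pulled from an OpenCV webcam feed and flattened into one feature vector per frame. It is a minimal illustration of the pipeline described above, not our exact code: the `extract_keypoints` helper is our own framing, and the landmark counts (33 pose, 468 face, 21 per hand) are MediaPipe Holistic's standard outputs.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten pose (33 landmarks x 4 values), face (468 x 3), and each
    # hand (21 x 3) into one 1662-value vector; zero-fill anything
    # MediaPipe did not detect so the vector length stays constant.
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)  # one feature vector per frame
        cv2.imshow('MediaPipe Holistic', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```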
Data Model
Our system was trained to detect the following 5 signs:
1. Hello
2. No
3. Thank you
4. Yes
5. Class
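
A Keras Sequential model can classify a fixed-length window of the keypoint vectors above into one of these 5 signs. The sketch below is one plausible shape for such a model, assuming a 30-frame window and the 1662-value frames from the earlier sketch; the LSTM stack and layer sizes are illustrative assumptions, not our exact architecture.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # assumed frames per sign clip
NUM_FEATURES = 1662    # pose + face + both hands, flattened per frame
SIGNS = ['hello', 'no', 'thank you', 'yes', 'class']

model = Sequential([
    # Stacked LSTMs read the sequence of per-frame keypoint vectors.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    # One probability per sign.
    Dense(len(SIGNS), activation='softmax'),
])
model.compile(optimizer='Adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```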
Demo
Testing results:
We obtained a wide range of accuracy during testing: the system detects a user's sign with an accuracy between 26% and 90%.
We encountered the following challenges with this system:
- Similarity between signs decreased accuracy.
- The frame rate distorted transitions between signs.
- Sign location and movement decreased accuracy.
Code
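The sketch below shows a plausible real-time loop that ties the pieces together: it reuses the `extract_keypoints` helper from the MediaPipe sketch above, keeps a sliding window of the most recent frames, and displays a prediction only when the model's confidence clears a threshold. The model path `asl_signs.h5`, the 30-frame window, and the 0.5 threshold are hypothetical, chosen here only for illustration.

```python
import cv2
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import load_model

model = load_model('asl_signs.h5')  # hypothetical path to the trained model
SIGNS = ['hello', 'no', 'thank you', 'yes', 'class']
SEQUENCE_LENGTH = 30
THRESHOLD = 0.5

sequence = []  # sliding window of per-frame keypoint vectors
mp_holistic = mp.solutions.holistic
cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(extract_keypoints(results))  # helper defined above
        sequence = sequence[-SEQUENCE_LENGTH:]       # keep the last 30 frames
        if len(sequence) == SEQUENCE_LENGTH:
            probs = model.predict(np.expand_dims(sequence, axis=0),
                                  verbose=0)[0]
            if probs.max() > THRESHOLD:
                label = SIGNS[int(np.argmax(probs))]
                cv2.putText(frame, f'{label} ({probs.max():.0%})', (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('ASL detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```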
