
Takes a video from a file or a webcam as input and, if a hand sign is shown, returns its meaning as tags with their probabilities.


raghd-do/Sign-language-translator


Sign language translator

This project helps hearing people understand deaf people by taking a video file or a webcam stream and translating the hand signs shown in it.

Getting Started

These instructions will get you a copy of the project up and running on your computer for development and testing purposes.

Prerequisites

  1. An account in Azure Custom Vision for hand-sign classification. If you don't have one, follow this link to create one:
https://azure.microsoft.com/en-us/services/cognitive-services/custom-vision-service/
  2. Python version 3 or later:
https://www.python.org/
  3. The OpenCV package for Python. After installing Python, run this command in 'cmd' to install it:
$ pip install opencv-python
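With OpenCV installed, the input source can be either a video file path or the default webcam. A minimal sketch of that switch, assuming the helper name and file name below (they are illustrative, not part of this repository):

```python
def capture_target(file_path=None):
    """Return what cv2.VideoCapture should be opened with: the given
    file path, or device index 0 for the default webcam."""
    return file_path if file_path else 0

# Typical use (requires the opencv-python package installed above):
#   import cv2
#   cap = cv2.VideoCapture(capture_target("signs.mp4"))  # or capture_target() for the webcam
#   ok, frame = cap.read()
```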

Fill in the requirements

Once you clone this project to your computer, check these lines of code and fill in what they ask for before running the program.

line 15: 'Prediction-Key': '<enter your prediction key>',
line 17: 'Ocp-Apim-Subscription-id': '<enter your Subscription id>'
line 22: 'application': '<enter your service name>'
line 25: file_path = "<put your video file_path>"
line 45: conn = http.client.HTTPSConnection('eastus.api.cognitive.microsoft.com') # use your own endpoint here, as in this example
line 46: conn.request("POST", "https://eastus.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<enter your project id>/classify/iterations/<enter your iteration name>/image?%s"
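Putting those pieces together, a single prediction request can be sketched as below. This is a sketch under assumptions, not the repository's exact code: the region (`eastus`), API version (`v3.0`), and function names are taken from the example lines above; adjust them to your own resource.

```python
import http.client
import json

def prediction_path(project_id, iteration_name):
    """Build the Custom Vision v3.0 classify path shown in line 46 above."""
    return ("/customvision/v3.0/Prediction/%s/classify/iterations/%s/image"
            % (project_id, iteration_name))

def classify_frame(image_bytes, prediction_key, project_id, iteration_name):
    """POST one JPEG-encoded frame to the prediction endpoint and return
    the parsed JSON response. Requires real credentials to actually run."""
    conn = http.client.HTTPSConnection("eastus.api.cognitive.microsoft.com")
    conn.request("POST", prediction_path(project_id, iteration_name),
                 body=image_bytes,
                 headers={"Content-Type": "application/octet-stream",
                          "Prediction-Key": prediction_key})
    result = json.loads(conn.getresponse().read())
    conn.close()
    return result
```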

Running the tests

When you run the code with a video file or the webcam, you get output like this:

b'{"id":"**********","project":"**********","iteration":"**********","created":"****-**-*****:**:**.****","predictions":[{"probability":0.791616261,"tagId":"30768361-7f2d-4911-9ac0-961f68c628fd","tagName":"No hand sign"},...
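The response body arrives as a JSON byte string in the shape above; the most probable tag can be picked out of it like this (the sample bytes are illustrative, with the same structure as the real output):

```python
import json

# Sample response bytes in the same shape as the output above (values illustrative).
raw = (b'{"id":"abc","predictions":['
       b'{"probability":0.791616261,"tagName":"No hand sign"},'
       b'{"probability":0.12,"tagName":"Hello"}]}')

result = json.loads(raw)
# Pick the prediction with the highest probability.
best = max(result["predictions"], key=lambda p: p["probability"])
print("%s (%.2f)" % (best["tagName"], best["probability"]))
```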

Once you are done, go translate some sign language to enrich your service by tagging what you capture.

Built With

  • Anaconda - The World's Most Popular Python/R Data Science Platform
  • Atom - A hackable text editor built by GitHub
  • OpenCV - Open Source Computer Vision Library

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Authors

Acknowledgments

  • Advanced Electronics Company
  • Engineer.Abdulaziz AlSadhan
  • Engineer.Mohammad AlJabr
  • Dev.Hessa AlZamel
