This repository contains Python code for an American Sign Language (ASL) detection project using multiclass classification. The project uses YOLO (You Only Look Once) and a MobileNet-SSD model (MobileNetSSD_deploy) for object detection, and the trained model achieves 91% accuracy. The code lets you predict signs from both images and videos.
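At a high level, the detector first localizes the region containing the sign, and a classifier then assigns it to one of the ASL letter classes. The sketch below illustrates that flow with OpenCV's `dnn` module and a Keras classifier; the file names (`MobileNetSSD_deploy.prototxt`, `MobileNetSSD_deploy.caffemodel`, `asl_model.h5`), the 64x64 classifier input size, and the preprocessing constants are assumptions and may differ from the actual scripts in this repository.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical file names -- adjust to match the files in this repository.
PROTO = "MobileNetSSD_deploy.prototxt"
WEIGHTS = "MobileNetSSD_deploy.caffemodel"
CLASSIFIER = "asl_model.h5"

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)  # SSD detector for localization
classifier = load_model(CLASSIFIER)             # multiclass ASL letter classifier

def predict_sign(image):
    """Detect the most confident region in a BGR image and classify it."""
    h, w = image.shape[:2]

    # Run the SSD on a 300x300 blob, the input size the deploy prototxt expects.
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7): [_, class, conf, x1, y1, x2, y2]

    # Crop the highest-confidence detection and classify it.
    best = detections[0, 0, np.argmax(detections[0, 0, :, 2])]
    x1, y1, x2, y2 = (best[3:7] * np.array([w, h, w, h])).astype(int)
    crop = image[max(y1, 0):y2, max(x1, 0):x2]
    crop = cv2.resize(crop, (64, 64)) / 255.0   # 64x64 input size is an assumption

    probs = classifier.predict(crop[np.newaxis, ...])
    return int(np.argmax(probs))                # index of the predicted letter class
```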
The dataset used for training and evaluation can be found on Kaggle: ASL Dataset. It consists of a collection of images representing different letters and gestures in American Sign Language.
To run the code in this repository, you'll need the following dependencies:
- Python 3.x
- OpenCV
- TensorFlow
- Keras
- NumPy
- Matplotlib
You can install the required packages using pip:

    pip install opencv-python tensorflow keras numpy matplotlib
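To confirm the installation, you can print the installed versions (a quick optional check, not part of the project scripts):

```python
# Optional sanity check: all dependencies should import and report a version.
import cv2
import tensorflow as tf
import keras
import numpy as np
import matplotlib

print("OpenCV:", cv2.__version__)
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
print("NumPy:", np.__version__)
print("Matplotlib:", matplotlib.__version__)
```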
- Clone this repository to your local machine:

      git clone https://github.com/your-username/your-repository.git
      cd your-repository
- Download the ASL dataset from the provided link and place it in the appropriate directory.
- Run the script to predict signs from an image:

      python predict_image.py --image path/to/your/image.jpg
- Run the script to predict signs from a video (a sketch of the frame-by-frame loop appears after these steps):

      python predict_video.py --video path/to/your/video.mp4
Make sure to replace `path/to/your/image.jpg` and `path/to/your/video.mp4` with the actual paths to your image and video files, respectively.
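For reference, the video script roughly follows a frame-by-frame loop like the one sketched below. This is a minimal sketch rather than the exact contents of `predict_video.py`: the argument handling, the on-frame label overlay, and the `predict_sign` stub (which stands in for the detection and classification step sketched earlier) are assumptions.

```python
import argparse
import cv2

def predict_sign(frame):
    """Stand-in for the detection + classification step sketched above."""
    return "A"  # placeholder label

parser = argparse.ArgumentParser(description="Predict ASL signs from a video file")
parser.add_argument("--video", required=True, help="path to the input video")
args = parser.parse_args()

cap = cv2.VideoCapture(args.video)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break                                   # end of video
    label = predict_sign(frame)                 # classify the current frame
    cv2.putText(frame, str(label), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL prediction", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press q to quit early
        break

cap.release()
cv2.destroyAllWindows()
```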
The trained model achieves an accuracy of 91% on the ASL dataset. You can modify the code and experiment with different architectures or hyperparameters to try to improve performance further.
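As a starting point for such experiments, a fine-tuning pass with a lower learning rate might look like the sketch below. The model file name (`asl_model.h5`), the dataset directory layout (one folder per letter class), and the 64x64 input size are assumptions, not details confirmed by this repository.

```python
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical fine-tuning sketch; file names, paths, and sizes are assumptions.
model = load_model("asl_model.h5")
model.compile(optimizer=Adam(learning_rate=1e-4),  # lower learning rate for fine-tuning
              loss="categorical_crossentropy",
              metrics=["accuracy"])

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train = datagen.flow_from_directory("asl_dataset/", target_size=(64, 64),
                                    class_mode="categorical", subset="training")
val = datagen.flow_from_directory("asl_dataset/", target_size=(64, 64),
                                  class_mode="categorical", subset="validation")

model.fit(train, validation_data=val, epochs=10)
```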
- The ASL dataset used in this project was sourced from Kaggle: ASL Dataset.
This project is licensed under the MIT License.