![Working Video](link-to-the-video)
This repository contains a real-time traffic analysis system leveraging YOLOv8 for object detection and unique vehicle tracking. The system is capable of analyzing live video feeds, assigning unique IDs to detected vehicles, plotting their trajectories, and counting vehicles entering specific zones or lanes.
- Real-time Object Detection: Utilizes YOLOv8 for efficient and accurate detection of vehicles in live video feeds.
- Unique Vehicle Tracking: Employs ByteTrack for tracking each vehicle uniquely across frames.
- Trajectory Plotting: Plots the trajectories of vehicles to visualize their paths.
- Zone-based Counting: Counts the number of vehicles entering specific zones or lanes.
- Supervision: Integrates with Supervision for enhanced monitoring and visualization.
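These pieces compose into a short loop. Below is a minimal sketch of the detect → track → count pipeline, assuming the `ultralytics` and `supervision` APIs; the weights/video paths match the files downloaded during install, the zone polygon is a placeholder, and this is not the repo's exact code:

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("data/traffic_analysis.pt")   # fine-tuned detection weights
tracker = sv.ByteTrack()                   # assigns a persistent ID per vehicle
traces = sv.TraceAnnotator()               # draws each vehicle's trajectory
# placeholder polygon -- replace with real lane/zone coordinates
zone = sv.PolygonZone(polygon=np.array([[0, 0], [640, 0], [640, 360], [0, 360]]))

for frame in sv.get_video_frames_generator("data/traffic_analysis.mov"):
    result = model(frame, verbose=False)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)
    zone.trigger(detections)               # updates zone.current_count
    annotated = traces.annotate(scene=frame.copy(), detections=detections)
```

After each `trigger` call, `zone.current_count` holds the number of tracked vehicles currently inside the polygon.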
## 💻 install

- clone the repository and navigate to the project directory

  ```bash
  git clone https://github.com/Surajpatra700/Traffic_Analysis.git
  cd Traffic_Analysis
  ```
- setup python environment and activate it [optional]

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```
- install required dependencies

  ```bash
  pip install -r requirements.txt
  ```
- download `traffic_analysis.pt` and `traffic_analysis.mov` files

  ```bash
  ./setup.sh
  ```
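Optionally, sanity-check the environment; this assumes `requirements.txt` pins `ultralytics` and `supervision`, which the example scripts import:

```python
# confirm the core dependencies installed correctly
import supervision as sv
import ultralytics

print("supervision", sv.__version__)
print("ultralytics", ultralytics.__version__)
```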
## 🛠️ script arguments

- ultralytics
  - `--source_weights_path`: Required. Specifies the path to the YOLO model's weights file, which is essential for the object detection process. This file contains the data that the model uses to identify objects in the video.
  - `--source_video_path`: Required. The path to the source video file that will be analyzed. This is the input video on which traffic flow analysis will be performed.
  - `--target_video_path` (optional): The path to save the output video with annotations. If not specified, the processed video will be displayed in real time without being saved.
  - `--confidence_threshold` (optional): Sets the confidence threshold for the YOLO model to filter detections. Default is `0.3`. This determines how confident the model should be to recognize an object in the video.
  - `--iou_threshold` (optional): Specifies the IoU (Intersection over Union) threshold for the model. Default is `0.7`. This value is used to manage object detection accuracy, particularly in distinguishing between different objects. (See the sketch after this list for how the two thresholds reach the model.)
- inference
  - `--roboflow_api_key` (optional): The API key for Roboflow services. If not provided directly, the script tries to fetch it from the `ROBOFLOW_API_KEY` environment variable (see the sketch after the run commands). Follow this guide to acquire your `API KEY`.
  - `--model_id` (optional): Designates the Roboflow model ID to be used. The default value is `"vehicle-count-in-drone-video/6"`.
  - `--source_video_path`: Required. The path to the source video file that will be analyzed. This is the input video on which traffic flow analysis will be performed.
  - `--target_video_path` (optional): The path to save the output video with annotations. If not specified, the processed video will be displayed in real time without being saved.
  - `--confidence_threshold` (optional): Sets the confidence threshold for the YOLO model to filter detections. Default is `0.3`. This determines how confident the model should be to recognize an object in the video.
  - `--iou_threshold` (optional): Specifies the IoU (Intersection over Union) threshold for the model. Default is `0.7`. This value is used to manage object detection accuracy, particularly in distinguishing between different objects.
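As referenced above, here is a hedged sketch of how flags like these are typically wired (the repo's scripts may differ): `argparse` carries the documented defaults, and the two thresholds pass straight through to the detector, where `conf` drops low-confidence boxes and `iou` controls non-maximum-suppression overlap.

```python
import argparse
from ultralytics import YOLO

parser = argparse.ArgumentParser()
parser.add_argument("--source_weights_path", required=True)
parser.add_argument("--source_video_path", required=True)
parser.add_argument("--target_video_path", default=None)
parser.add_argument("--confidence_threshold", type=float, default=0.3)
parser.add_argument("--iou_threshold", type=float, default=0.7)
args = parser.parse_args()

model = YOLO(args.source_weights_path)
# stream=True yields per-frame results instead of buffering the whole video
for result in model(args.source_video_path, stream=True,
                    conf=args.confidence_threshold, iou=args.iou_threshold):
    pass  # hand each result to tracking/annotation as in the pipeline sketch
```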
## ⚙️ run

- ultralytics

  ```bash
  python ultralytics_example.py \
      --source_weights_path data/traffic_analysis.pt \
      --source_video_path data/traffic_analysis.mov \
      --confidence_threshold 0.3 \
      --iou_threshold 0.5 \
      --target_video_path data/traffic_analysis_result.mov
  ```
- inference

  ```bash
  python inference_example.py \
      --roboflow_api_key <ROBOFLOW API KEY> \
      --source_video_path data/traffic_analysis.mov \
      --confidence_threshold 0.3 \
      --iou_threshold 0.5 \
      --target_video_path data/traffic_analysis_result.mov
  ```
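Instead of passing the key on the command line, you can `export ROBOFLOW_API_KEY=<ROBOFLOW API KEY>` first. A small sketch of the documented fallback, with `resolve_api_key` as a hypothetical helper (not necessarily in the repo):

```python
import os

def resolve_api_key(cli_value: str | None) -> str:
    """Prefer the --roboflow_api_key flag; fall back to ROBOFLOW_API_KEY."""
    key = cli_value or os.environ.get("ROBOFLOW_API_KEY")
    if not key:
        raise ValueError("pass --roboflow_api_key or set ROBOFLOW_API_KEY")
    return key
```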
The following is a demonstration of the system in action:
![Working Video](link-to-the-video)
Key Results:
- Real-time detection and aerial view tracking of vehicles.
- Visualization of vehicle trajectories.
- Accurate counting of vehicles entering specific zones.
## © license

This project is licensed under the MIT License. See the LICENSE file for details.