Hybrid Stereo Autonomous Vehicle System

Road sign pose estimation for autonomous vehicles.

Installation

1 - Requirements

  1. CMake
  2. Python 3.12+ with pip (older versions may work, but development has been done on 3.12)
  3. Git
  4. CP210x USB to UART Bridge VCP drivers
  5. PyTorch (Ultralytics installs PyTorch automatically, but installing it manually is recommended)
  6. Ultralytics (installed automatically by the setup scripts)

2 - Clone the Repository

git clone --recursive https://github.com/NathanQ2/HybridStereoAutonomousVehicleSystem.git

3 - Setup Virtual Environment

A virtual environment is generally recommended to prevent package version conflicts with other projects. Note: on Linux, you may need to install the Python venv package for your Python version (Debian example: sudo apt install python3.12-venv).

sudo apt-get update && sudo apt-get upgrade -y
sudo apt install python[VERSION]-venv

3.1 - Windows/Mac/Linux

python3 -m venv venv

Then activate it: source venv/bin/activate on Mac/Linux, or venv\Scripts\activate on Windows.

4 - Build RP_LiDAR_Interface_Cpp

4.1 - Download and Install the Appropriate Drivers

The CP210x USB to UART Bridge VCP Drivers are required to communicate with the LiDAR over USB.

4.2 - Build Using CMake

cd HybridStereoAutonomousVehicleSystem
cd vendor/RP_LiDAR_Interface_Cpp
mkdir build
cd build
cmake ..
cmake --build .
cd ../../../

5 - Install Required Python Packages

python3 -m pip install -r requirements.txt

6 - Training the Model

At the moment, training configuration parameters are edited directly in the train.py file. I trained with epochs=1000, device=[0], batch=-0.90 and got reasonable results.

python3 src/test/train.py
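For reference, the parameters mentioned above can be collected in one place. This is only an illustrative sketch of the configuration, not the actual contents of train.py; note that in Ultralytics, batch=-1 means AutoBatch and a value between 0 and 1 means a fraction of GPU memory, so the -0.90 above may be intended as 0.90.

```python
# Hypothetical snapshot of the training hyperparameters described in this README.
TRAIN_KWARGS = {
    "epochs": 1000,   # long schedule used by the author
    "device": [0],    # first CUDA GPU
    "batch": -0.90,   # value from the README; possibly intended as 0.90 (GPU-memory fraction)
}
```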

7 - Run the main.py File!

7.1 - Windows

python3 src/main/main.py [lidar com port Ex: com3]

7.2 - Linux

python3 src/main/main.py [lidar com port Ex: /dev/ttyUSB0]

7.3 - Mac

python3 src/main/main.py [lidar com port Ex: /dev/ttySLAB_USBtoUART]
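In each case the single positional argument is the LiDAR's serial port. A minimal sketch of how such an argument might be read (hypothetical; the real main.py's argument handling may differ):

```python
# Hypothetical argument handling: take the LiDAR serial port as the first
# positional argument, e.g. "com3" on Windows or "/dev/ttyUSB0" on Linux.
import sys

def parse_port(argv: list[str]) -> str:
    if len(argv) < 2:
        raise SystemExit("usage: main.py <lidar serial port>  (e.g. com3 or /dev/ttyUSB0)")
    return argv[1]

if __name__ == "__main__":
    print(parse_port(sys.argv))
```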

How It Works

Hardware

I chose to use two generic USB webcams in a stereo configuration with a baseline of ~190.5 millimeters, which allows basic depth estimation from the disparity between the two images. Camera calibration constants can be found in the rightCameraProperties.json and leftCameraProperties.json files respectively; the other JSON files were generated by calibdb.net. I've also chosen to use a SLAMTEC RP LiDAR A1 rotating LiDAR for more accurate depth perception when available.

[Image: Setup Without LiDAR] [Image: Setup With LiDAR]
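The stereo depth estimate follows the standard pinhole relation Z = f·B/d. The baseline B = 190.5 mm comes from this README; the focal length and disparity below are made-up illustrative values, not the repo's calibration constants.

```python
# Depth from stereo disparity: Z = f * B / d (pinhole model, rectified cameras).
def depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# e.g. a 700 px focal length and 50 px disparity with the ~190.5 mm baseline
print(depth_mm(700.0, 190.5, 50.0))  # → 2667.0 mm, about 2.7 m
```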

Software

Sign Detection

Sign detection is done using a pre-trained YOLOv8 model for each camera.
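With a detection from each camera, one plausible way to combine them is to pair boxes of the same class by vertical overlap and take the horizontal offset as the disparity fed into the depth formula. This is a hypothetical sketch of that pairing step, not the repo's actual code; the (class_id, x_center, y_center) box format is an assumption.

```python
# Hypothetical: match detections between left and right frames by class and
# vertical proximity; for rectified stereo, disparity = x_left - x_right > 0.
def match_detections(left, right, max_dy=20.0):
    pairs = []
    for cls_l, xl, yl in left:
        for cls_r, xr, yr in right:
            if cls_l == cls_r and abs(yl - yr) <= max_dy and xl > xr:
                pairs.append((cls_l, xl - xr))  # (class, disparity in pixels)
    return pairs
```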
