In this project, readers will learn how to create a standard real-time project using OpenCV (for desktop) and how to perform a new method of marker-less augmented reality that uses the actual environment as the input instead of printed square markers. It covers some of the theory of marker-less AR and shows how to apply it in useful projects. Please get in touch if you need a professional marker-less AR project with very high accuracy!
See the related Medium post for more information!
- MarkerlessAR_V1
- MarkerlessAR_V2
- When my OpenGL code works :D
TODOs:
- Fix the performance issues:
  - Run "detection" and "tracking" in two separate threads.
  - Once the target image is detected, track the keypoints with sparse optical flow (calcOpticalFlowPyrLK) and compute the camera pose (solvePnP) instead of performing feature detection and matching on every frame; run feature detection again only when tracking is lost for most of the keypoints (see the sketch after this list).
- Apply dimensionality reduction to the keypoints to make the pattern detector more robust.
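A minimal sketch of that detection/tracking split, using OpenCV's calcOpticalFlowPyrLK and solvePnP. The camera index, intrinsics, and the detectPattern helper are placeholders/assumptions for illustration, not code from this repository:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // camera index is an assumption
    if (!cap.isOpened()) return -1;

    // Placeholder intrinsics; replace with your own calibration.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);

    std::vector<cv::Point3f> pattern3d;  // 3D pattern points, filled by the detection step
    std::vector<cv::Point2f> tracked2d;  // their 2D locations in the previous frame
    cv::Mat prevGray;
    bool tracking = false;

    for (;;) {
        cv::Mat frame, gray;
        if (!cap.read(frame)) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (tracking && !tracked2d.empty()) {
            // Track the existing keypoints with sparse optical flow instead of re-detecting.
            std::vector<cv::Point2f> next2d;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, tracked2d, next2d, status, err);

            // Keep only the successfully tracked points and their 3D counterparts.
            std::vector<cv::Point2f> kept2d;
            std::vector<cv::Point3f> kept3d;
            for (size_t i = 0; i < status.size(); ++i) {
                if (status[i]) {
                    kept2d.push_back(next2d[i]);
                    kept3d.push_back(pattern3d[i]);
                }
            }
            tracked2d = kept2d;
            pattern3d = kept3d;

            if (tracked2d.size() < 10) {
                // Most keypoints are lost: fall back to full detection on the next frame.
                tracking = false;
            } else {
                // Camera pose from the remaining 2D-3D correspondences.
                cv::Mat rvec, tvec;
                cv::solvePnP(pattern3d, tracked2d, K, dist, rvec, tvec);
                // rvec/tvec would drive the OpenGL rendering of the virtual content.
            }
        } else {
            // Full detection (feature detection + matching against the target image)
            // would run here and refill pattern3d / tracked2d, e.g.:
            // tracking = detectPattern(gray, pattern3d, tracked2d);  // hypothetical helper
        }

        prevGray = gray.clone();
        cv::imshow("frame", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```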
If you use this code in your publications, please cite it as:
@ONLINE{mar,
  author = "Ahmet Özlü",
  title = "Marker-less Augmented Reality with OpenCV and OpenGL",
  year = "2018",
  url = "https://github.com/ahmetozlu/augmented_reality"
}
Ahmet Özlü
This system is available under the MIT license. See the LICENSE file for more info.