aligning the face #19
The difference is subtle.
In warpAffine(), *face_aligned is a [2x3] matrix, whereas in warpPerspective(), aligned is a [3x3] matrix. For more info see the OpenCV docs <https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html#:~:text=warpAffine%20takes%20a%202x3%20transformation,3x3%20transformation%20matrix%20as%20input.>.

For some faces that are turned by almost +90 degrees, the alignment sometimes turns the face upside down.
If not implemented, try to get a check on the rotation before acceptance.
I see.
The key to solving this issue is the aligned [3x3] matrix.
Its factors determine how the image is transformed.
Investigate the factors; it is relatively simple to detect a large rotation.
The OpenCV docs <https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html#:~:text=warpAffine%20takes%20a%202x3%20transformation,3x3%20transformation%20matrix%20as%20input.> give you more information.

Hi,
In the Python version I never came across such an upside-down (180-degree) rotation.
PS: What do you recommend as a model for a Raspberry Pi Zero 2 W for human detection? Trying to stay below 35 USD cost-wise.
Best
@Qengineering
Tip:
You: Can you calculate in a C++ program using the OpenCV function cv::warpPerspective(SmallFrame, aligned, m, cv::Size(96, 112), cv::INTER_LINEAR); a rotation angle of the image in degrees?

Answer ChatGPT:
Certainly! To calculate the rotation angle from a perspective transformation matrix, you can use the following C++ code. This assumes that the transformation matrix m is a 3x3 matrix obtained from cv::getPerspectiveTransform or some other method.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Read the original image
    cv::Mat SmallFrame = cv::imread("path/to/your/image.jpg");

    // Define the transformation matrix (you should have this matrix from somewhere)
    cv::Mat m = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);

    // Perform the perspective transformation
    cv::Mat aligned;
    cv::warpPerspective(SmallFrame, aligned, m, cv::Size(96, 112), cv::INTER_LINEAR);

    // Calculate the rotation angle from the matrix elements
    double angle = atan2(m.at<double>(1, 0), m.at<double>(0, 0)) * 180.0 / CV_PI;

    // Print the rotation angle
    std::cout << "Rotation angle: " << angle << " degrees." << std::endl;
    return 0;
}
```

In this code, atan2 is used to calculate the angle from the transformation matrix elements. The rotation angle is stored in the variable angle and then printed. Note that this assumes the transformation matrix represents only a rotation and not a more complex transformation. If your transformation matrix includes other elements, you may need a more sophisticated approach to extract rotation information.
Where are we heading with these AIs...
Or is humanity's intellect so low that we are impressed by AI?
Thanks
Believe me, I have the same feelings towards AI as you. I hesitated a lot.
Quick note:
The code above does not produce the right result unless you change double to float.
By the way, I am trying to use the YoloV5 C++ face detection (the version that does not return an accuracy percentage).
I haven't any experience with YoloV7 or V8 for face detection.
Please note that not only the faces need to be detected, but also the positions of the eyes, nose and mouth.
I'm not sure if the face detectors based on Yolo feature this.

The YoloV5 face detector also comes with landmarks, 5 points.
OK.. You could try it. However, YoloV5 might be a lot slower. (??)
What is your suggestion? RetinaFace?
As shown in the README, there are two options implemented: MTCNN or RetinaFace. The latter is somewhat slower.
Hi, @Qengineering
What I noticed is that you are aligning the face in TWrap.
I saw this in the ncnn repo:

```cpp
cv::warpAffine(img_src.clone(), *face_aligned, transfer_mat, cv::Size(112, 112), 1, 0, 0);
```

Does this make a huge difference? You are using cv::Size(96, 112) and resizing after that, while the call above warps directly to 112x112.
Would you mind taking a look at it?