Inference on LineMOD dataset doesn't work #62
Comments
Hi @saeedEnte, it sounds like something is going wrong during inference, because your assumptions are correct. EfficientPose uses rotations in the axis-angle format instead of a rotation matrix. It therefore produces rotation vectors of length 3, as you mentioned, and not 3x3 rotation matrices. The GT rotations are converted from matrix to axis-angle vector in the generator when the dataset annotations are loaded. For example, in generators/linemod.py line 433: …
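To make the format difference concrete, here is a minimal numpy sketch (not the repository's own code, which may instead use a library routine such as cv2.Rodrigues) of converting a 3x3 rotation matrix, as stored in gt.yml, into the length-3 axis-angle vector that the model regresses. The edge case of a rotation angle near pi is deliberately ignored to keep the sketch short:

```python
import numpy as np

def rotation_matrix_to_axis_angle(R):
    """Convert a 3x3 rotation matrix to an axis-angle vector of length 3.

    The returned vector points along the rotation axis and its norm is the
    rotation angle in radians (the compact format EfficientPose regresses).
    Note: the angle == pi edge case is not handled in this sketch.
    """
    # Rotation angle from the trace: trace(R) = 1 + 2 * cos(theta)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        # No rotation: the axis is undefined, return the zero vector
        return np.zeros(3)
    # Rotation axis from the skew-symmetric part of R
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return axis * angle

# Example: a 90 degree rotation about the z-axis
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
rvec = rotation_matrix_to_axis_angle(R)  # approximately [0, 0, pi/2]
```

This explains the dimension mismatch the question raises: the 3x3 matrix in gt.yml and the length-3 network output encode the same rotation, just in different parameterizations.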
Thank you for the detailed answer. I have used the provided …
Did you also adjust the …
I have tried the 8th folder with "driller" (as I have seen you comment in another issue that the 8th object is the driller), but it still did not work (with the network trained on the 8th object). The dictionary should always have the key …
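On the folder-number/object-name mapping discussed above: the LineMOD object IDs follow a fixed convention in the literature (e.g. 8 is the driller). The exact dictionary the repository expects is truncated in the comment above, so the following is only an assumed sketch of such a lookup table, not the repository's actual structure:

```python
# Assumed mapping from LineMOD subfolder number to object name.
# IDs 3 (bowl) and 7 (cup) are commonly excluded from the benchmark.
linemod_object_names = {
    1: "ape", 2: "benchvise", 4: "camera", 5: "can", 6: "cat",
    8: "driller", 9: "duck", 10: "eggbox", 11: "glue",
    12: "holepuncher", 13: "iron", 14: "lamp", 15: "phone",
}

# A checkpoint trained on object 8 should therefore be paired with the
# driller's data folder and name entry, not with another object's.
assert linemod_object_names[8] == "driller"
```

The practical point is that the weights, the data subfolder, and the name mapping must all refer to the same object ID, otherwise inference results are meaningless.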
Hi,
thank you @ybkscht for the implementation. It is clear and well documented. The problem I have:
I downloaded a trained network whose link is provided in the README (I used the object-15 folder). Assuming that object-15 means the network was trained on the 15th folder of the preprocessed LineMOD data, I tried to run inference on an RGB image from that sub-directory. The model produces NaN for the rotation and translation heads. The detections are also not meaningful (100 boxes were detected, each with a confidence of 1.0).
I have tried detecting several objects. For the camera intrinsics, I assume that the method in inference.py delivers the right parameters. Another question: the implementation (the cost function, more precisely) takes rotation and translation parameters. What exactly is the dimension of the rotation parameter? In gt.yml it is a 3x3 matrix, but the model produces a vector of length 3 for each box. I cannot follow the logic of the dimensions here. Thanks in advance for your answer.
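Regarding the camera intrinsics mentioned above: LineMOD was captured with a single fixed camera, so the intrinsics are constant for the whole dataset. Below is a sketch using the standard LineMOD intrinsics values quoted in the literature; these are assumed to match what inference.py returns, so verify against the repository before relying on them:

```python
import numpy as np

# Standard LineMOD camera intrinsics (fx, fy, cx, cy) as quoted in the
# literature; assumed (not verified here) to match inference.py.
fx, fy = 572.4114, 573.57043
cx, cy = 325.2611, 242.04899

camera_matrix = np.array([[fx, 0.0, cx],
                          [0.0, fy, cy],
                          [0.0, 0.0, 1.0]])

def project(point_3d, K):
    """Pinhole projection of one 3D camera-frame point to pixel coords."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]

# A point on the optical axis projects to the principal point (cx, cy)
u, v = project(np.array([0.0, 0.0, 1.0]), camera_matrix)
```

If the wrong intrinsics were used, the translation head in particular would produce poses that do not reproject correctly, so checking this matrix is a reasonable first debugging step.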