
result in ucf101 #11

Open · qichenghan666 opened this issue Jul 26, 2019 · 13 comments

Comments

@qichenghan666

I trained the model on UCF101. After 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result?

@HorusMaster

> I trained the model on UCF101. After 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result?

Hi, it looks good according to the paper, but how did it perform at inference time?

@tomanick

tomanick commented Sep 17, 2019

I also trained the model on the UCF101 dataset. At first I used the default hyperparameters the author gave and got 73.50% top-1 accuracy. Afterwards, I changed the learning-rate schedule from StepLR to ReduceLROnPlateau and tuned hyperparameters such as the learning rate, weight decay, and so on. The top-1 accuracy became about 94%.

Therefore, I think we should use our own hyperparameters and learning-rate schedule so that we can get better results on our own datasets.
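The StepLR-to-ReduceLROnPlateau switch described above can be sketched roughly like this; the model and loss values below are placeholders, not the actual training code from this repo:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 101)  # stand-in for the SlowFast network

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

# Instead of decaying on a fixed epoch schedule (StepLR), cut the LR by 10x
# only when the monitored metric stops improving for `patience` epochs.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)

for epoch in range(3):
    # ... one epoch of training would go here ...
    val_loss = 1.0 / (epoch + 1)  # placeholder validation loss
    scheduler.step(val_loss)      # step once per epoch on the metric, not per batch
```

Note that ReduceLROnPlateau is stepped with a metric, so it needs something to monitor, typically a validation loss.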

@patykov

patykov commented Sep 17, 2019

@tomanick how did you search for the hyperparameters? Would you mind sharing the ones that have worked for you?

@tomanick

@patykov
Maybe there is no standard answer for hyperparameters; I usually find better ones by trial and error. On the UCF101 dataset, I used learning rate = 0.001, weight decay = 0.0005, batch size = 16, and clip length = 64. The learning-rate schedule I used was ReduceLROnPlateau. In addition, I also found that using AdamW as the optimizer can sometimes give better results than SGD.

Finally, there may well be better hyperparameters than the ones I used. Just give it a try and maybe you will find a surprise. Good luck!
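For reference, the settings reported above could be wired up as in the sketch below; the model and the spatial resolution are placeholders (the real network and input size come from the repo), only the learning rate, weight decay, batch size, and clip length are taken from the comment:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Conv3d(3, 64, kernel_size=3)  # stand-in for the SlowFast network

# Reported values: lr = 0.001, weight decay = 0.0005, AdamW, ReduceLROnPlateau.
optimizer = optim.AdamW(model.parameters(), lr=0.001, weight_decay=0.0005)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

batch_size, clip_len = 16, 64
# One training batch shaped [batch, channel, temporal, h, w]; the 8x8
# spatial size is just to keep this toy example cheap.
clip = torch.randn(batch_size, 3, clip_len, 8, 8)
```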

@MStumpp

MStumpp commented Sep 21, 2019

@tomanick did you find a way to set the temporal stride τ used on the input frames? In the paper they use values such as 16, 8, ...

@sdjsngs

sdjsngs commented Oct 12, 2019

@MStumpp Do you want to set the temporal stride to 16, 8, etc.? Look into slowfastnet.py (class slowfast, method forward) and you will find code like slow = self.SlowPath(input[:, :, ::16, :, :], lateral). The input is a 5D tensor with dimensions [batch, channel, temporal, h, w], so if your input has 64 frames, the temporal axis has length 64 and you can slice it with ::16, which selects 4 of the 64 frames with a stride of 16.
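The ::16 slicing can be checked on a dummy tensor; the stride of 2 for the fast pathway below is an illustrative assumption, not taken from the repo:

```python
import torch

# Dummy 5D input shaped [batch, channel, temporal, h, w] with 64 frames.
x = torch.randn(2, 3, 64, 8, 8)

slow_input = x[:, :, ::16, :, :]  # stride 16 -> 4 frames for the slow pathway
fast_input = x[:, :, ::2, :, :]   # stride 2 (assumed) -> 32 frames for the fast pathway

print(slow_input.shape)  # torch.Size([2, 3, 4, 8, 8])
print(fast_input.shape)  # torch.Size([2, 3, 32, 8, 8])
```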

@anuar12

anuar12 commented Oct 16, 2019

@tomanick @qichenghan666 @HorusMaster Did you try to validate this model on the datasets from the paper (Kinetics, Charades, or anything else)? Does the accuracy look representative of the paper?

@luwanglin

> I trained the model on UCF101. After 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result?

What is the accuracy on your validation set? I hope you can tell me, thank you.

@sdjsngs

sdjsngs commented May 7, 2020

@tomanick Is the ~94% top-1 accuracy the training accuracy when training from scratch?
And since you use ReduceLROnPlateau as your LR scheduler, how did you get a validation set from UCF101? The official splits are three train/test splits. Or did you just feed the training loss to ReduceLROnPlateau? I accidentally used a wrong train/validation/test split and got 91% test accuracy training from scratch; once I fixed the split, the top-1 accuracy was only about 65%, which is close to the facebook-slowfast result. I sincerely look forward to your reply!
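One common way to get a validation set without touching the official test split is to hold out a fraction of the official train list (e.g. trainlist01.txt from the UCF101 train/test split files) and let ReduceLROnPlateau monitor the loss on that held-out part. A minimal sketch, with the file path and list format as assumptions:

```python
import random

def split_train_val(train_list_path, val_fraction=0.1, seed=0):
    """Shuffle the entries of an official train list and hold out a
    fraction for validation; the test list stays untouched for final
    evaluation."""
    with open(train_list_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    random.Random(seed).shuffle(lines)  # fixed seed -> reproducible split
    n_val = int(len(lines) * val_fraction)
    return lines[n_val:], lines[:n_val]  # (train entries, validation entries)
```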

@yangjinghit

@HorusMaster
Did you get the inference code in the end? For example, how do you run prediction on a single short video?

@Weed-eat-meat

> I trained the model on UCF101. After 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result?

I want to confirm your experimental results: is this the experiment of training SlowFast on UCF101 from scratch? As you said, "after 70 epoch, the top 1 accuracy is 73.3%, the top 5 accuracy is 95.7% in training data". Which of the author's implementations are you using, and have any changes been made?

@wangpanpan666

> I trained the model on UCF101. After 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result?
>
> I want to confirm your experimental results: is this the experiment of training SlowFast on UCF101 from scratch? Which of the author's implementations are you using, and have any changes been made?

Use a slightly better GPU and the accuracy goes up, hahaha.

@vsumm

vsumm commented Aug 26, 2022

Hi @wangpanpan666 @r1c7,

After training completes, how do I test the model?

Can anyone provide inference code? It would be very useful to me.

Thanks
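For anyone still looking: single-video inference generally looks like the sketch below. The prediction helper is generic PyTorch; the commented-out usage lines (the slowfastnet.resnet50 constructor, the checkpoint file name, and the load_frames helper) are assumptions about this repo's API, not verified code.

```python
import torch
import torch.nn.functional as F

def predict_clip(model, clip, k=5):
    """clip: float tensor [1, 3, T, H, W], preprocessed the same way
    as the training data (resize, crop, normalization)."""
    model.eval()                          # disable dropout / use BN running stats
    with torch.no_grad():
        logits = model(clip)              # [1, num_classes]
        probs = F.softmax(logits, dim=1)  # class probabilities
        topk = probs.topk(k, dim=1)       # top-k scores and class indices
    return topk.indices[0].tolist(), topk.values[0].tolist()

# Usage (assumed repo API, not verified):
# import slowfastnet
# model = slowfastnet.resnet50(class_num=101)
# model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
# clip = load_frames("video.avi")  # hypothetical helper -> [1, 3, 64, 112, 112]
# print(predict_clip(model, clip))
```

The key points are model.eval() and torch.no_grad(); forgetting either gives wrong batch-norm statistics or wastes memory on gradients.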

Repository owner deleted a comment from SASECOMPANY Jan 15, 2024
Repository owner deleted a comment Feb 2, 2024
Repository owner deleted a comment from SASECOMPANY Mar 7, 2024