
Improving speed of Pointpillars #3

Open
xavidzo opened this issue Mar 27, 2021 · 0 comments
xavidzo commented Mar 27, 2021

Hello @AutoVision-cloud, thanks for the release of your nice work.

I have a question regarding the speed of Pointpillars. Since you determined that Pointpillars-FSA (or -DSA) runs at nearly half the GFLOPs, I assume your model should have a faster inference speed than the baseline...
If my assumption is correct, do you know how much faster it can run?
I tested Pointpillars-FSA on custom data and compared its speed with the base Pointpillars, but didn't see any difference in speed....
It's also not clear to me how Pointpillars-FSA can perform fewer GFLOPs when you are adding extra layers to the baseline, can you please explain? Or, for faster speed, should I pass only the context features to the BEVEncoder and ignore the pillar features, thus avoiding the concatenation of pillar and context features?
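As background for the FLOPs-vs-latency question above: reported GFLOPs count only arithmetic operations and are just a proxy for speed; wall-clock inference time also depends on memory traffic, kernel-launch overhead, and tensor reshaping (e.g. the feature concatenation mentioned above), so halving FLOPs need not halve latency. One way to check is to benchmark wall-clock time directly. A minimal, stdlib-only timing sketch (all names here are hypothetical, not from the repository):

```python
import time

def measure_latency(fn, warmup=10, iters=100):
    """Average wall-clock seconds per call of fn().

    Warm-up iterations are run first so that one-time costs
    (allocations, caches, lazy initialization) do not skew the mean.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Hypothetical usage: wrap each model's forward pass in a zero-argument
# callable and compare averaged latencies rather than FLOP counts.
baseline_ms = measure_latency(lambda: sum(range(10_000))) * 1e3
print(f"baseline: {baseline_ms:.3f} ms/iter")
```

For GPU models the same idea applies, but each timed call must synchronize the device (e.g. `torch.cuda.synchronize()` in PyTorch) before reading the clock, otherwise asynchronous kernel launches make the measurement meaningless.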
