
Issues related to the input dimensions of the MLP model #27

Open
wjj19950828 opened this issue Nov 9, 2021 · 3 comments
@wjj19950828

@merrymercy hi~

In your open-source code, the input dimension of the MLP model is 164, which is aligned with Ansor, but in Appendix C of the Tenset paper the input dimension is set to 324. What accounts for the difference?

Looking forward to your reply~

@merrymercy
Collaborator

In Appendix C of the paper, we mention that:

[screenshot of the relevant passage from Appendix C]

The first 164 elements are from the original Ansor paper; the additional 324 - 164 = 160 elements are from the workload embedding.
In our current open-source code, we don't use LDP anymore. Instead, we use a simpler approach to get the workload embedding. The related code is:

from tvm.auto_scheduler import ComputeDAG
from tvm.auto_scheduler.workload_registry import workload_key_to_tensors

def get_workload_embedding(workload_key):
    # Build a binary vector marking which operator tags appear in the compute DAG
    tags = ['max', 'min', 'add', 'Conv2dOutput', 'conv2d_winograd', 'DepthwiseConv2d',
            'dense', 'softmax', 'compute(b, i, j)']
    dag_str = str(ComputeDAG(workload_key_to_tensors(workload_key)))
    vec = [0] * len(tags)
    for i, tag in enumerate(tags):
        if tag in dag_str:
            vec[i] = 1
    return vec

"in_dim": 164 + (10 if use_workload_embedding else 0),

@wjj19950828
Author

In the paper, MLP + ranking loss performs better than XGB + MSE, but in my experiments the MLP is not as good as XGB. Do you have any suggestions for improving the MLP?
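For context, a generic pairwise hinge ranking loss of the kind the paper contrasts with MSE might look like the sketch below. This is an illustrative PyTorch formulation, not necessarily the exact ranking loss implemented in the repo:

import torch

def pairwise_rank_loss(preds, labels, margin=1.0):
    # For every pair (i, j) with labels[i] > labels[j], penalize the model
    # unless it predicts preds[i] > preds[j] by at least `margin`.
    diff_pred = preds.unsqueeze(1) - preds.unsqueeze(0)     # preds[i] - preds[j]
    diff_label = labels.unsqueeze(1) - labels.unsqueeze(0)  # labels[i] - labels[j]
    mask = (diff_label > 0).float()                         # pairs where i should rank above j
    loss = torch.clamp(margin - diff_pred, min=0) * mask
    return loss.sum() / mask.sum().clamp(min=1)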

@merrymercy
Collaborator

What's your experiment setting? The results also depend on the dataset and hyperparameters.
