
Llama 3.2 1B - Add specific op tests #622

Open

wants to merge 3 commits into main
Conversation

mstojkovicTT (Contributor)

In preparation for the Llama 3.2 1B bring-up, I have implemented a series of op-level tests to check whether our current infrastructure supports the operations the model requires.

fixes #533
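
For context, the new tests follow a parametrized pytest pattern roughly like the sketch below. The shapes and the torch-based golden are illustrative, and the FFE compile-and-verify step is elided since its helper names are project-specific:

import pytest
import torch

@pytest.mark.parametrize(
    "shapes",
    [
        ((1, 11, 2048), (1, 11, 2048)),  # illustrative Llama-sized shapes
        ((1, 11, 2048), (1, 1, 2048)),   # broadcast case
    ],
)
def test_add_sketch(shapes):
    if shapes[0] != shapes[1]:
        pytest.xfail("eltwise_add broadcast not supported")
    a, b = torch.rand(shapes[0]), torch.rand(shapes[1])
    golden = torch.add(a, b)  # golden reference computed with PyTorch
    # compile through the FFE and compare against `golden` here (elided)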

def test_add(shapes):
    if shapes[0] != shapes[1]:
        pytest.xfail("eltwise_add broadcast not supported")
Contributor

For all xfail tests, can you create issues on MLIR within this milestone, and attach the generated TTIRs as well? For easier tracking, let's also create reminder issues to remove those xfails once the MLIR issues are fixed. Ofc, we can use the "Blocker" field to track which FFE issues are waiting on other issues on the MLIR side :))

Let's create separate issues for each op type (e.g. add, cosine, etc.).

Note: This is not a blocker for merging this PR :))
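
For illustration, a tracked xfail could carry its MLIR issue right in the reason string; a minimal sketch, where NNNN is a hypothetical placeholder rather than a real issue number:

import pytest

@pytest.mark.xfail(
    # NNNN is a placeholder; the actual tracking issue would be filed on tt-mlir
    reason="eltwise_add broadcast not supported; tracked in tenstorrent/tt-mlir#NNNN"
)
def test_add_broadcast():
    ...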

)
def test_concat(inputs_and_dim):
    in_shape1, in_shape2, dim = inputs_and_dim
    print(in_shape1, in_shape2, dim)
Contributor

Let's remove these prints; they seem redundant, since we can get the shape info from the test logs as well :))

Contributor Author

Oops, I left those in from testing; removed now.
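
For reference, a print-free version might look like this sketch (shapes and dim are illustrative; the golden is computed with torch.cat and the FFE compile/compare step is elided):

import pytest
import torch

@pytest.mark.parametrize(
    "inputs_and_dim",
    [
        ((1, 11, 2048), (1, 11, 2048), -1),  # illustrative shapes and concat dim
    ],
)
def test_concat_sketch(inputs_and_dim):
    in_shape1, in_shape2, dim = inputs_and_dim
    a, b = torch.rand(in_shape1), torch.rand(in_shape2)
    golden = torch.cat((a, b), dim=dim)  # golden reference via PyTorch
    # compile through the FFE and compare against `golden` here (elided)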



@pytest.mark.parametrize("shapes", [(1, 11, 64)])
@pytest.mark.xfail(reason="Cosine not supported in TTIR")
Contributor

As cos and sin are now supported, can we do a rebase on top of the latest main and check if they work?

Contributor Author

fixed!
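
Presumably the rebased test simply drops the xfail marker, along these lines (a sketch; golden via torch.cos, FFE compile/compare elided):

import pytest
import torch

@pytest.mark.parametrize("shapes", [(1, 11, 64)])
def test_cosine_sketch(shapes):
    # no xfail needed now that cos/sin are supported in TTIR
    x = torch.rand(shapes)
    golden = torch.cos(x)  # golden reference via PyTorch
    # compile through the FFE and compare against `golden` here (elided)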

        ((1, 11), 128256, 2048),
    ],
)
@pytest.mark.xfail(reason="ttnn.embedding op fails while reshaping the input_tensor in TILE_LAYOUT")
Contributor

Is this still the case? The latest MLIR uplift should have solved this one. Let's double-check before proceeding.

Contributor Author

As far as I know, it still has the same issue. Maybe I am missing something, but the test still fails.
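
For anyone double-checking, the failing configuration reduces to roughly the following; plain PyTorch is shown for the golden path, since the failure only appears when lowering through ttnn.embedding:

import torch

vocab_size, embedding_dim = 128256, 2048  # from the parametrization above
input_ids = torch.randint(0, vocab_size, (1, 11))
embedding = torch.nn.Embedding(vocab_size, embedding_dim)
golden = embedding(input_ids)  # shape (1, 11, 2048); only the TTNN path fails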


    if source_shape != target_shape or source_and_target_shape == ((1, 32, 11, 64), (1, 32, 11, 64)):
        pytest.xfail(
            "Unable to reshape a tensor in TILE_LAYOUT to non-tile height and width! Please convert the tensor to ROW_MAJOR_LAYOUT first."
        )
Contributor

Let's check on the latest main whether this one is fixed as well.

Contributor Author

Same as embedding, it seems.
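
For reference, the xfailing pair is the identity reshape sketched below. The 11x64 inner dims are not 32-tile aligned, which presumably triggers the TILE_LAYOUT complaint; PyTorch itself handles the reshape fine, so the failure is in the TTNN lowering:

import torch

source_shape, target_shape = (1, 32, 11, 64), (1, 32, 11, 64)
x = torch.rand(source_shape)
golden = x.reshape(target_shape)  # fine in PyTorch; the TTNN lowering xfails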

    source_shape, target_shape = source_and_target_shape

    if len(source_shape) != 4 or len(target_shape) != 4:
        pytest.xfail("Reshape for dim != 4: Unhandled attribute type")
Contributor

Is this issue on the FFE side?

        ((1, 8, 11, 64), 2),
    ],
)
@pytest.mark.xfail(reason="TTNN: Tensor layout issues with non tile dim aligned shapes")
Contributor

Let's also check the latest main; this should be solved.

@mstojkovicTT added this to the [FFE - E2E] Llama 3.2 1B milestone on Nov 7, 2024