Add support for clamp op #1093

Open

wants to merge 1 commit into main
Conversation

@mmanzoorTT (Contributor) commented Oct 28, 2024

  • Add end-to-end implementation of the ops.
  • Add stablehlo to ttir conversion for clamp op.

@tapspatel (Contributor) left a comment

add a clamp and clip test for perf_unit under Silicon/TTNN/perf_unit

@mmanzoorTT (Contributor, Author)

> add a clamp and clip test for perf_unit under Silicon/TTNN/perf_unit

@tapspatel tests added. thanks

@tapspatel (Contributor) left a comment

flatbuffer files and .mlir files look good!

@mmanzoorTT mmanzoorTT mentioned this pull request Oct 30, 2024
@sdjordjevicTT (Contributor) commented Oct 30, 2024

Clip is just an alias for clamp op, do we want both ops in TTIR?

@mmanzoorTT (Contributor, Author)

> Clip is just an alias for clamp op, do we want both ops in TTIR?

I added it to have a one-to-one mapping between TTIR and TTNN. I can remove the ttir.clip op.

Comment on lines +701 to +861
Example:
min: 2.000000e+00
input: [[0, 1, 2, 3, 4, 5, 6, 7]]
max: 5.000000e+00
A reviewer (Contributor) left a comment:

Doesn't stablehlo.clamp support a more general form of this op where all three operands are tensors?
https://openxla.org/stablehlo/spec#clamp
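
For reference, the spec computes the result elementwise as minimum(maximum(operand, min), max), where min and max can be full tensors rather than scalars. A minimal numpy sketch of those semantics (the values below are illustrative only):

    import numpy as np

    # stablehlo.clamp semantics with full tensor operands: the result is
    # computed elementwise as minimum(maximum(operand, min), max).
    min_t = np.array([5, 10, 15])
    operand = np.array([3, 13, 23])
    max_t = np.array([10, 15, 20])

    result = np.minimum(np.maximum(operand, min_t), max_t)
    print(result)  # [ 5 13 20]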

@mmanzoorTT (Contributor, Author) commented Oct 30, 2024

Yes, stablehlo.clamp uses tensor operands for all inputs. However, ttnn.clamp uses floats for min and max. If a single value can be determined for the min and max tensors (e.g. they are constant splat tensors), those values are used as float attributes and stablehlo.clamp is lowered to ttir.clamp and then to ttnn.clamp. If a single value cannot be determined, stablehlo.clamp is lowered to a sequence of ttir.minimum and ttir.maximum ops, as below.

output = ttnn.minimum(ttnn.maximum(input, min_tensor), max_tensor)
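
A rough numpy sketch of that lowering decision (illustrative only, not the actual conversion pattern; is_splat is a hypothetical helper standing in for constant-splat detection):

    import numpy as np

    def is_splat(t):
        # Hypothetical stand-in for "min/max is a constant splat tensor".
        return t.size > 0 and np.all(t == t.flat[0])

    def lower_clamp(operand, min_t, max_t):
        if is_splat(min_t) and is_splat(max_t):
            # Single values recoverable -> lower to ttir.clamp with float
            # min/max attributes, which maps 1:1 onto ttnn.clamp.
            return np.clip(operand, float(min_t.flat[0]), float(max_t.flat[0]))
        # Otherwise decompose into elementwise maximum followed by minimum.
        return np.minimum(np.maximum(operand, min_t), max_t)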

A reviewer (Contributor) left a comment:

Hm, interesting. I had the same problem with power op #1094.

@mrakitaTT since stablehlo.clamp defines its args like this:

  let arguments = (ins
    HLO_Tensor:$min, /*clamp_i1*/
    HLO_Tensor:$operand, /*clamp_c3, clamp_i2*/
    HLO_Tensor:$max /*clamp_i3*/
  );
  let results = (outs HLO_Tensor:$result);

should TTIR have a 1:1 matching arg interface, or is

    let arguments = (ins AnyRankedTensor:$input,
                         AnyRankedTensor:$output,
                         F32Attr:$min,
                         F32Attr:$max,
                         TT_OperandConstraintArrayAttr:$operand_constraints);

ok, considering what Asif said above?

@sdjordjevicTT (Contributor)

> Clip is just an alias for clamp op, do we want both ops in TTIR?
> I added it to have a one-to-one mapping between TTIR and TTNN. I can remove the ttir.clip op.

I am not blocking, just wondering what would be the best solution. I saw that TTNN also aliases one op to another, hence maybe we should keep them both...

@sdjordjevicTT (Contributor)

We agreed to keep a single op per the discussion here:
#852 (comment)

Please keep only the clamp op and remove the clip op from this PR.

@mmanzoorTT (Contributor, Author)

> We agreed to keep a single op per the discussion here: #852 (comment)
> Please keep only the clamp op and remove the clip op from this PR.

@sdjordjevicTT Do you mean to remove the ttir.clip op only, or remove it entirely (the ttir.clip op along with ttnn.clip and flatbuffer/runtime support)?

@sdjordjevicTT (Contributor)

> We agreed to keep a single op per the discussion here: #852 (comment)
> Please keep only the clamp op and remove the clip op from this PR.

> @sdjordjevicTT Do you mean to remove the ttir.clip op only, or remove it entirely (the ttir.clip op along with ttnn.clip and flatbuffer/runtime support)?

We synced over Slack and agreed to remove the ttir.clip op entirely.

@mmanzoorTT (Contributor, Author)

@sdjordjevicTT clip op removed entirely.

@mmanzoorTT changed the title from "Add support for clamp and clip op" to "Add support for clamp op" on Nov 5, 2024
@jnie-TT (Contributor) left a comment

Runtime changes look good!

@@ -850,6 +850,33 @@ def TTIR_UnsqueezeOp : TTIR_DPSOp<"unsqueeze"> {
let hasVerifier = 1;
}

def TTIR_ClampOp : TTIR_DPSOp<"clamp"> {
A reviewer (Contributor) left a comment:

Should we add some verifier for the op? For example, should we check that the output shape is the same as the input shape?
