Add support for nan_to_num, atan2 & bitwise_or op #57

Open · wants to merge 1 commit into main from kkannan/atan_nan_to_num_support_tvm
Conversation

@kamalrajkannan78 (Contributor) commented Jan 6, 2025

Summary

  • The PETR model uses the nan_to_num op and the atan2 op.
  • Support is added for both ops.
  • This PR fixes "Operator Not Implemented aten::bitwise_or_" (tt-forge-fe#982) by mapping "aten::bitwise_or_" to "bitwise_or".
  • Our nan_to_num_decomposition needs isnan, but the existing isnan decomposition in forge_passes.py did not detect NaN values, so it was updated to use the fact that a NaN value is not equal to itself, which detects NaN values correctly (see the sketch below).
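A minimal sketch of that NaN-detection idea (a standalone PyTorch illustration; the actual decomposition in forge_passes.py operates on Forge ops, and the helper name here is hypothetical):

```python
import torch

def isnan_via_self_compare(x: torch.Tensor) -> torch.Tensor:
    # NaN is the only floating-point value that compares unequal to itself,
    # so (x != x) is True exactly at the NaN positions.
    return x != x

t = torch.tensor([1.0, float("nan"), 3.0])
print(isnan_via_self_compare(t))  # tensor([False,  True, False])
```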

Logs

@kamalrajkannan78 force-pushed the kkannan/atan_nan_to_num_support_tvm branch from b02be0c to 9939beb on January 6, 2025 18:45
@kamalrajkannan78 changed the title from "Add support for nan_to_num op" to "Add support for nan_to_num,atan2 op" on January 6, 2025
@kamalrajkannan78 force-pushed the kkannan/atan_nan_to_num_support_tvm branch 2 times, most recently from e66c611 to 2924f46 on January 7, 2025 10:06
@ashokkumarkannan1 changed the title from "Add support for nan_to_num,atan2 op" to "Add support for nan_to_num, atan2 & bitwise_or op" on January 7, 2025

dtype = input_types[0]

assert dtype == "float32", f"Expected dtype to be float32, but got {dtype}. Support for {dtype} is not added yet."
Contributor:

It seems the implementation supports all float dtypes. Can we also add support for int types, and remove this assert?

Contributor (author):

  • Since NaN is inherently a floating-point concept, integer tensors cannot contain NaN values, so there is no need to check for NaN in them during model inference.

  • For float16 & float64, I hit these errors: nan_to_num_float16.log, nan_to_num_float64.log

  • For the above reasons, and to push the PETR model to the next stage, support is added for the float32 dtype alone for now; support for other dtypes will be added if it is needed in the future.

Contributor:

For float16, the issue seems to be in ttnn; can we raise an issue in ttnn and enable the support here?
For float64, can we enable the support in Forge?

BTW, this is not a blocker for this PR; we can add support for this later. What do you think @nvukobratTT?

Contributor:

Even if TTNN doesn't support it, I don't see why we should limit our compiler. Eventually, TTNN should add support for this.

That said, let's test out the different data formats and mark the ones that are unsupported on the TTNN side as xfailed. No need to open issues for this right now; let's just use xfail to track these for the moment.
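For illustration, xfail-based tracking of unsupported dtypes could look like this (the test name, reasons, and parametrization are hypothetical, not taken from this PR):

```python
import pytest
import torch

@pytest.mark.parametrize(
    "dtype",
    [
        torch.float32,
        pytest.param(torch.float16, marks=pytest.mark.xfail(reason="unsupported on the TTNN side")),
        pytest.param(torch.float64, marks=pytest.mark.xfail(reason="not yet supported in Forge")),
    ],
)
def test_nan_to_num(dtype):
    x = torch.tensor([float("nan"), float("inf"), -float("inf"), 1.0], dtype=dtype)
    out = torch.nan_to_num(x)  # stand-in for running the op through the compiler
    assert not torch.isnan(out).any()
```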

Contributor (author):

  • Tests added for float32 & float16.

  • Support for the float64 data format seems to need to be added on the front-end side itself: nan_to_num_float64.log

  • As adding support for this dtype is not a priority now, I will create a separate PR for it if needed in the future.

python/tvm/relay/frontend/pytorch.py (review thread resolved)
python/tvm/relay/frontend/pytorch.py (review thread outdated, resolved)
@kamalrajkannan78 force-pushed the kkannan/atan_nan_to_num_support_tvm branch 5 times, most recently from 2917d2e to d951607 on January 9, 2025 09:54
@kamalrajkannan78 marked this pull request as ready for review on January 9, 2025 12:56
@@ -4620,6 +4620,67 @@ def scaled_dot_product_attention(self, inputs, input_types):
attn_weight = _op.reshape(attn_weight, newshape=[-4, batch_size, -1, -2])

return attn_weight


def nan_to_num(self, inputs, input_types):
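For context, a minimal sketch of what such a converter could look like (an illustration only, not necessarily the PR's exact code; it assumes the torch.nan_to_num argument layout, and `_op`/`_expr` are the usual relay frontend aliases):

```python
import numpy as np
from tvm.relay import expr as _expr, op as _op

def nan_to_num(self, inputs, input_types):
    """Replace NaN with `nan`, +inf with `posinf`, and -inf with `neginf`."""
    data, nan, posinf, neginf = inputs[0], inputs[1], inputs[2], inputs[3]
    dtype = input_types[0]
    assert dtype == "float32", f"Expected dtype to be float32, but got {dtype}."

    # Defaults follow torch.nan_to_num: 0.0 for NaN, dtype max/min for +/-inf.
    nan_value = _expr.const(nan if nan is not None else 0.0, dtype=dtype)
    max_value = _expr.const(posinf if posinf is not None else np.finfo(np.float32).max, dtype=dtype)
    min_value = _expr.const(neginf if neginf is not None else np.finfo(np.float32).min, dtype=dtype)

    # NaN compares unequal to itself, so (data != data) marks NaN positions.
    out = _op.where(_op.not_equal(data, data), nan_value, data)
    out = _op.where(_op.equal(out, _expr.const(np.inf, dtype=dtype)), max_value, out)
    out = _op.where(_op.equal(out, _expr.const(-np.inf, dtype=dtype)), min_value, out)
    return out
```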
Contributor:

Let's add function docs.

Same for the other functions added in this PR.

@kamalrajkannan78 (Contributor, author) commented Jan 10, 2025:

As explanations are added as comments, I just added function docs like below. Let me know if any specific format should be followed.

python/tvm/relay/frontend/pytorch.py (review thread resolved)

@kamalrajkannan78 force-pushed the kkannan/atan_nan_to_num_support_tvm branch from d951607 to 5d4e3f4 on January 10, 2025 16:15
@kamalrajkannan78 force-pushed the kkannan/atan_nan_to_num_support_tvm branch from 5d4e3f4 to 39215fb on January 10, 2025 16:18
Development

Successfully merging this pull request may close these issues:

  • Operator Not Implemented aten::bitwise_or_ (tt-forge-fe#982)
3 participants