
Lowered transpose op; working training of matmul. #158

Merged: 1 commit merged into main from vladimirjovanovic/lower_transpose_op on Aug 30, 2024

Conversation

@vladimirjovanovicTT (Contributor) commented Aug 22, 2024

Lowered transpose op, added test.
Modified training to support matmul.

#23 #21 #15

pybuda/csrc/buda_passes.cpp (review thread resolved)

def forward(self, a, b):
    c = a + b
    return torch.transpose(c, 1, 2)
Contributor:

Can we remove the add operation and just transpose the single input? Just to be in line with the theme of this test file...
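A minimal sketch of the suggested single-input variant (the Transpose module name and scaffolding are illustrative, not code from this PR):

import torch
import torch.nn as nn

class Transpose(nn.Module):
    # Single input, no add: exercises only the transpose lowering.
    def forward(self, a):
        return torch.transpose(a, 1, 2)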

@@ -7,51 +7,52 @@

import pybuda
import pybuda.config
from pybuda.op.eval.common import compare_with_golden_pcc

def test_torch_training():
    class MultParam(nn.Module):
Contributor:

Rename to MatMulParam, for example...
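For illustration, a hedged sketch of the renamed module; the class body is not shown in the diff hunk above, so the parameter shape and matmul form here are assumptions:

import torch
import torch.nn as nn

class MatMulParam(nn.Module):
    def __init__(self, shape=(32, 32)):
        super().__init__()
        # Trainable parameter multiplied into the input (assumed shape).
        self.weight = nn.Parameter(torch.rand(*shape))

    def forward(self, x):
        return torch.matmul(x, self.weight)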

Contributor Author:

sure!

golden = model(inputs)
output = tt_model(inputs)

if not torch.allclose(output[0], golden, rtol=1e-1):
    raise ValueError("Output does not match the golden output")
#print(f"golden = {golden}, output = {output[0]}")
Contributor:

Please remove the commented-out prints.

if not torch.allclose(output[0], golden, rtol=1e-1):
    raise ValueError("Output does not match the golden output")
#print(f"golden = {golden}, output = {output[0]}")
oputput = [co.to("cpu") for co in output]
Contributor:

output*

Contributor Author:

oops, tnx
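The fixed line, for clarity:

output = [co.to("cpu") for co in output]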

pybuda/csrc/buda_passes.cpp (review thread resolved)

def forward(self, a, b):
    c = a + b
    return torch.transpose(c, 1, 2)
Contributor:

Can we add more test cases? E.g., test different transpose dims, different shapes (e.g. 7, 41), and potentially different ranks (e.g. 2, 3, and 4).
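A sketch of what that parametrization might look like (the shapes, dims, and shape assertion are illustrative assumptions; the compiled/device run and golden comparison are omitted because the exact pybuda calls are outside this hunk):

import pytest
import torch

# Covers odd shapes (7, 41), ranks 2/3/4, and positive/negative dims.
@pytest.mark.parametrize(
    "shape,dims",
    [
        ((7, 41), (0, 1)),
        ((1, 7, 41), (1, 2)),
        ((2, 3, 7, 41), (-3, -1)),
    ],
)
def test_transpose(shape, dims):
    class Transpose(torch.nn.Module):
        def forward(self, a):
            return torch.transpose(a, *dims)

    a = torch.rand(*shape)
    golden = Transpose()(a)
    # Compiled/device execution and PCC comparison would go here.
    assert golden.shape[dims[0]] == shape[dims[1]]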

Contributor Author:

Definitely makes sense, and it should apply to other ops as well. test_ops.py will blow up in size soon; we should consider splitting it into multiple files.

pybuda/test/mlir/test_training.py (review thread resolved)
@vladimirjovanovicTT force-pushed the vladimirjovanovic/lower_transpose_op branch 3 times, most recently from 2d6b5f0 to e3e033f on August 23, 2024 14:05
Improved tests, fixed bug in tests.

Cleaned up training test.
@vladimirjovanovicTT force-pushed the vladimirjovanovic/lower_transpose_op branch from e3e033f to 40baa5c on August 30, 2024 10:46
@vladimirjovanovicTT merged commit 50251d3 into main on Aug 30, 2024
3 checks passed
@pilkicTT deleted the vladimirjovanovic/lower_transpose_op branch on September 2, 2024 14:09