Lowered transpose op; working training of matmul. #158
Conversation
pybuda/test/mlir/test_ops.py
Outdated
def forward(self, a, b):
    c = a + b
    return torch.transpose(c, 1, 2)
Can we remove the add operation and just transpose the single input? Just to be in line with the theme in this test file...
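For illustration, the simplified test module might look roughly like the sketch below. The transpose dims (1, 2) follow the diff above; the class name and everything else are assumptions, not the actual test code:

```python
import torch
from torch import nn

class Transpose(nn.Module):
    def forward(self, a):
        # Single-input version: transpose the input directly, no add.
        return torch.transpose(a, 1, 2)
```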
pybuda/test/mlir/test_training.py
Outdated
@@ -7,51 +7,52 @@
import pybuda
import pybuda.config
from pybuda.op.eval.common import compare_with_golden_pcc

def test_torch_training():
    class MultParam(nn.Module):
Rename to MatMulParam, for example...
sure!
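For illustration only, the renamed module might look like this sketch; the constructor signature and parameter shape are assumptions, since the class body isn't visible in this thread:

```python
import torch
from torch import nn

class MatMulParam(nn.Module):
    def __init__(self, rows: int = 32, cols: int = 32):
        super().__init__()
        # Hypothetical learnable weight; the real parameter shape
        # isn't shown in the diff above.
        self.weight = nn.Parameter(torch.rand(rows, cols))

    def forward(self, x):
        return torch.matmul(x, self.weight)
```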
pybuda/test/mlir/test_training.py
Outdated
golden = model(inputs)
output = tt_model(inputs)

if not torch.allclose(output[0], golden, rtol=1e-1):
    raise ValueError("Output does not match the golden output")
#print(f"golden = {golden}, output = {output[0]}")
Please remove the commented-out prints.
pybuda/test/mlir/test_training.py
Outdated
if not torch.allclose(output[0], golden, rtol=1e-1):
    raise ValueError("Output does not match the golden output")
#print(f"golden = {golden}, output = {output[0]}")
oputput = [co.to("cpu") for co in output]
output*
oops, tnx
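Putting the two comments together, the block might read as follows once the commented-out print is removed and the `oputput` typo is fixed (keeping the line order shown in the diff):

```python
golden = model(inputs)
output = tt_model(inputs)

if not torch.allclose(output[0], golden, rtol=1e-1):
    raise ValueError("Output does not match the golden output")

# Typo fixed: `oputput` -> `output`; device outputs moved back to host.
output = [co.to("cpu") for co in output]
```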
pybuda/test/mlir/test_ops.py
Outdated
def forward(self, a, b):
    c = a + b
    return torch.transpose(c, 1, 2)
Can we add more test cases? E.g., different transpose dims, different shapes (e.g. 7, 41, etc.), and potentially different ranks (e.g. 2, 3, and 4). A sketch of what this could look like follows this thread.
Definitely makes sense; the same should apply to other ops. test_ops.py will blow up in size soon, so we should consider splitting it into multiple files.
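As a sketch of what the reviewer is asking for, a parametrized version of the test could cover several dims, shapes, and ranks. The parameter values and the shape check below are illustrative assumptions; the real test would compile the module through pybuda like the rest of the file:

```python
import pytest
import torch
from torch import nn

@pytest.mark.parametrize("shape, dims", [
    ((7, 41), (0, 1)),          # rank 2, odd sizes
    ((1, 32, 64), (1, 2)),      # rank 3
    ((2, 3, 7, 41), (-2, -1)),  # rank 4, negative dims
])
def test_transpose(shape, dims):
    class Transpose(nn.Module):
        def forward(self, a):
            return torch.transpose(a, *dims)

    inputs = [torch.rand(*shape)]
    golden = Transpose()(*inputs)

    # The real test would compile the module through pybuda and compare
    # the device output against `golden`; here we only sanity-check that
    # the two transposed dims are swapped in the output shape.
    expected = list(shape)
    expected[dims[0]], expected[dims[1]] = expected[dims[1]], expected[dims[0]]
    assert list(golden.shape) == expected
```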
Force-pushed from 2d6b5f0 to e3e033f:
Improved tests, fixed bug in tests. Cleaned up training test.
Force-pushed from e3e033f to 40baa5c:
Lowered transpose op, added test.
Modified training to support matmul.
#23 #21 #15
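For context on the "working training of matmul" part of this PR, here is a minimal, plain-PyTorch sketch of what such a training test might look like; the shapes, optimizer, learning rate, and loss are all assumptions, not the PR's actual code:

```python
import torch
from torch import nn

def test_torch_training():
    torch.manual_seed(0)

    class MatMulParam(nn.Module):
        def __init__(self):
            super().__init__()
            self.weight = nn.Parameter(torch.rand(32, 32))

        def forward(self, x):
            return torch.matmul(x, self.weight)

    model = MatMulParam()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    target = torch.zeros(1, 32, 32)

    # A few SGD steps driving the matmul output toward the target;
    # the real test would additionally compare device results against
    # this CPU golden run.
    for _ in range(10):
        optimizer.zero_grad()
        output = model(torch.rand(1, 32, 32))
        loss = nn.functional.mse_loss(output, target)
        loss.backward()
        optimizer.step()
```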