Einsum in python #16
While the following would work:

```python
d += a * b + c               # first two are identical
finch.max(c, a + b, out=c)
f(c, a + b, out=c)
```

the main reason not to rely on them is that the array API standard doesn't contain `out=` parameters.
For a discussion on existing ways to handle laziness in Python, see pydata/sparse#618.
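To make the lazy alternative concrete, here is a minimal sketch of the fused style being argued for, assuming a `lazy`/`compute` pair along the lines discussed in pydata/sparse#618; the names `finch.Tensor`, `finch.lazy`, `finch.compute`, and `finch.maximum` are assumptions, not an API confirmed by this thread.

```python
import numpy as np
import finch  # assumed names: Tensor, lazy, compute, maximum

a = finch.Tensor(np.random.rand(4, 4))
b = finch.Tensor(np.random.rand(4, 4))
c = finch.Tensor(np.random.rand(4, 4))

# Build a deferred expression graph instead of mutating `c` through `out=`.
a_l, b_l, c_l = finch.lazy(a), finch.lazy(b), finch.lazy(c)
d_l = finch.maximum(c_l, a_l + b_l)  # nothing is computed yet

# Execute the whole fused expression in one shot.
d = finch.compute(d_l)
```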
That's okay, we'll work on the Python side of this later! We wanted to agree on something that seemed doable because there was interest from the Julia side in einsum, and we were designing things to ensure that Python wouldn't be locked out of it.
Kyle or I can tackle this after my next paper submission.
@hameerabbasi, I'm curious how we can represent the following in the tensor API; it's important in graph kernels: …
Something like … An assumption made here is that A, B, and C are all 2D and stored in the order …
Okay. I still think the einsum is much clearer here, but I do see how this could eventually work through the fused interface. We would need indexing to be lazy too, which might be farther off.
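Since the concrete graph-kernel expression did not survive formatting, the sketch below uses a stand-in pattern (a contraction over rows selected by an index array) purely to contrast an einsum spelling with a fused, lazy spelling, and to show where lazy indexing would come in; the `finch.lazy`/`finch.compute` names are assumptions.

```python
import numpy as np

# Stand-in kernel (illustrative only): gather a subset of rows of A,
# then contract with B -- the kind of pattern that shows up in graph kernels.
A = np.random.rand(5, 5)
B = np.random.rand(5, 5)
idx = np.array([0, 2, 4])

# einsum spelling: the index names make the contraction easy to read.
C = np.einsum("ik,kj->ij", A[idx, :], B)

# Fused/lazy spelling (hypothetical finch API): for this to fuse end to end,
# the indexing step A[idx, :] would itself have to produce a lazy node,
# which is the "indexing needs to be lazy too" point above.
# C = finch.compute(finch.lazy(A)[idx, :] @ finch.lazy(B))
```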
In addition to, or instead of, supporting np.einsum, it was agreed that Finch should have a more flexible einsum interface. It would look like this:
- allocating scaled matrix add
- in-place scaled matrix add
- max-plus matmul

etc. If you want to use your own function, you could pass it in.
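The concrete calls for these examples did not survive formatting, so the block below is only a hypothetical sketch of what a string-based interface of this kind might look like; the name `finch.einsum`, the keyword binding of operands and scalars, the `<<max>>=`/`<<op>>=` spelling (echoing the Finch.jl proposal referenced below), and the `op=` parameter are all assumptions.

```python
import operator

import numpy as np
import finch  # every call below is a hypothetical sketch, not a confirmed API

A = finch.Tensor(np.random.rand(3, 3))
B = finch.Tensor(np.random.rand(3, 3))
C = finch.Tensor(np.zeros((3, 3)))

# allocating scaled matrix add: the result D is created by the call
D = finch.einsum("D[i, j] = alpha * A[i, j] + B[i, j]", A=A, B=B, alpha=2.0)

# in-place scaled matrix add: accumulate into an existing tensor C
finch.einsum("C[i, j] += alpha * A[i, j] + B[i, j]", C=C, A=A, B=B, alpha=2.0)

# max-plus matmul: reduce with max, combine with +
P = finch.einsum("P[i, j] <<max>>= A[i, k] + B[k, j]", A=A, B=B)

# custom reduction: pass your own function in
Q = finch.einsum("Q[i, j] <<op>>= A[i, k] * B[k, j]", A=A, B=B, op=operator.add)
```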
We would target finch-tensor/Finch.jl#428 and probably use https://github.com/lark-parser/lark, https://ply.readthedocs.io/en/latest/, or a custom recursive-descent parser to parse the string.
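As an illustration of the parsing step, here is a minimal lark grammar that recognizes statements of the shape sketched above; both the grammar and the statement syntax it accepts are assumptions rather than a settled design.

```python
from lark import Lark

# Minimal grammar for statements like "C[i, j] += A[i, k] * B[k, j]"
# or "P[i, j] <<max>>= A[i, k] + B[k, j]" (illustrative only).
EINSUM_GRAMMAR = r"""
    start: access op expr
    access: NAME "[" idx_list "]"
    idx_list: NAME ("," NAME)*
    op: "=" | "+=" | "<<" NAME ">>" "="
    ?expr: expr "+" term    -> add
         | term
    ?term: term "*" factor  -> mul
         | factor
    ?factor: access | NAME | NUMBER
    %import common.CNAME -> NAME
    %import common.NUMBER
    %import common.WS
    %ignore WS
"""

parser = Lark(EINSUM_GRAMMAR)
print(parser.parse("C[i, j] += A[i, k] * B[k, j]").pretty())
```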
@mtsokol @kylebd99