Scalars in finch-tensor #25
Comments
Paraphrasing my comments from Slack -- We should go with the first option, partially because that's the only backwards-compatible option. The concerns about a new kernel for each background value can be handled via a hybrid approach: we compile a new kernel iff something is an identity or an absorbing value of a registered function; otherwise we resort to treating the background value as a runtime value rather than a compile-time constant.
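The hybrid rule above can be sketched in plain Python. This is illustrative only, not the finch-tensor implementation; the registry and function names are assumptions:

```python
import math
import operator

# Hypothetical registry mapping a few registered functions to their
# identity and absorbing elements (None means "no such element").
SPECIAL_VALUES = {
    operator.add: {"identity": 0, "absorbing": None},
    operator.mul: {"identity": 1, "absorbing": 0},
    min: {"identity": math.inf, "absorbing": -math.inf},
    max: {"identity": -math.inf, "absorbing": math.inf},
}

def should_specialize(op, scalar):
    """Compile a new kernel iff `scalar` is an identity or absorbing
    value of `op`; otherwise keep it as a runtime value."""
    entry = SPECIAL_VALUES.get(op)
    if entry is None:
        return False
    return scalar in (entry["identity"], entry["absorbing"])
```

Under this policy `should_specialize(operator.mul, 0)` triggers a specialized (all-zeros) kernel, while `operator.add` with `5` falls back to the generic runtime-value path, so the number of compiled kernels stays bounded.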
Oh, that's interesting! So we would compile for 1, 0, inf, -inf. This seems like a very doable compromise, especially if we can add other "special" values to the compiler. We may want to do this in JuliaLand as well when we do broadcasts; `A .+ 1` has the same issue in Julia.
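One way to picture compiling only for those special values is a kernel cache keyed by the baked-in constant. A minimal sketch, assuming a toy `add` kernel (not real finch-tensor codegen):

```python
import math
from functools import lru_cache

# Only these constants get their own compiled kernel variant.
SPECIAL = {0.0, 1.0, math.inf, -math.inf}

@lru_cache(maxsize=None)
def compile_kernel(op_name, baked_constant=None):
    # Stand-in for real code generation: return a closure.
    if baked_constant is not None:
        # Constant folded at "compile" time.
        return lambda x: x + baked_constant
    # Generic kernel: constant passed at runtime.
    return lambda x, c: x + c

def add_scalar(x, c):
    if c in SPECIAL:
        return compile_kernel("add", c)(x)
    return compile_kernel("add")(x, c)
```

Because every non-special scalar routes through the single generic kernel, calling `add_scalar` with many different ordinary constants never grows the cache past the special-value entries plus one.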
We should also figure out syntax to overload/avoid this behavior. If the default is to promote some values to constants, what is the syntax for declaring a constant, and what is the syntax for declaring a variable?
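One hypothetical shape such syntax could take is a pair of explicit marker wrappers; the names `Const` and `Var` below are invented for illustration and do not exist in finch-tensor:

```python
class Const:
    """Hypothetical marker: bake this scalar into the compiled kernel."""
    def __init__(self, value):
        self.value = value

class Var:
    """Hypothetical marker: keep this scalar a runtime argument."""
    def __init__(self, value):
        self.value = value

def classify(operand):
    """Decide how a scalar operand should be treated."""
    if isinstance(operand, Const):
        return ("compile-time", operand.value)
    if isinstance(operand, Var):
        return ("runtime", operand.value)
    # Unmarked scalars fall back to the default promotion policy.
    return ("default", operand)
```

The design question is then only what the default does; the wrappers give users an escape hatch in either direction.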
I just wanted to give a heads-up:
A thought: let's produce a "scalar" constructor which dispatches to either a
Hi @willow-ahrens @hameerabbasi,
This issue is meant to discuss and decide the approach we would like to take in terms of handling scalars.
From existing discussion, when calling a function on a tensor and a scalar (`Tensor(...) + 1`), the scalar could be wrapped in a `Tensor(1)` and interpreted as a `0`-dimensional tensor filled with `1`, i.e. `Tensor(Dense(Element([1])))`.
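Since finch-tensor follows Python array-API conventions, the two candidate readings of the wrapped scalar can be illustrated with NumPy, which has the same broadcasting semantics (illustrative only; these are not finch-tensor's constructors):

```python
import numpy as np

# Reading 1: the scalar becomes a 0-dimensional tensor holding 1.
one = np.asarray(1)
assert one.ndim == 0 and one.item() == 1

# Reading 2: conceptually, a tensor of the other operand's shape,
# filled with 1. Broadcasting makes both readings agree for `A + 1`.
A = np.asarray([[1, 2], [3, 4]])
filled = np.full(A.shape, 1)
assert ((A + one) == (A + filled)).all()
```

The practical difference between the readings is therefore not the result values but whether the fill value `1` is known at kernel-compile time, which is exactly the trade-off discussed in the comments above.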