
Scalars in finch-tensor #25

Open
mtsokol opened this issue Mar 26, 2024 · 5 comments
Labels: enhancement (New feature or request)

Comments

@mtsokol (Member)

mtsokol commented Mar 26, 2024

Hi @willow-ahrens @hameerabbasi,

This issue is meant to discuss and decide on the approach we would like to take for handling scalars.

From the existing discussion: when calling a function on a tensor and a scalar (`Tensor(...) + 1`), the scalar could be wrapped in a `Tensor(1)` and interpreted as one of the following (a toy sketch of the three interpretations follows the list):

  1. A sparse matrix with a background (fill) value of 1,
  2. A sparse matrix with background value 0, explicitly filled with 1s,
  3. A dense matrix, `Tensor(Dense(Element([1])))`.
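
A toy, library-agnostic sketch of what each wrapping might hold; the `ToySparse` class, the 3×3 shape, and the broadcast-to-operand-shape reading are illustrative assumptions, not finch-tensor API:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ToySparse:
    shape: tuple
    fill_value: float                            # background value
    stored: dict = field(default_factory=dict)   # {(i, j): value}

# Option 1: sparse, background value 1, nothing stored.
opt1 = ToySparse(shape=(3, 3), fill_value=1.0)

# Option 2: sparse, background value 0, with 1 stored at every position.
opt2 = ToySparse(
    shape=(3, 3),
    fill_value=0.0,
    stored={(i, j): 1.0 for i in range(3) for j in range(3)},
)

# Option 3: a dense wrapping of the scalar, cf. Tensor(Dense(Element([1]))).
opt3 = np.array([1.0])
```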
mtsokol added the `enhancement` (New feature or request) label on Mar 26, 2024
@hameerabbasi (Collaborator)

Paraphrasing my comments from Slack: we should go with the first option, partly because it is the only backwards-compatible one. The concern about compiling a new kernel for each background value can be handled with a hybrid approach: we compile a new kernel iff the background value is an identity or absorbing element of a registered function; otherwise we treat the background value as a runtime value rather than a compile-time constant.
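
A minimal sketch of that hybrid dispatch rule; the registry contents and function names are illustrative, not finch-tensor internals:

```python
import operator

# Identity / absorbing elements for a few registered functions.
# (Illustrative registry; the real contents would live in the library.)
IDENTITY = {operator.add: 0, operator.mul: 1}
ABSORBING = {operator.mul: 0}

def kernel_strategy(op, background):
    """Decide whether `background` can be baked into a specialized kernel."""
    if IDENTITY.get(op) == background or ABSORBING.get(op) == background:
        # Special value: a dedicated kernel lets the compiler skip or
        # short-circuit work on the background region.
        return "compile specialized kernel"
    # Otherwise treat the background as an ordinary runtime argument,
    # so we do not compile a new kernel per background value.
    return "pass background as runtime value"

assert kernel_strategy(operator.add, 0) == "compile specialized kernel"
assert kernel_strategy(operator.add, 1) == "pass background as runtime value"
assert kernel_strategy(operator.mul, 0) == "compile specialized kernel"
```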

@willow-ahrens (Collaborator)

willow-ahrens commented Mar 26, 2024 via email

@willow-ahrens (Collaborator)

We should also figure out syntax to override or avoid this behavior. If the default is to promote some values to compile-time constants, how do we explicitly declare a constant, and how do we declare a variable?

@willow-ahrens (Collaborator)

Just a heads up: `Tensor(Element(0.0))` is not very optimized for scalars. We should benchmark it to see what the overhead is before making changes. This is in response to #62.
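
A rough, library-agnostic `timeit` harness for measuring that kind of overhead; `WrappedScalar` is a stand-in for a tensor-wrapped scalar like `Tensor(Element(0.0))`, not the real implementation:

```python
import timeit

class WrappedScalar:
    """Stand-in for a tensor-wrapped scalar such as Tensor(Element(0.0))."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return WrappedScalar(self.value + getattr(other, "value", other))

def bench(stmt, setup):
    # Best of 5 runs of 100k iterations each.
    return min(timeit.repeat(stmt, setup=setup, number=100_000, repeat=5))

raw = bench("x + 1.0", "x = 0.0")
wrapped = bench("x + 1.0",
                "from __main__ import WrappedScalar; x = WrappedScalar(0.0)")
print(f"raw float: {raw:.4f}s  wrapped: {wrapped:.4f}s  overhead: {wrapped / raw:.1f}x")
```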

@willow-ahrens (Collaborator)

A thought: let's produce a `scalar` constructor that dispatches to either a `StaticScalar` or a `DynamicScalar` type, and also expose an interface to target those two directly.
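
A sketch of how such a dispatching constructor could look; `StaticScalar` and `DynamicScalar` are the type names proposed above, while the `scalar` signature and its default dispatch rule are assumptions:

```python
from dataclasses import dataclass
from numbers import Number

@dataclass(frozen=True)
class StaticScalar:       # value known when the kernel is compiled
    value: object

@dataclass
class DynamicScalar:      # value supplied at run time
    value: object

def scalar(value, *, static=None):
    """Dispatch to StaticScalar or DynamicScalar; both stay directly constructible."""
    if static is None:
        # Illustrative default: treat plain Python number literals as static.
        static = isinstance(value, Number)
    return StaticScalar(value) if static else DynamicScalar(value)

assert isinstance(scalar(1), StaticScalar)
assert isinstance(scalar(0.5, static=False), DynamicScalar)
```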
