optimize for affine values #216
Comments
This is a bit of a hobby horse for me, but I've had very good results from this kind of affine representation in simplifying compute DAGs. There's a brief description of how I did it for Rainier at https://rainier.fit/img/rainier.pdf. It can be done as an IR that expands back into normal ops.
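For reference, a bare-bones sketch (in Rust, with made-up type names, not Rainier's or this library's actual API) of what such an affine IR node might look like, including the expansion back into ordinary add/mul nodes:

```rust
/// Toy expression IR; the names are illustrative only.
#[derive(Clone, Debug)]
enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
    /// `Affine(x, a, b)` stands for `x * a + b` without materializing the ops.
    Affine(Box<Expr>, f64, f64),
}

impl Expr {
    /// Expand affine nodes back into normal Add/Mul/Const nodes.
    fn expand(self) -> Expr {
        match self {
            Expr::Affine(x, a, b) => Expr::Add(
                Box::new(Expr::Mul(Box::new((*x).expand()), Box::new(Expr::Const(a)))),
                Box::new(Expr::Const(b)),
            ),
            Expr::Add(l, r) => Expr::Add(Box::new((*l).expand()), Box::new((*r).expand())),
            Expr::Mul(l, r) => Expr::Mul(Box::new((*l).expand()), Box::new((*r).expand())),
            other => other,
        }
    }
}
```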
Note that this can also be helpful when you optimize the code statically: for example, you can fold chains like `(x * 2 + 3) + 5` directly, without running a re-association pass and then re-running constant folding.
I did this in the manifold implementation (statically; tape optimization is the next thing to do), and it removes some more instructions.
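As a concrete illustration of the static case (a hypothetical sketch, not the manifold code): if partial results are carried around as `scale * x + offset` over a single shared variable, nested constant adds and multiplies collapse as the expression is built, with no re-association or second constant-folding pass.

```rust
/// A value known to be `scale * x + offset` for one shared variable `x`.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Affine {
    scale: f64,
    offset: f64,
}

impl Affine {
    fn var() -> Self { Affine { scale: 1.0, offset: 0.0 } }  // just `x`
    fn add_const(self, c: f64) -> Self { Affine { offset: self.offset + c, ..self } }
    fn mul_const(self, c: f64) -> Self {
        Affine { scale: self.scale * c, offset: self.offset * c }
    }
    /// Two affine values over the same variable stay affine under addition.
    fn add(self, rhs: Affine) -> Self {
        Affine { scale: self.scale + rhs.scale, offset: self.offset + rhs.offset }
    }
}

fn main() {
    // (x * 2 + 3) + 5 folds to 2*x + 8 as it is constructed.
    let e = Affine::var().mul_const(2.0).add_const(3.0).add_const(5.0);
    assert_eq!(e, Affine { scale: 2.0, offset: 8.0 });
    println!("{e:?}");
}
```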
For SDFs like the metaballs, where $r_i, p_i$ are the radius and the center of ball $i$ and $g$ is some interpolation from 0 to 1, the field is a sum of per-ball terms, roughly $f(p) = \sum_i g\bigl(\lVert p - p_i \rVert / r_i\bigr)$.
Oftentimes the interpolation function will be capped at 0 or 1, making some of the summands constants. However, currently there is no way to partially evaluate this and collapse those constant summands.
To optimize for this kind of situation, we can try to track affine values, e.g. $r \cdot c_1 + c_2$ where $r$ is a register and $c_1, c_2$ are constants. We can then mark instructions like $r_3 = r_1 + r_2$ as dead if either $r_1$ or $r_2$ is a constant, or if both are affine values over the same register. The affine value is converted back into an interval only when necessary, i.e. when it is consumed by an instruction that does not itself produce an affine value.
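A minimal sketch of what that tracking could look like during tape simplification, assuming a toy register machine; the `Op` and `Known` types and the 256-register layout are invented for illustration and are not this library's actual instruction set. Add/mul instructions whose operands are constants or affine values over the same source register are folded into the affine representation instead of being emitted; everything else falls back to opaque.

```rust
/// Illustrative tape ops; not the real instruction set of any particular library.
#[derive(Clone, Copy)]
enum Op {
    Const(u32, f64),    // reg <- constant
    Input(u32),         // reg <- input variable
    Add(u32, u32, u32), // reg <- reg + reg
    Mul(u32, u32, u32), // reg <- reg * reg
}

/// What we statically know about a register's value.
#[derive(Clone, Copy)]
enum Known {
    Const(f64),
    /// `src * scale + offset`, where `src` holds an input value.
    Affine { src: u32, scale: f64, offset: f64 },
    Opaque,
}

/// Fold adds/muls that stay in the affine domain; keep everything else.
/// Instructions whose result is fully described by `Known` are dropped ("dead").
fn simplify(tape: &[Op]) -> (Vec<Op>, Vec<Known>) {
    let mut known = vec![Known::Opaque; 256];
    let mut out = Vec::new();
    for &op in tape {
        match op {
            Op::Const(d, v) => known[d as usize] = Known::Const(v),
            Op::Input(d) => {
                known[d as usize] = Known::Affine { src: d, scale: 1.0, offset: 0.0 };
                out.push(op);
            }
            Op::Add(d, a, b) => match (known[a as usize], known[b as usize]) {
                (Known::Const(x), Known::Const(y)) => known[d as usize] = Known::Const(x + y),
                (Known::Affine { src, scale, offset }, Known::Const(c))
                | (Known::Const(c), Known::Affine { src, scale, offset }) => {
                    known[d as usize] = Known::Affine { src, scale, offset: offset + c }
                }
                (Known::Affine { src: s1, scale: k1, offset: o1 },
                 Known::Affine { src: s2, scale: k2, offset: o2 }) if s1 == s2 => {
                    known[d as usize] =
                        Known::Affine { src: s1, scale: k1 + k2, offset: o1 + o2 }
                }
                // A full pass would first materialize any affine operands here,
                // re-emitting their pending scale/offset as explicit ops.
                _ => { known[d as usize] = Known::Opaque; out.push(op); }
            },
            Op::Mul(d, a, b) => match (known[a as usize], known[b as usize]) {
                (Known::Const(x), Known::Const(y)) => known[d as usize] = Known::Const(x * y),
                (Known::Affine { src, scale, offset }, Known::Const(c))
                | (Known::Const(c), Known::Affine { src, scale, offset }) => {
                    known[d as usize] =
                        Known::Affine { src, scale: scale * c, offset: offset * c }
                }
                _ => { known[d as usize] = Known::Opaque; out.push(op); }
            },
        }
    }
    (out, known)
}
```

A real pass would also need the materialization step described above: when an opaque instruction consumes an affine register, the pending scale/offset have to be re-emitted as explicit multiply/add ops (or converted into an interval for interval evaluation) before that instruction runs. With that in place, any metaball summand whose $g$ has been clamped to a constant simply folds away.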