This is a continuation of #767, mainly surrounding derivatives within `TemplateExpression`. Miles has found an elegant solution to higher-order derivatives.
I had two main questions:
I think you could potentially use `eval_grad_tree_array` within the `TemplateStructure`, though I haven't tried. You can extract the tree with `DE.get_tree(f)` (where `DE` is DynamicExpressions)[^1] and then `DE.get_operators(f)` to get the operators[^2]. Then the call is:
```julia
tree = DE.get_tree(f)
operators = DE.get_operators(f)
X = stack((x1.x, x2.x, #= ... =#), dims=1)
result, grad, complete = eval_grad_tree_array(tree, X, operators)
```
Footnotes

[^1]: All it's doing is calling `f.tree`. The reason to prefer `get_tree` is that it's more general, in case some other type of expression is used here in the future.
[^2]: All it is doing is calling `f.metadata.operators`.
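For context, here is a minimal, self-contained sketch of what `eval_grad_tree_array` returns on a plain expression tree; the operator set and the expression are illustrative, not from the original thread:

```julia
using DynamicExpressions

# Illustrative operator set and a hand-built tree: x1 * cos(x2)
operators = OperatorEnum(; binary_operators=(+, *), unary_operators=(cos,))
x1 = Node{Float64}(; feature=1)
x2 = Node{Float64}(; feature=2)
tree = x1 * cos(x2)

X = rand(Float64, 2, 100)  # 2 features × 100 samples
result, grad, complete = eval_grad_tree_array(tree, X, operators; variable=true)
# result   :: 100-element Vector (the evaluation)
# grad     :: 2×100 Matrix (∂tree/∂x1 and ∂tree/∂x2 at each sample)
# complete :: false if a NaN/Inf was encountered during evaluation
```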
I've done this so far:
```julia
using SymbolicRegression
using DynamicDiff: DynamicDiff as DA
using DynamicExpressions: DynamicExpressions as DE, eval_grad_tree_array
using SymbolicRegression.TemplateExpressionModule: ArgumentRecorder

# Make D a no-op on ArgumentRecorder so the arity-recording pass doesn't error
DA.D(f::ArgumentRecorder, _::Integer) = f

# Example structure with monotonicity checks for multiple variables (x1, ..., x5)
structure = TemplateStructure{(:f,)}(
    ((; f), (x1, x2, x3, x4, x5, x6, y, category)) -> begin
        o = f(x1, x2, x3, x4, x5, x6)
        if !o.valid
            return ValidVector(fill(1e9, length(o.x)), false)
        end
        tree = DE.get_tree(f.tree)  # errors: see below
        operators = DE.get_operators(f)
        X = stack((x1.x, x2.x, x3.x, x4.x, x5.x), dims=1)
        result, grad, complete = eval_grad_tree_array(tree, X, operators)
        # ...continuation of code
    end
)
```
However, neither `get_tree(f)` nor `get_operators(f)` works, presumably because during structure validation `f` is an `ArgumentRecorder` placeholder rather than an actual expression:
```
ERROR: type ArgumentRecorder has no field tree
...
ERROR: MethodError: no method matching get_operators(::ArgumentRecorder{Compat.Fix{1, Compat.Fix{1, typeof(SymbolicRegression.TemplateExpressionModule._record_composable_expression!), @NamedTuple{f::Base.RefValue{Int64}}}, Val{:f}}})
```
Users may standardise their variables on input, but domain knowledge means those functions are unlikely to work with standardised variables. The workaround for me has been to feed in the variables I need as extra, non-standardised (`_NS`) features, then use the `_NS` variables within the evaluation. This works, but I suppose it is not memory efficient. I was wondering if you had any suggestions on that aspect? Essentially, you could move a custom loss function into `structure` instead of relying on global variables, etc.
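For concreteness, a hedged sketch of that workaround; the variable names and the domain check here are hypothetical, not from the original post. The raw `x1_NS` column is appended to the dataset as an extra feature, so the structure can check constraints on the original scale while `f` only ever sees the standardised inputs:

```julia
using SymbolicRegression

# Hypothetical sketch: x1, x2 are standardised; x1_NS is the raw copy of x1,
# passed through as an extra feature but never given to `f`.
structure = TemplateStructure{(:f,)}(
    ((; f), (x1, x2, x1_NS)) -> begin
        o = f(x1, x2)  # the search only composes standardised inputs
        if !o.valid
            return ValidVector(fill(1e9, length(o.x)), false)
        end
        # Hypothetical domain-knowledge check on the raw scale:
        ok = all(>=(0.0), x1_NS.x)
        return ok ? o : ValidVector(fill(1e9, length(o.x)), false)
    end,
)
```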
Thank you! The problem now is that I am obsessing over getting this code efficient and losing sight of the actual problem!
EDIT: After lots of troubleshooting, `D()` generally runs as fast as `eval_grad_tree_array`, so it was not necessary to get the latter working. And feeding in extra variables did not seem to hurt performance; only what was fed through the functions mattered.
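For reference, a minimal sketch of the `D()` route inside a `TemplateStructure`. The structure itself is illustrative, not the original code, and it assumes `D` (DynamicDiff's derivative operator) is available via SymbolicRegression:

```julia
using SymbolicRegression  # assumes `D` is re-exported from DynamicDiff

# Illustrative structure: reject candidates where f is not monotonically
# increasing in its first argument, by checking the sign of ∂f/∂x1.
structure = TemplateStructure{(:f,)}(
    ((; f), (x1, x2)) -> begin
        o = f(x1, x2)
        if !o.valid
            return ValidVector(fill(1e9, length(o.x)), false)
        end
        df = D(f, 1)(x1, x2)  # derivative of f with respect to its 1st argument
        if !df.valid || any(<(0.0), df.x)
            return ValidVector(fill(1e9, length(o.x)), false)
        end
        return o
    end,
)
```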