Fix rrules of TensorOperations with DiagonalTensorMap
#210
Conversation
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #210      +/-   ##
==========================================
+ Coverage   82.17%   82.25%   +0.08%
==========================================
  Files          43       43
  Lines        5424     5433       +9
==========================================
+ Hits         4457     4469      +12
+ Misses        967      964       -3
```

View full report in Codecov by Sentry.
Looks good to me, but I wonder about the …
Honestly, that function is such a mess... From the docstring, it should return a mutable array of the given eltype and dimensions, which is precisely what I want here. LinearAlgebra strikes again with its inconsistency:

```julia
julia> d = Diagonal(rand(2));

julia> similar(d)
2×2 Diagonal{Float64, Vector{Float64}}:
 5.0e-324    ⋅
  ⋅         1.0e-323

julia> similar(d, size(d))
2×2 Matrix{Float64}:
   0.0  5.0e-324
 NaN    0.0

julia> similar(d, Float32)
2×2 Diagonal{Float32, Vector{Float32}}:
 -1.33821f-27   ⋅
   ⋅           4.0f-45

julia> similar(d, Float32, size(d))
2×2 Matrix{Float32}:
 2.11663f-24  2.20105f-23
 1.0f-45      1.0f-45
```

In principle, I could manually hook into the …
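As a side note (a generic sketch, not part of this PR): one way to sidestep the wrapper-dependent behavior of `similar` shown above is to allocate the dense buffer explicitly, either by constructing an `Array` directly or by calling `similar` on the wrapped parent vector, both of which always yield a plain `Matrix`:

```julia
using LinearAlgebra

d = Diagonal(rand(2))

# Allocate directly: always a Matrix{Float32}, regardless of the wrapper.
buf = Array{Float32}(undef, size(d))

# Or allocate based on the parent storage Vector, which also produces
# a dense Matrix{Float32} for the requested 2-dimensional size.
buf2 = similar(parent(d), Float32, size(d))

@assert buf isa Matrix{Float32}
@assert buf2 isa Matrix{Float32}
```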
So even if we choose to mimic the …

```julia
_dA = similar(A, promote_contract(scalartype(ΔC), scalartype(B), scalartype(α)), space(A))
```
So I cracked and just wrote everything in terms of … I did find that we silently ignore the partition of the …
Yes, that's a choice I made a long time ago, to facilitate the use case where a user doesn't want to think about a tensor having a bipartition into two sets of indices, and thus uses …
Ok, so it seems that there is some type instability in the local variables of the anonymous functions that go into the thunks of `dA` and `dB`. For `dA`:

```julia
Thunk{TensorKitChainRulesCoreExt.var"#90#105"{Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}, TensorMap{ComplexF64, ComplexSpace, 3, 1, Vector{ComplexF64}}, Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, Bool, TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, Tuple{Tuple{Int64}, Tuple{Int64, Int64}}, Bool, ComplexF64, Tuple{}, ProjectTo{Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, @NamedTuple{}}}}

Thunk{F} where F<:(TensorKitChainRulesCoreExt.var"#90#105"{<:Tuple{Union{Tuple{}, Tuple{Int64, Vararg{Int64}}}, Union{Tuple{}, Tuple{Int64, Vararg{Int64}}}}, TensorMap{ComplexF64, ComplexSpace, 3, 1, Vector{ComplexF64}}, Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, Bool, TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, Tuple{Tuple{Int64}, Tuple{Int64, Int64}}, Bool, ComplexF64, Tuple{}, ProjectTo{Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, @NamedTuple{}}})
```

and for `dB`:

```julia
Thunk{TensorKitChainRulesCoreExt.var"#95#110"{Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}, TensorMap{ComplexF64, ComplexSpace, 3, 1, Vector{ComplexF64}}, Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, Bool, TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, Tuple{Tuple{Int64}, Tuple{Int64, Int64}}, Bool, ComplexF64, Tuple{}, ProjectTo{TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, @NamedTuple{}}}}

Thunk{F} where F<:(TensorKitChainRulesCoreExt.var"#95#110"{<:Tuple{Union{Tuple{}, Tuple{Int64, Vararg{Int64}}}, Union{Tuple{}, Tuple{Int64, Vararg{Int64}}}}, TensorMap{ComplexF64, ComplexSpace, 3, 1, Vector{ComplexF64}}, Tensor{ComplexF64, ComplexSpace, 3, Vector{ComplexF64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, Bool, TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, Tuple{Tuple{Int64}, Tuple{Int64, Int64}}, Bool, ComplexF64, Tuple{}, ProjectTo{TensorMap{ComplexF64, ComplexSpace, 2, 1, Vector{ComplexF64}}, @NamedTuple{}}})
```
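For context (a generic sketch of the phenomenon, not this codebase's actual fix): this kind of abstract `Thunk{F} where F<:...` typing often arises when a closure captures a variable that Julia has to box, e.g. because the variable is assigned more than once before the closure is created. Computing the value through a small helper function (a function barrier) and capturing only the already-concrete result restores inference:

```julia
# Unstable: `x` is reassigned, so Julia boxes it and the closure's
# captured field is typed `Core.Box` instead of a concrete type.
function make_unstable(flag)
    x = 1
    if flag
        x = 2.0   # reassignment forces boxing of the captured variable
    end
    return () -> x + 1
end

# Stable: resolve the value behind a function barrier first, then
# capture the concretely-typed result in the closure.
pick(flag) = flag ? 2.0 : 1.0
function make_stable(flag)
    x = pick(flag)   # x::Float64, bound exactly once
    return () -> x + 1
end
```

Inspecting `make_unstable` with `@code_warntype` shows the boxed capture, while `make_stable` infers a concrete closure type.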
I hope my latest commit fixes this. I don't know why, all of a sudden, the …
Thanks for fixing this in any case. I guess that function should have been using …
I am also fine with any of those suggestions, whatever you think is most future-proof (probably `Val`). Anyway, for me this is good to merge, or to change to …
Rewrites these rules in terms of `similar` instead of `zerovector`, to ensure first contracting, then projecting onto a diagonal input. Fixes #209