Add support for sum and Kronecker product of COO sparse matrices #45

Open · wants to merge 3 commits into base: main
2 changes: 2 additions & 0 deletions .gitignore
@@ -3,3 +3,5 @@
*.jl.mem
/Manifest.toml
/docs/build/

.vscode
58 changes: 58 additions & 0 deletions src/coo_linalg.jl
@@ -320,6 +320,64 @@

+(D::Diagonal, A::SparseMatrixCOO) = A + D

function Base.:+(A::SparseMatrixCOO{T1}, B::SparseMatrixCOO{T2}) where {T1<:Number, T2<:Number}
  A.m == B.m || throw(ArgumentError("A and B must have the same number of rows"))
  A.n == B.n || throw(ArgumentError("A and B must have the same number of columns"))

  T = promote_type(T1, T2)

  rowval_colvalA = collect(zip(A.rows, A.cols))
  rowval_colvalB = collect(zip(B.rows, B.cols))

  rowval_colval = union(rowval_colvalA, rowval_colvalB)
Member: I think there should be a more efficient way to do this. The union will iterate over both vectors, and to compute the sum you iterate over them again. Would it be more efficient to do both at the same time?

Author: At the moment I don't have anything in mind. Do you have an idea?

Member: Okay, let's create an issue on the package after this PR is merged to point to this as a possible improvement.

Member: How does SparseMatrixCSC do it?

  rowval = first.(rowval_colval)
  colval = last.(rowval_colval)

  nzval = similar(rowval, T)
  @inbounds for i in eachindex(rowval)
    nzval[i] = zero(T)
    for j in eachindex(rowval_colvalA)
      if rowval_colvalA[j] == rowval_colval[i]
        nzval[i] += A.vals[j]
        break
Member: If I am not wrong, you assume here that each matrix has a unique pair of (row, col), while I don't think that is necessarily true.

Author: But they should be unique, right? I mean, we should merge them at the moment of creating the sparse matrix, no?

Member: Really sorry @albertomercurio, I completely lost track of this PR. There is a discussion on this at https://github.com/orgs/JuliaSmoothOptimizers/discussions/51, and the status is that QuadraticModels is not merging them by default.

      end
    end
    for j in eachindex(rowval_colvalB)
      if rowval_colvalB[j] == rowval_colval[i]
        nzval[i] += B.vals[j]
        break
Member: See comment above.

      end
    end
  end

  return SparseMatrixCOO(A.m, A.n, rowval, colval, nzval)
end
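
Regarding the two review threads above (avoiding the second pass over the nonzeros, and handling duplicate (row, col) entries): one possible single-pass alternative, sketched below in the context of src/coo_linalg.jl and not part of this PR, accumulates both matrices into a Dict, which merges duplicate entries as a side effect. The name coo_sum_sketch is hypothetical, the constructor call follows the SparseMatrixCOO(m, n, rows, cols, vals) form used elsewhere in this diff, and the resulting entries are not sorted.

# Hypothetical single-pass alternative (not part of this PR): accumulating into a Dict
# visits each stored entry exactly once and merges duplicate (row, col) pairs.
function coo_sum_sketch(A::SparseMatrixCOO{T1}, B::SparseMatrixCOO{T2}) where {T1 <: Number, T2 <: Number}
  size(A) == size(B) || throw(DimensionMismatch("A and B must have the same dimensions"))
  T = promote_type(T1, T2)
  acc = Dict{Tuple{Int, Int}, T}()
  for (i, j, v) in zip(A.rows, A.cols, A.vals)
    acc[(i, j)] = get(acc, (i, j), zero(T)) + v
  end
  for (i, j, v) in zip(B.rows, B.cols, B.vals)
    acc[(i, j)] = get(acc, (i, j), zero(T)) + v
  end
  keyvals = collect(acc)              # Vector of ((row, col) => value) pairs
  rows = [p.first[1] for p in keyvals]
  cols = [p.first[2] for p in keyvals]
  vals = [p.second for p in keyvals]
  return SparseMatrixCOO(size(A, 1), size(A, 2), rows, cols, vals)
end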

function LinearAlgebra.kron(A::SparseMatrixCOO, B::SparseMatrixCOO)
  mA, nA = size(A)
  mB, nB = size(B)
  out_shape = (mA * mB, nA * nB)
  Annz = nnz(A)
  Bnnz = nnz(B)

  if Annz == 0 || Bnnz == 0
    T = promote_type(eltype(A), eltype(B))
    return SparseMatrixCOO(out_shape[1], out_shape[2], Int[], Int[], T[])
Codecov / codecov/patch: added line src/coo_linalg.jl#L364 was not covered by tests.

Member: This should be tested in the unit tests too.

  end

  row = (A.rows .- 1) .* mB
  row = repeat(row, inner = Bnnz)
  col = (A.cols .- 1) .* nB
  col = repeat(col, inner = Bnnz)

  row .+= repeat(B.rows, outer = Annz)
  col .+= repeat(B.cols, outer = Annz)

  # compute the values with a broadcasted product so mixed eltypes promote correctly
  data = repeat(A.vals, inner = Bnnz) .* repeat(B.vals, outer = Annz)

  return SparseMatrixCOO(out_shape[1], out_shape[2], row, col, data)
end
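
For reference, the index arithmetic above maps a stored entry at (iA, jA) of A and (iB, jB) of B to position ((iA - 1) * mB + iB, (jA - 1) * nB + jB) of kron(A, B), with value A[iA, jA] * B[iB, jB]. The snippet below is a small sanity check in the spirit of the tests further down, not part of the PR; the module name SparseMatricesCOO and the example values are assumptions.

# Sketch (not from the PR): verify the index mapping on a tiny example.
using SparseArrays, LinearAlgebra, SparseMatricesCOO

A = sparse([1, 2], [2, 1], [2.0, 3.0], 2, 2)          # nonzeros at (1, 2) and (2, 1)
B = sparse([1], [3], [5.0], 2, 3)                     # single nonzero at (1, 3)
C_coo = kron(SparseMatrixCOO(A), SparseMatrixCOO(B))  # 4×6 COO result

# A[1, 2] * B[1, 3] = 10.0 lands at row (1 - 1) * 2 + 1 = 1, column (2 - 1) * 3 + 3 = 6
@assert norm(kron(A, B) - C_coo) ≤ sqrt(eps())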

# maximum! functions
replace_if_minusinf(val::T, replacement::T) where {T} = (val == -T(Inf)) ? replacement : val
function LinearAlgebra.maximum!(f::Function, v::AbstractVector{T}, A::SparseMatrixCOO{T}) where {T}
15 changes: 15 additions & 0 deletions test/runtests.jl
@@ -293,6 +293,11 @@ end
  B_coo = D + A_coo
  @test norm(B_csc - B_coo) ≤ sqrt(eps()) * norm(B_csc)
  @test issorted(B_coo.cols)

  B = sprand(Float64, 20, 20, 0.1)
  C_csc = A + B
  C_coo = A_coo + SparseMatrixCOO(B)
  @test norm(C_csc - C_coo) ≤ sqrt(eps()) * norm(C_csc)
end

@testset "row/col reduce" begin
@@ -336,3 +341,13 @@ end
  maximum!(abs, v_coo', As_coo)
  @test norm(v - v_coo) ≤ sqrt(eps()) * norm(v)
end

@testset "Kronecker product" begin
  A = sprand(Float64, 10, 15, 0.2)
  B = sprand(Float64, 5, 7, 0.3)
  A_coo = SparseMatrixCOO(A)
  B_coo = SparseMatrixCOO(B)
  C = kron(A, B)
  C_coo = kron(A_coo, B_coo)
  @test norm(C - C_coo) ≤ sqrt(eps()) * norm(C)
end
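
Following the review comment on the uncovered empty-matrix branch of kron, a test along these lines (a sketch, not part of this PR; the testset name and matrix sizes are illustrative) could be added alongside the existing tests in test/runtests.jl:

# Sketch of a possible test, not included in the PR.
@testset "Kronecker product with an empty factor" begin
  A = sprand(Float64, 10, 15, 0.2)
  E = spzeros(Float64, 5, 7)   # no stored entries, so nnz(E) == 0
  C_coo = kron(SparseMatrixCOO(A), SparseMatrixCOO(E))
  @test size(C_coo) == (50, 105)
  @test nnz(C_coo) == 0
end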