More tweaks #375

Draft

kshyatt wants to merge 2 commits into main from ksh/cuda_tweaks

Conversation

@kshyatt
Member

@kshyatt kshyatt commented Feb 18, 2026

Needed to get more MPSKit examples working

Comment threads (outdated): ext/TensorKitCUDAExt/auxiliary.jl, ext/TensorKitCUDAExt/cutensormap.jl (×5), src/tensors/braidingtensor.jl, src/tensors/treetransformers.jl
@codecov

codecov Bot commented Feb 26, 2026

Codecov Report

❌ Patch coverage is 45.83333% with 13 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/tensors/abstracttensor.jl | 25.00% | 6 Missing ⚠️ |
| ext/TensorKitCUDAExt/cutensormap.jl | 0.00% | 3 Missing ⚠️ |
| src/tensors/braidingtensor.jl | 60.00% | 2 Missing ⚠️ |
| src/tensors/adjoint.jl | 50.00% | 1 Missing ⚠️ |
| src/tensors/indexmanipulations.jl | 75.00% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| src/tensors/diagonal.jl | 86.97% <100.00%> (-2.80%) ⬇️ |
| src/tensors/tensor.jl | 82.75% <100.00%> (-1.73%) ⬇️ |
| src/tensors/adjoint.jl | 87.50% <50.00%> (-2.50%) ⬇️ |
| src/tensors/indexmanipulations.jl | 73.41% <75.00%> (-0.53%) ⬇️ |
| src/tensors/braidingtensor.jl | 68.78% <60.00%> (-1.61%) ⬇️ |
| ext/TensorKitCUDAExt/cutensormap.jl | 70.83% <0.00%> (-3.84%) ⬇️ |
| src/tensors/abstracttensor.jl | 53.73% <25.00%> (-1.58%) ⬇️ |

... and 19 files with indirect coverage changes


@kshyatt kshyatt marked this pull request as draft February 27, 2026 11:14
@kshyatt
Member Author

kshyatt commented Feb 27, 2026

Let's make this a draft too to cut down on CI thrash

@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch 2 times, most recently from f5857b3 to 32e182d on March 12, 2026 12:36
@kshyatt kshyatt force-pushed the ksh/cuda_tweaks branch 2 times, most recently from f5faaf6 to 2359d28 on March 23, 2026 14:24
@lkdvos lkdvos mentioned this pull request Mar 26, 2026
Member

@lkdvos lkdvos left a comment


Left some comments throughout; there are some things I am not entirely convinced by, but the rest looks great. Thanks for working through all of this!

For the similarstoragetype(tensor, storagetype) calls that you added: this seems like something we should probably discuss in a separate PR, and it would be great if we could consolidate this one to get the remainder of the fixes in. Would you be up for splitting these two things, and then getting this merged?

The same holds for some of the other comments I made: if we can postpone the things that are not obvious but already get the other parts in, that would probably be helpful.

(Note that I am very much aware that none of this is your fault; this PR has lived for too long, so the design has shifted a bit, for which I do apologize!)

Comment threads: ext/TensorKitCUDAExt/cutensormap.jl (×3, outdated), src/tensors/abstracttensor.jl (×2, 1 outdated), src/tensors/indexmanipulations.jl (×3, outdated), src/tensors/tensoroperations.jl (outdated)
@kshyatt
Member Author

kshyatt commented Mar 31, 2026

It's completely fine!! This has stayed open as I work through adding more tests for MPSKit, so I think we can pare off the simpler stuff we agree on, and then discuss things that are more contentious.

@github-actions
Contributor

github-actions Bot commented Mar 31, 2026

Your PR no longer requires formatting changes. Thank you for your contribution!

Member


I guess the changes here are no longer necessary since BraidingTensor now knows its own storagetype?

Member Author


I'm checking this now with MPSKit!

```diff
 W = insertleftunit(space(t), Val(i); conj, dual)
 if t isa TensorMap
-    return TensorMap{scalartype(t)}(copy ? Base.copy(t.data) : t.data, W)
+    return TensorMapWithStorage{scalartype(t), storagetype(t)}(copy ? Base.copy(t.data) : t.data, W)
```
Member


This doesn't look necessary, isn't the storagetype automatically deduced from the provided data vector?
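
As an illustration of the deduction being suggested here (a minimal sketch with a made-up wrapper type, not TensorKit's actual TensorMap constructor): when the constructor is parametric in the data vector, the storage parameter falls out of typeof(data) and never needs to be written at the call site.

```julia
# Hypothetical stand-in for a TensorMap-like wrapper; not TensorKit's API.
struct FakeTensorMap{T <: Number, A <: AbstractVector{T}}
    data::A
end

# The caller only fixes the scalar type T; the storage type A is deduced
# from the concrete type of the data vector that is passed in.
FakeTensorMap{T}(data::AbstractVector{T}) where {T <: Number} =
    FakeTensorMap{T, typeof(data)}(data)

t = FakeTensorMap{Float64}(rand(4))
typeof(t.data)  # Vector{Float64}: inferred, not spelled out at the call site
```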

```diff
 W = insertrightunit(space(t), Val(i); conj, dual)
 if t isa TensorMap
-    return TensorMap{scalartype(t)}(copy ? Base.copy(t.data) : t.data, W)
+    return TensorMapWithStorage{scalartype(t), storagetype(t)}(copy ? Base.copy(t.data) : t.data, W)
```
Member


Same comment here

```diff
 W = removeunit(space(t), Val(i))
 if t isa TensorMap
-    return TensorMap{scalartype(t)}(copy ? Base.copy(t.data) : t.data, W)
+    return TensorMapWithStorage{scalartype(t), storagetype(t)}(copy ? Base.copy(t.data) : t.data, W)
```
Member


Same comment here

Comment on lines +107 to +116

```julia
function similarstoragetype(::Type{<:AbstractTensorMap{T, S, N₁, N₂}}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
    return similarstoragetype(TA, T)
end
function similarstoragetype(t::AbstractTensorMap{T, S, N₁, N₂}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
    return similarstoragetype(typeof(t), TA)
end

# implement on arrays
similarstoragetype(::Type{A}) where {A <: DenseVector{<:Number}} = A
similarstoragetype(::Type{A}, ::Type{A}) where {A <: DenseVector{<:Number}} = A
```
Member


Are we using these somewhere? I'm slightly confused about the meaning of these functions with last argument A <: DenseVector. In general, I thought this function mostly meant: for a given tensor T1 (and possibly T2), create a new storagetype that is compatible with T1 but has scalartype T. In the case where we pass in a storagetype to begin with, doesn't that mean we already know that the answer has to be A?
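
For context, here is a hedged sketch of the semantics described in this comment, written with plain Julia arrays and placeholder names rather than TensorKit's definitions: map a storage type plus a new scalar type to a compatible storage type, in which case passing a storage type as the second argument can only ever return that type itself.

```julia
# Placeholder names; TensorKit's own similarstoragetype may differ in detail.
# "Storage compatible with A but with scalar type T": infer the result of
# calling `similar` on such a storage with the new element type.
similarstoragetype_sketch(::Type{A}, ::Type{T}) where {A <: DenseVector, T <: Number} =
    Base.promote_op(similar, A, Type{T})

# The point raised above: if the second argument is already a storage type A,
# the "compatible storage type" is tautologically A itself.
similarstoragetype_sketch(::Type{A}, ::Type{A}) where {A <: DenseVector} = A

similarstoragetype_sketch(Vector{Float64}, ComplexF64)      # Vector{ComplexF64}
similarstoragetype_sketch(Vector{Float64}, Vector{Float64}) # Vector{Float64}
```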

Comment thread src/tensors/adjoint.jl

```julia
space(t::AdjointTensorMap) = adjoint(space(parent(t)))
dim(t::AdjointTensorMap) = dim(parent(t))
storagetype(::Type{AdjointTensorMap{T, S, N₁, N₂, TT}}) where {T, S, N₁, N₂, TT} = storagetype(TT)
similarstoragetype(::AdjointTensorMap{T, S, N₁, N₂, TT}, ::Type{T′}) where {T, S, N₁, N₂, TT, T′ <: Number} = similarstoragetype(TT, T′)
```
Member


Isn't this also automatic from the definition of similarstoragetype(t::AbstractTensorMap, T) = similarstoragetype(storagetype(t), T) along with storagetype the line above?
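
A minimal illustration of that composition, with stand-in types instead of TensorKit's (the generic fallback quoted in the comment is assumed): once the wrapper forwards storagetype to its parent, the adjoint is covered without any method of its own.

```julia
# Stand-ins for AbstractTensorMap / AdjointTensorMap; not TensorKit's types.
abstract type FakeAbstractTM end

struct FakeTM{A <: DenseVector} <: FakeAbstractTM
    data::A
end
struct FakeAdjointTM{TT <: FakeAbstractTM} <: FakeAbstractTM
    parent::TT
end

fake_storagetype(::Type{FakeTM{A}}) where {A} = A
# the wrapper forwards to its parent, like the storagetype line quoted above
fake_storagetype(::Type{FakeAdjointTM{TT}}) where {TT} = fake_storagetype(TT)

fake_similarstoragetype(::Type{A}, ::Type{T}) where {A <: DenseVector, T <: Number} =
    Base.promote_op(similar, A, Type{T})
# the generic fallback quoted in the comment; no adjoint-specific method needed
fake_similarstoragetype(t::FakeAbstractTM, ::Type{T}) where {T <: Number} =
    fake_similarstoragetype(fake_storagetype(typeof(t)), T)

t = FakeAdjointTM(FakeTM(rand(3)))
fake_similarstoragetype(t, ComplexF64)  # Vector{ComplexF64}, via the parent's storage
```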

Comment thread src/tensors/adjoint.jl
dim(t::AdjointTensorMap) = dim(parent(t))
storagetype(::Type{AdjointTensorMap{T, S, N₁, N₂, TT}}) where {T, S, N₁, N₂, TT} = storagetype(TT)
similarstoragetype(::AdjointTensorMap{T, S, N₁, N₂, TT}, ::Type{T′}) where {T, S, N₁, N₂, TT, T′ <: Number} = similarstoragetype(TT, T′)
similarstoragetype(::AdjointTensorMap{T, S, N₁, N₂, TT}, ::Type{TA}) where {T, S, N₁, N₂, TT, TA <: DenseVector} = similarstoragetype(TT, TA)
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Same comment here about why the second argument would already be a <:DenseVector.

Comment on lines +20 to +21

```julia
$promote_op(::Type{TA}, ::Type{I}) where {TA <: DenseVector, I <: Sector} =
    similarstoragetype(TA, $promote_op(eltype(TA), I))
```
Member


This is also not really used, I guess?

