
Commit

Improve scaling in Rayleigh quotient example (#14)
* Improve scaling in Rayleigh quotient example
* Rephrase a sentence about the tradeoff between Euclidean gradient & conversion versus providing a Riemannian gradient directly.

Also fix a few formatting issues that were found while working with Vale.

* Switch Documenter to use Julia 1.10 as well.
* Add `N` to ArmijoLinesearch.
* bump version

---------

Co-authored-by: Ronny Bergmann <[email protected]>
mateuszbaran and kellertuer authored Feb 15, 2024
1 parent 5d60ce5 commit 819b492
Showing 17 changed files with 58 additions and 52 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/TagBot.yml
@@ -6,7 +6,7 @@ on:
workflow_dispatch:
inputs:
lookback:
default: 3
default: '3'
permissions:
actions: read
checks: read
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -11,7 +11,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
julia-version: ["1.8", "1.9"]
julia-version: ["1.8", "1.9", "1.10"]
os: [ubuntu-latest, macOS-latest, windows-latest]
steps:
- uses: actions/checkout@v2
4 changes: 2 additions & 2 deletions .github/workflows/documenter.yml
@@ -13,10 +13,10 @@ jobs:
- uses: actions/checkout@v3
- uses: quarto-dev/quarto-actions/setup@v2
with:
version: 1.3.361
version: "1.3.361"
- uses: julia-actions/setup-julia@latest
with:
version: 1.9
version: "1.10"
- name: Julia Cache
uses: julia-actions/cache@v1
- name: Cache Quarto
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -43,7 +43,7 @@ please refactor the code, such that the gradient, or other function is in the co
`src/functions/` and follows the naming scheme:

* cost functions are always of the form `cost_` and a fitting name
* gradient functions are always of the the `gradient_` and a fitting name, followed by an `!`
* gradient functions are always of the `gradient_` and a fitting name, followed by an `!`
for in-place gradients and by `!!` if it is a `struct` that can provide both.
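For illustration, a minimal sketch of what this naming scheme could look like (the names `cost_example` and `gradient_example!` are hypothetical and not part of the package):

```julia
# Hypothetical sketch of the naming scheme, not actual ManoptExamples.jl code.
cost_example(M, p) = sum(abs2, p)        # `cost_` prefix plus a fitting name

function gradient_example!(M, X, p)      # `gradient_` prefix plus `!` for the in-place variant
    X .= 2 .* p
    return X
end
```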

It would be great if you could also add a small test for the functions and the problem you
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "ManoptExamples"
uuid = "5b8d5e80-5788-45cb-83d6-5e8f1484217d"
authors = ["Ronny Bergmann <[email protected]>"]
version = "0.1.4"
version = "0.1.5"

[deps]
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
2 changes: 1 addition & 1 deletion examples/Bezier-curves.qmd
@@ -168,7 +168,7 @@ pB_opt = gradient_descent(
f,
grad_f,
x0;
stepsize=ArmijoLinesearch(;
stepsize=ArmijoLinesearch(N;
initial_stepsize=1.0,
retraction_method=ExponentialRetraction(),
contraction_factor=0.5,
8 changes: 4 additions & 4 deletions examples/Difference-of-Convex-Benchmark.qmd
@@ -11,10 +11,10 @@ and the Difference of Convex Proximal Point Algorithm (DCPPA) [SouzaOliveira:201
Difference of Convex (DC) problems of the form. This Benchmark reproduces the results from [BergmannFerreiraSantosSouza:2023](@cite), Section 7.1.

```math
\operatorname*{arg\,min}_{p\in\mathcal M}\ \ g(p) - h(p)
\operatorname*{arg\,min}_{p∈\mathcal M}\ \ g(p) - h(p)
```

where $g,h\colon \mathcal M \to \mathbb R$ are geodesically convex function on the Riemannian manifold $\mathcal M$.
where $g,h\colon \mathcal M → \mathbb R$ are geodesically convex function on the Riemannian manifold $\mathcal M$.

```{julia}
#| echo: false
@@ -49,7 +49,7 @@ teal = paul_tol["mutedteal"]
We start with defining the two convex functions $g,h$ and their gradients as well as the DC problem $f$ and its gradient for the problem

```math
\operatorname*{arg\,min}_{p\in\mathcal M}\ \ \bigl( \log\bigr(\det(p)\bigr)\bigr)^4 - \bigl(\log \det(p) \bigr)^2.
\operatorname*{arg\,min}_{p∈\mathcal M}\ \ \bigl( \log\bigr(\det(p)\bigr)\bigr)^4 - \bigl(\log \det(p) \bigr)^2.
```

where the critical points obtain a functional value of $-\frac{1}{4}$.
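As an illustration of what such definitions could look like on the manifold of symmetric positive definite matrices (a sketch assuming the affine-invariant metric, under which the Riemannian gradient of $p \mapsto \log\det(p)$ is $p$ itself; the code in the example file may differ in details):

```julia
# Sketch only: g, h, and their Riemannian gradients for the DC problem above,
# assuming M = SymmetricPositiveDefinite(n) with the affine-invariant metric.
using LinearAlgebra, Manifolds

g(M, p) = log(det(p))^4
h(M, p) = log(det(p))^2
f(M, p) = g(M, p) - h(M, p)

# chain rule with grad (log det)(p) = p in the affine-invariant metric
grad_g(M, p) = 4 * log(det(p))^3 * p
grad_h(M, p) = 2 * log(det(p)) * p
```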
@@ -98,7 +98,7 @@ check_gradient(M, h, grad_h, p0, X0; plot=true)
```

which both pass the test.
We continue to define their inplace variants
We continue to define their in-place variants

```{julia}
#| output: false
8 changes: 4 additions & 4 deletions examples/Difference-of-Convex-Rosenbrock.qmd
@@ -13,7 +13,7 @@ This example is the code that produces the results in [BergmannFerreiraSantosSou
Both the Rosenbrock problem

```math
\operatorname*{argmin}_{x\in\mathbb R^2} a\bigl( x_1^2-x_2\bigr)^2 + \bigl(x_1-b\bigr)^2,
\operatorname*{argmin}_{x\in ℝ^2} a\bigl( x_1^2-x_2\bigr)^2 + \bigl(x_1-b\bigr)^2,
```

where $a,b>0$ and usually $b=1$ and $a \gg b$,
@@ -33,7 +33,7 @@ and also the (Euclidean) gradient

They are even available already here in `ManifoldExamples.jl`, see ``[`RosenbrockCost`](@ref ManoptExamples.RosenbrockCost)``{=commonmark} and ``[`RosenbrockGradient!!`](@ref ManoptExamples.RosenbrockGradient!!)``{=commonmark}.

Furthermore, the ``[`RosenbrockMetric`](@ref ManoptExamples.RosenbrockMetric)``{=commonmark} can be used on $\mathbb R^2$, that is
Furthermore, the ``[`RosenbrockMetric`](@ref ManoptExamples.RosenbrockMetric)``{=commonmark} can be used on $ℝ^2$, that is

```math
⟨X,Y⟩_{\mathrm{Rb},p} = X^\mathrm{T}G_pY, \qquad
@@ -233,7 +233,7 @@ function docE_∇f!(M, X, p)
end
```

Then we call the [difference of convex algorithm](https://manoptjl.org/stable/solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm) on Euclidean space $\mathbb R^2$.
Then we call the [difference of convex algorithm](https://manoptjl.org/stable/solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm) on Euclidean space $ℝ^2$.

```{julia}
E_doc_state = difference_of_convex_algorithm(
@@ -279,7 +279,7 @@ end
```

While the cost of the subgradient can be inferred automatically, we also have to provide the gradient of the subproblem.
For $X \in \partial h(p^{(k)})$ the subproblem to determine $p^{(k+1)}$ reads
For $X \in ∂ h(p^{(k)})$ the subproblem to determine $p^{(k+1)}$ reads

```math
\operatorname*{argmin}_{p\in\mathcal M} g(p) - \langle X, \log_{p^{(k)}}p\rangle
12 changes: 8 additions & 4 deletions examples/RayleighQuotient.qmd
@@ -66,7 +66,7 @@ Pkg.activate("."); # use the example environment,
using LRUCache, BenchmarkTools, LinearAlgebra, Manifolds, ManoptExamples, Manopt, Random
Random.seed!(42)
n = 500
A = Symmetric(randn(n,n))
A = Symmetric(randn(n, n) / n)
```

And the manifolds
@@ -102,8 +102,8 @@ X = zero_vector(M, p0)
we can both call

```{julia}
Y = grad_f(M,p0) # Allocates memory
grad_f(M,X,p0) # Computes in place of X and returns the result in X.
Y = grad_f(M, p0) # Allocates memory
grad_f(M, X, p0) # Computes in place of X and returns the result in X.
norm(M, p0, X-Y)
```

@@ -162,7 +162,7 @@ We can also benchmark both
@benchmark gradient_descent($M, $f, $grad_f, $p0)
```

We see, that the conversion costs a bit of performance, but if the Euclidean gradient is easier to compute, this might still be ok.
From these results we see that the conversion from the Euclidean to the Riemannian gradient
does require a small amount of effort and hence reduces the performance slightly.
Still, if the Euclidean gradient is easier to compute or already available, it is the faster
way in terms of coding. In the end this is a tradeoff between the derivation and implementation
effort for the Riemannian gradient and a slight performance reduction when using the Euclidean one.
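For intuition, a small sketch of what this conversion amounts to for the Rayleigh quotient on the sphere (the names `euclidean_grad` and `riemannian_grad` are only illustrative, not the package API): the Riemannian gradient is the orthogonal projection of the Euclidean gradient onto the tangent space at `p`.

```julia
# Sketch: Euclidean vs. Riemannian gradient of f(p) = p' * A * p on the unit sphere.
euclidean_grad(A, p) = 2 * A * p
function riemannian_grad(A, p)
    Y = euclidean_grad(A, p)
    return Y - (p' * Y) * p   # remove the normal (radial) component at p
end
```

This projection is essentially the extra work performed in every iteration when only the Euclidean gradient is provided.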

### A Solver based (also) on (approximate) Hessian information
To also involve the Hessian, we consider the [trust regions](https://manoptjl.org/stable/solvers/trust_regions/) solver with three cases:
6 changes: 3 additions & 3 deletions examples/Riemannian-mean.qmd
@@ -9,7 +9,7 @@ date: 07/02/2023
Each of the example objectives or problems stated in this package should
be accompanied by a [Quarto](https://quarto.org) notebook that illustrates their usage, like this one.

For this first example, the objective is a very common one, for example also used in the [Get Started: Optimize!](https://manoptjl.org/stable/tutorials/Optimize!/) tutorial of [Manopt.jl](https://manoptjl.org/).
For this first example, the objective is a very common one, for example also used in the [Get started: optimize!](https://manoptjl.org/stable/tutorials/Optimize!/) tutorial of [Manopt.jl](https://manoptjl.org/).

The second goal of this tutorial is to also illustrate how this package provides these examples, namely in both an easy-to-use and a performant way.

@@ -44,7 +44,7 @@ data = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];

We can define both the cost and gradient, ``[`RiemannianMeanCost`](@ref ManoptExamples.RiemannianMeanCost)``{=commonmark} and ``[`RiemannianMeanGradient!!`](@ref ManoptExamples.RiemannianMeanGradient!!)``{=commonmark}, respectively.
For their mathematical derivation and further explanations,
we again refer to [Get Started: Optimize!](https://manoptjl.org/stable/tutorials/Optimize!/).
we again refer to [Get started: optimize!](https://manoptjl.org/stable/tutorials/Optimize!/).

```{julia}
#| output: false
@@ -62,7 +62,7 @@ x1 = gradient_descent(M, f, grad_f, first(data))
## Variant 2: Using the objective

A shorter way is to directly obtain the [Manifold objective](https://manoptjl.org/stable/plans/objective/) including these two functions.
Here, we want to specify that the objective can do inplace-evaluations using the `evaluation=`-keyword. The objective can be obtained calling
Here, we want to specify that the objective can do in-place-evaluations using the `evaluation=`-keyword. The objective can be obtained calling
``[`Riemannian_mean_objective`](@ref ManoptExamples.Riemannian_mean_objective)``{=commonmark} as

```{julia}
9 changes: 5 additions & 4 deletions examples/Total-Variation.qmd
@@ -41,9 +41,9 @@ where $α > 0$ is a weight parameter.

The challenge here is that most classical algorithms, like gradient descent or quasi-Newton, assume the cost $f(p)$ to be smooth such that the gradient exists at every point. In our setting that is not the case since the distance is not differentiable for any $p_i=p_{i+1}$. So we have to use another technique.
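To make the model concrete, here is a rough sketch of such an L2-TV cost for a signal `p` with data `s` on a manifold `M` (illustrative only; the package provides this functionality as `ManoptExamples.L2_Total_Variation`, whose signature takes the power manifold and may differ from this sketch):

```julia
# Sketch of an L2-TV cost: data fidelity plus α times first-order total variation.
using Manifolds

function l2_tv_cost_sketch(M, s, α, p)
    data_term = sum(distance(M, p[i], s[i])^2 for i in eachindex(p)) / 2
    tv_term = sum(distance(M, p[i], p[i + 1]) for i in 1:(length(p) - 1))
    return data_term + α * tv_term
end
```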

## THe Cyclic Proximal Point algorithm
## The Cyclic Proximal Point algorithm

If the cost consists of a sum of functions, where each of the proximal maps is “easy to evaluate”, for best of cases in closed form, we can “apply the proximal maps in a cyclic fashion” and obtain the the [Cyclic Proximal Point Algorithm](https://manoptjl.org/stable/solvers/cyclic_proximal_point/) [Bacak:2014](@cite).
If the cost consists of a sum of functions, where each of the proximal maps is “easy to evaluate”, for best of cases in closed form, we can “apply the proximal maps in a cyclic fashion” and obtain the [Cyclic Proximal Point Algorithm](https://manoptjl.org/stable/solvers/cyclic_proximal_point/) [Bacak:2014](@cite).

Both for the distance and the squared distance, we have [generic implementations](https://juliamanifolds.github.io/ManifoldDiff.jl/stable/library/#Proximal-Maps); since this happens in a cyclic manner, there is also always one of the arguments involved in the prox and never both.
We can improve the performance slightly by computing all proxes in parallel that do not interfere. To be precise, we can first compute in parallel all proxes of distances in the regularizer that start with an odd index, and afterwards all that start with an even index.
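As a rough sketch (assumptions: `proxes` is a tuple of functions `(M, λ, p) -> q` and the parameter sequence is `λ_k = 1/k`; this is not the Manopt.jl implementation, which additionally handles stopping criteria, evaluation order, and recording), one outer iteration simply applies the proximal maps one after the other:

```julia
# Sketch of a cyclic proximal point iteration.
function cyclic_proximal_point_sketch(M, proxes, p0; iterations=100, λ=k -> 1 / k)
    p = p0
    for k in 1:iterations
        for prox in proxes          # cycle through all proximal maps
            p = prox(M, λ(k), p)    # each prox returns a new iterate
        end
    end
    return p
end
```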
@@ -65,6 +65,7 @@ ENV["GKSwstype"] = "100"
#| output: false
using Manifolds, Manopt, ManoptExamples, ManifoldDiff
using ManifoldDiff: prox_distance
using ManoptExamples: prox_Total_Variation
n = 500 #Signal length
σ = 0.2 # amount of noise
α = 0.5# in the TV model
@@ -120,7 +121,7 @@ Defining cost and the proximal maps, which are actually 3 proxes to be precise.
```{julia}
#| output: false
f(N, p) = ManoptExamples.L2_Total_Variation(N, s, α, p)
proxes_f = ((N, λ, p) -> prox_distance(N, λ, s, p, 2), (N, λ, p) -> prox_TV(N, α * λ, p))
proxes_f = ((N, λ, p) -> prox_distance(N, λ, s, p, 2), (N, λ, p) -> prox_Total_Variation(N, α * λ, p))
```

We run the algorithm
@@ -196,7 +197,7 @@ We can generalize the total variation also to a second order total variation. Ag

Another extension for both first and second order TV is to apply this to manifold-valued images $S = (S_{i,j})_{i,j=1}^{m,n} \in \mathcal M^{m,n}$, where the distances in the regularizer are then used in both the first dimension $i$ and the second dimension $j$ in the data.

## Technical Details
## Technical details

This version of the example was generated with the following package versions.

2 changes: 1 addition & 1 deletion examples/_quarto.yml
@@ -22,4 +22,4 @@ format:
variant: -raw_html+tex_math_dollars
wrap: preserve

jupyter: julia-1.9
jupyter: julia-1.10
18 changes: 9 additions & 9 deletions src/data/artificial_images.jl
@@ -3,7 +3,7 @@
generate an artificial InSAR image, i.e. phase valued data, of size `pts` x
`pts` points.
This data set was introduced for the numerical examples in [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1).
This data set was introduced for the numerical examples in [BergmannLausSteidlWeinmann:2014:1](@cite).
"""
function artificialIn_SAR_image(pts::Integer)
# variables
@@ -57,9 +57,9 @@ end
Generate an artificial image of data on the 2 sphere,
# Arguments
* `pts` (`64`) size of the image in `pts`×`pts` pixel.
* `pts`: (`64`) size of the image in `pts`×`pts` pixel.
This example dataset was used in the numerical example in Section 5.5 of [Laus et al., SIAM J Imag Sci., 2017](@cite LausNikolovaPerschSteidl:2017)
This example dataset was used in the numerical example in Section 5.5 of [LausNikolovaPerschSteidl:2017](@cite)
It is based on [`artificial_S2_rotation_image`](@ref) extended by small whirl patches.
"""
@@ -105,7 +105,7 @@ create a whirl within the `pts`×`pts` patch of
These patches are used within [`artificial_S2_whirl_image`](@ref).
# Optional Parameters
* `pts` (`5`) size of the patch. If the number is odd, the center is the north pole.
* `pts`: (`5`) size of the patch. If the number is odd, the center is the north pole.
"""
function artificial_S2_whirl_patch(pts::Int=5)
patch = fill([0.0, 0.0, -1.0], pts, pts)
@@ -130,7 +130,7 @@ end
create an artificial image of symmetric positive definite matrices of size
`pts`×`pts` pixel with a jump of size `stepsize`.
This dataset was used in the numerical example of Section 5.2 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016).
This dataset was used in the numerical example of Section 5.2 of [BacakBergmannSteidlWeinmann:2016](@cite).
"""
function artificial_SPD_image(pts::Int=64, stepsize=1.5)
r = range(0; stop=1 - 1 / pts, length=pts)
@@ -164,7 +164,7 @@ end
create an artificial image of symmetric positive definite matrices of size
`pts`×`pts` pixel with right hand side `fraction` is moved upwards.
This data set was introduced in the numerical examples of Section of [Bergmann, Presch, Steidl, SIAM J Imag Sci, 2016](@cite BergmannPerschSteidl:2016)
This data set was introduced in the numerical examples of Section of [BergmannPerschSteidl:2016](@cite)
"""
function artificial_SPD_image2(pts=64, fraction=0.66)
Zl = 4.0 * Matrix{Float64}(I, 3, 3)
@@ -216,10 +216,10 @@ end
Create an image with a rotation on each axis as a parametrization.
# Optional Parameters
* `pts` (`64`) number of pixels along one dimension
* `rotations` (`(.5,.5)`) number of total rotations performed on the axes.
* `pts`: (`64`) number of pixels along one dimension
* `rotations`: (`(.5,.5)`) number of total rotations performed on the axes.
This dataset was used in the numerical example of Section 5.1 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016).
This dataset was used in the numerical example of Section 5.1 of [BacakBergmannSteidlWeinmann:2016](@cite).
"""
function artificial_S2_rotation_image(
pts::Int=64, rotations::Tuple{Float64,Float64}=(0.5, 0.5)
15 changes: 8 additions & 7 deletions src/data/artificial_signals.jl
@@ -6,10 +6,10 @@ Creates a Signal of (phase-valued) data represented on the
[`Circle`](https://juliamanifolds.github.io/Manifolds.jl/latest/manifolds/circle.html) with increasing slope.
# Optional
* `pts` (`500`) number of points to sample the function.
* `slope` (`4.0`) initial slope that gets increased afterwards
* `pts`: (`500`) number of points to sample the function.
* `slope`: (`4.0`) initial slope that gets increased afterwards
This data set was introduced for the numerical examples in [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1)
This data set was introduced for the numerical examples in [BergmannLausSteidlWeinmann:2014:1](@cite)
"""
@@ -41,10 +41,11 @@ end
generate a real-valued signal having piecewise constant, linear and quadratic
intervals with jumps in between. If the resulting manifold the data lives on,
is the [`Circle`](https://juliamanifolds.github.io/Manifolds.jl/latest/manifolds/circle.html)
the data is also wrapped to ``[-\pi,\pi)``. This is data for an example from [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1).
the data is also wrapped to ``[-\pi,\pi)``. This is data for an example from [BergmannLausSteidlWeinmann:2014:1](@cite).
# Optional
* `pts` – (`500`) number of points to sample the function
* `pts`: (`500`) number of points to sample the function
"""
function artificial_S1_signal(pts::Integer=500)
t = range(0.0, 1.0; length=pts)
@@ -54,7 +55,7 @@ end
@doc raw"""
artificial_S1_signal(x)
evaluate the example signal ``f(x), x ∈ [0,1]``,
of phase-valued data introduced in Sec. 5.1 of [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1)
of phase-valued data introduced in Sec. 5.1 of [BergmannLausSteidlWeinmann:2014:1](@cite)
for values outside that interval, this Signal is `missing`.
"""
function artificial_S1_signal(x::Real)
@@ -85,7 +86,7 @@ end
@doc raw"""
artificial_S2_composite_Bezier_curve()
Generate a composite Bézier curve on the [Sphere]() ``\mathbb S^2`` that was used in [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018).
Generate a composite Bézier curve on the [Sphere]() ``\mathbb S^2`` that was used in [BergmannGousenbourger:2018](@cite).
It consists of 4 segments connecting the points
```math
12 changes: 6 additions & 6 deletions src/objectives/BezierCurves.jl
@@ -197,14 +197,14 @@ controlpoints or a.
This method reduces the points depending on the optional `reduce` symbol
* `:default` no reduction is performed
* `:continuous` for a continuous function, the junction points are doubled at
* `:default`: no reduction is performed
* `:continuous`: for a continuous function, the junction points are doubled at
``b_{0,i}=b_{n_{i-1},i-1}``, so only ``b_{0,i}`` is in the vector.
* `:differentiable` for a differentiable function additionally
* `:differentiable`: for a differentiable function additionally
``\log_{b_{0,i}}b_{1,i} = -\log_{b_{n_{i-1},i-1}}b_{n_{i-1}-1,i-1}`` holds.
hence ``b_{n_{i-1}-1,i-1}`` is omitted.
If only one segment is given, all points of `b` – i.e. `b.pts` are returned.
If only one segment is given, all points of `b`, `b.pts`, are returned.
"""
function get_Bezier_points(
M::AbstractManifold, B::AbstractVector{<:BezierSegment}, reduce::Symbol=:default
@@ -259,12 +259,12 @@ see also [`get_Bezier_points`](@ref). For ease of the following, let ``c=(c_1,
and ``d=(d_1,…,d_m)``, where ``m`` denotes the number of components the composite Bézier
curve consists of. Then
* `:default` ``k = m + \sum_{i=1}^m d_i`` since each component requires one point more than
* `:default`: ``k = m + \sum_{i=1}^m d_i`` since each component requires one point more than
its degree. The points are then ordered in tuples, i.e.
````math
B = \bigl[ [c_1,…,c_{d_1+1}], (c_{d_1+2},…,c_{d_1+d_2+2}],…, [c_{k-m+1+d_m},…,c_{k}] \bigr]
````
* `:continuous` ``k = 1+ \sum_{i=1}{m} d_i``, since for a continuous curve start and end
* `:continuous`: ``k = 1+ \sum_{i=1}{m} d_i``, since for a continuous curve start and end
point of successive components are the same, so the very first start point and the end
points are stored.
````math
2 changes: 1 addition & 1 deletion src/objectives/RobustPCA.jl
@@ -111,7 +111,7 @@ parameter ``ε``.
!!! note
Since the construction is independent of the manifold, that argument is optional and
mainly provided to comply with other objectives. Similarly, independent of the `evaluation`,
indeed the gradient always allows for both the allocating and the inplace variant to be used,
indeed the gradient always allows for both the allocating and the in-place variant to be used,
though that keyword is used to setup the objective.
"""
function robust_PCA_objective(
Expand Down
4 changes: 2 additions & 2 deletions src/objectives/TotalVariation.jl
@@ -291,11 +291,11 @@ end
X = grad_Total_Variation(M, λ, x[, p=1])
grad_Total_Variation!(M, X, λ, x[, p=1])
Compute the (sub)gradient ``\partial F`` of all forward differences occurring,
Compute the (sub)gradient ``∂f`` of all forward differences occurring,
in the power manifold array, i.e. of the function
```math
F(x) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^p(x_i,x_j)
f(p) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^p(x_i,x_j)
```
where ``i`` runs over all indices of the `PowerManifold` manifold `M`

2 comments on commit 819b492

@kellertuer
Member


@JuliaRegistrator


Registration pull request created: JuliaRegistries/General/100950

Tip: Release Notes

Did you know you can add release notes too? Just add markdown formatted text underneath the comment after the text
"Release notes:" and it will be added to the registry PR, and if TagBot is installed it will also be added to the
release that TagBot creates. i.e.

@JuliaRegistrator register

Release notes:

## Breaking changes

- blah

To add them here just re-invoke and the PR will be updated.

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:

git tag -a v0.1.5 -m "<description of version>" 819b492330bdc437134c7d748dc0469a1039de0c
git push origin v0.1.5
