Merge pull request #204 from darioizzo/neuralODE_update
NeuralODE update to notebooks
bluescarni authored Dec 7, 2024
2 parents 8c506df + 96ea01b commit 93727f5
Showing 8 changed files with 783 additions and 10 deletions.
3 changes: 3 additions & 0 deletions doc/conf.py
@@ -109,6 +109,9 @@
# NOTE: high order variational equations
# take too long in debug mode.
"oppenheimer_volkoff*",
# NOTE: the neuralODE II notebook
# runs a stochastic gradient descent which takes a long time
"NeuralODEs_II",
]

# Force printing traceback to stderr on execution error.
3 changes: 2 additions & 1 deletion doc/examples_ml.rst
@@ -28,7 +28,8 @@ Machine Learning

notebooks/ffnn
notebooks/torch_and_heyoka
notebooks/NeuralODEs
notebooks/NeuralODEs_I
notebooks/NeuralODEs_II
notebooks/NeuralHamiltonianODEs
notebooks/thermoNETs
notebooks/differentiable_atmosphere
2 changes: 1 addition & 1 deletion doc/notebooks/NeuralHamiltonianODEs.ipynb
@@ -10,7 +10,7 @@
"\n",
"*Greydanus, S., Dzamba, M., & Yosinski, J.* (2019). Hamiltonian Neural Networks. Advances in neural information processing systems, 32.\n",
"\n",
"in that only the perturbation of the Hamiltonian is parametrized by a network. Let us consider the same system as [in the previous example](./NeuralODEs.ipynb), but this time using the Hamiltonian formalism. We will shortly summarize, in what follows, the obvious as to show later how to obtain the same symbolically and using *heyoka*.\n",
"in that only the perturbation of the Hamiltonian is parametrized by a network. Let us consider the same system as [in the previous example](./NeuralODEs_I.ipynb), but this time using the Hamiltonian formalism. We will shortly summarize, in what follows, the obvious as to show later how to obtain the same symbolically and using *heyoka*.\n",
"\n",
"Let us first introduce our Lagrangian coordinates $\\mathbf q = [x, y]$ and their derivatives: $\\dot{\\mathbf q} = [v_x, v_y]$. Under this choice we may compute the kinetic energy of the system as:\n",
"\n",
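For context, the kinetic-energy expression that this last sentence leads into would, assuming unit mass as in the normalized two-body setting used throughout these notebooks, read:

$$
T = \frac{1}{2}\left(v_x^2 + v_y^2\right)
$$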
doc/notebooks/NeuralODEs_I.ipynb
@@ -5,10 +5,9 @@
"id": "894ad88d",
"metadata": {},
"source": [
"(tut_neural_ode)=\n",
"# Neural ODEs\n",
"# Neural ODEs - I\n",
"\n",
"We here consider, check also [Neural Hamiltonian ODE](<./NeuralHamiltonianODEs.ipynb>) example, a generic system in the form:\n",
"We here consider a generic system in the form:\n",
"\n",
"$$\n",
"\\dot {\\mathbf x} = \\mathbf f(\\mathbf x, \\mathcal N_\\theta(\\mathbf x))\n",
@@ -23,7 +22,11 @@
"\n",
"Whenever we have a Neural ODE, it is important to be able to define a training pipeline able to change the neural parameters $\\theta$ as to make some loss decrease. \n",
"\n",
"We indicate such a loss with $\\mathcal L(\\mathbf x(t; x_0, \\theta))$ and show in this example how to compute, using *heyoka*, its gradient, and hence how to setup a training pipeline for Neural ODEs."
"We indicate such a loss with $\\mathcal L(\\mathbf x(t; x_0, \\theta))$ and show in this example how to compute, using *heyoka*, its gradient, and hence how to setup a training pipeline for Neural ODEs.\n",
"\n",
"See also:\n",
"* [Neural ODEs II](<./NeuralODEs_II.ipynb>)\n",
"* [Neural Hamiltonian ODEs](<./NeuralHamiltonianODEs.ipynb>)"
]
},
{
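The construction introduced in this notebook, a right-hand side $\dot{\mathbf x} = \mathbf f(\mathbf x, \mathcal N_\theta(\mathbf x))$ whose parameters $\theta$ are later adjusted by a training loop, can be sketched in heyoka.py roughly as follows. The layer sizes, activations and dynamics below are illustrative assumptions rather than the notebook's actual setup, and the keyword names of the ``ffnn()`` factory are assumed to be ``inputs``, ``nn_hidden``, ``n_out``, ``activations`` and ``nn_wb``; the network weights and biases are mapped onto the runtime parameters ``par[i]`` so that $\theta$ can be changed without rebuilding the integrator.

```python
import heyoka as hy
import numpy as np

# State variables.
x, y = hy.make_vars("x", "y")

# A small illustrative network N_theta(x, y): one hidden layer of 8 neurons,
# 2 outputs. Weights and biases are passed as runtime parameters par[i].
n_wb = 2 * 8 + 8 * 2 + 8 + 2  # weights + biases of a [2, 8, 2] network
nn_out = hy.model.ffnn(
    inputs=[x, y],
    nn_hidden=[8],
    n_out=2,
    activations=[hy.tanh, hy.tanh],
    nn_wb=[hy.par[i] for i in range(n_wb)],
)

# Illustrative dynamics: the network output perturbs a simple rotation.
sys = [(x, -y + nn_out[0]), (y, x + nn_out[1])]

# Taylor integrator with a random initial guess for theta.
theta0 = np.random.normal(0.0, 0.1, n_wb)
ta = hy.taylor_adaptive(sys, [0.1, 0.2], pars=theta0)
ta.propagate_until(1.0)
```

A training pipeline would then wrap such an integrator, evaluating the loss $\mathcal L(\mathbf x(t; x_0, \theta))$ on the propagated state and updating ``ta.pars`` between iterations.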
766 changes: 766 additions & 0 deletions doc/notebooks/NeuralODEs_II.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion doc/notebooks/ffnn.ipynb
@@ -7,7 +7,7 @@
"# Feed-Forward Neural Networks\n",
"\n",
"In this tutorial, we will introduce feed-forward neural networks (FFNNs) in *heyoka.py*, and how to use *heyoka.py* FFNN factory to create FFNNs. \n",
"For an example on how to load a FFNN model from torch to *heyoka.py*, please check out: [Interfacing torch to heyoka.py](<./torch_and_heyoka.ipynb>); while for an example on applications of FFNNs in dynamical systems, check out: [Neural Hamiltonian ODE](<./NeuralHamiltonianODEs.ipynb>) and [Neural ODEs](<./NeuralODEs.ipynb>).\n",
"For an example on how to load a FFNN model from torch to *heyoka.py*, please check out: [Interfacing torch to heyoka.py](<./torch_and_heyoka.ipynb>); while for an example on applications of FFNNs in dynamical systems, check out: [Neural Hamiltonian ODE](<./NeuralHamiltonianODEs.ipynb>) and [Neural ODEs - I](<./NeuralODEs_I.ipynb>) and [Neural ODEs - II](<./NeuralODEs_II.ipynb>)\n",
"\n",
"To facilitate the instantiation of FFNNs, *heyoka.py* implements, in its module *model*, a Feed Forward Neural Network factory called ``ffnn()``. The Neural Network inputs $\\mathbf {in}$, with dimensionality $n_{in}$ are fed into a succession of $N \\ge 0 $ neural layers defined by:\n",
"\n",
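As a quick illustration of the ``ffnn()`` factory described above — with made-up layer sizes and random numerical weights rather than the notebook's, and assuming that ``nn_wb`` accepts a flat list of numerical coefficients as in the torch interfacing example — one can build a tiny network and evaluate it numerically through a compiled function:

```python
import heyoka as hy
import numpy as np

x = hy.make_vars("x")

# A [1, 4, 1] network with tanh activations: 1*4 + 4*1 weights plus 4 + 1 biases.
wb = np.random.normal(0.0, 1.0, 13)
out = hy.model.ffnn(
    inputs=[x],
    nn_hidden=[4],
    n_out=1,
    activations=[hy.tanh, hy.tanh],
    nn_wb=list(wb),
)

# Compile the symbolic output into a callable and evaluate it on a batch of inputs.
cf = hy.cfunc(out, vars=[x])
print(cf(np.linspace(-1.0, 1.0, 5).reshape(1, -1)))
```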
2 changes: 1 addition & 1 deletion doc/notebooks/single_precision.ipynb
@@ -14,7 +14,7 @@
"\n",
"In previous tutorials we saw how heyoka.py, in addition to the standard [double precision](https://en.wikipedia.org/wiki/Double-precision_floating-point_format), also supports computations in [extended precision](./ext_precision.ipynb) and [arbitrary precision](./arbitrary_precision.ipynb). Starting with version 3.2.0, heyoka.py supports also computations in [single precision](https://en.wikipedia.org/wiki/Single-precision_floating-point_format).\n",
"\n",
"Single-precision computations can lead to substantial performance benefits when high accuracy is not required. In particular, single-precision [batch mode](<./Batch mode overview.ipynb>) can use a SIMD width twice larger than double precision, leading to an increase by a factor of 2 of the computational throughput. In scalar computations, the use of single precision reduces by half the memory usage with respect to double precision, which can help alleviating performance issues in large ODE systems. This can be particularly noticeable in applications such as [neural ODEs](./NeuralODEs.ipynb).\n",
"Single-precision computations can lead to substantial performance benefits when high accuracy is not required. In particular, single-precision [batch mode](<./Batch mode overview.ipynb>) can use a SIMD width twice larger than double precision, leading to an increase by a factor of 2 of the computational throughput. In scalar computations, the use of single precision reduces by half the memory usage with respect to double precision, which can help alleviating performance issues in large ODE systems. This can be particularly noticeable in applications such as [neural ODEs](./NeuralODEs_I.ipynb).\n",
"\n",
"In NumPy, single-precision values are represented via the {py:class}`numpy.single` data type. Correspondingly, and similarly to what explained in the [extended precision](./ext_precision.ipynb) and [arbitrary precision](./arbitrary_precision.ipynb) tutorials, single-precision computations are activated by passing the ``fp_type=numpy.single`` keyword argument to functions and classes in the heyoka.py API.\n",
"\n",
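A minimal sketch of what activating single precision looks like in practice, using a plain harmonic oscillator (not the tutorial's example) and relying only on the ``fp_type=numpy.single`` keyword argument described above:

```python
import heyoka as hy
import numpy as np

x, v = hy.make_vars("x", "v")

# Harmonic oscillator integrated in single precision: the initial state is
# provided as a float32 array and fp_type selects the precision.
ta = hy.taylor_adaptive(
    [(x, v), (v, -x)],
    np.array([1.0, 0.0], dtype=np.single),
    fp_type=np.single,
)

ta.propagate_until(np.single(10.0))
print(ta.state.dtype)  # float32
```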
4 changes: 2 additions & 2 deletions doc/notebooks/var_ode_sys.ipynb
@@ -25,7 +25,7 @@
"\\boldsymbol{x} = \\boldsymbol{x}\\left(t, \\boldsymbol{x}_0, t_0, \\boldsymbol{\\alpha} \\right).\n",
"$$\n",
"\n",
"When solving numerically initial-value problems, it is often useful to compute not only the solution, but also its partial derivatives with respect to the initial conditions and/or the parameters. The derivatives with respect to the initial conditions, for instance, are needed for the computation of [chaos indicators](https://en.wikipedia.org/wiki/Lyapunov_exponent) and for [uncertainty propagation](https://en.wikipedia.org/wiki/Propagation_of_uncertainty), and they can also be used to propagate a small neighborhood in phase space around the initial conditions. The derivatives with respect to the parameters of the system are required when formulating optimisation and inversion problems such as orbit determination, trajectory optimisation and training of neural networks in [neural ODEs](./NeuralODEs.ipynb).\n",
"When solving numerically initial-value problems, it is often useful to compute not only the solution, but also its partial derivatives with respect to the initial conditions and/or the parameters. The derivatives with respect to the initial conditions, for instance, are needed for the computation of [chaos indicators](https://en.wikipedia.org/wiki/Lyapunov_exponent) and for [uncertainty propagation](https://en.wikipedia.org/wiki/Propagation_of_uncertainty), and they can also be used to propagate a small neighborhood in phase space around the initial conditions. The derivatives with respect to the parameters of the system are required when formulating optimisation and inversion problems such as orbit determination, trajectory optimisation and training of neural networks in [neural ODEs](./NeuralODEs_I.ipynb).\n",
"\n",
"There are two main methods for the computation of the partial derivatives. The first one is based on the application of automatic differentiation (AD) techniques directly to the numerical integration algorithm. This can be done either by replacing the algebra of floating-point numbers with the algebra of (generalised) [dual numbers](https://en.wikipedia.org/wiki/Dual_number) (aka truncated Taylor polynomials), or via [differentiable programming](https://en.wikipedia.org/wiki/Differentiable_programming) techniques. The former approach is used by libraries such as [pyaudi](https://github.com/darioizzo/audi), [desolver](https://github.com/Microno95/desolver) and [TaylorIntegration.jl](https://docs.sciml.ai/TaylorIntegration/stable/jet_transport/), while differentiable programming is popular in the machine learning community with projects such as [PyTorch](https://pytorch.org/), [JAX](https://jax.readthedocs.io/en/latest/) and [TensorFlow](https://www.tensorflow.org/). Differentiable programming is also popular in the [Julia programming language](https://en.wikipedia.org/wiki/Julia_(programming_language)) community.\n",
"\n",
@@ -757,7 +757,7 @@
"source": [
"## A note on computational efficiency\n",
"\n",
"{class}`~heyoka.var_ode_sys` uses internally the {func}`~heyoka.diff_tensors()` and {class}`~heyoka.dtens` API to formulate the variational equations. This means that the computation of the symbolic derivatives is performed in an efficient manner. For instance, reverse-mode symbolic automatic differentiation will be employed when computing the first-order variationals of ODE systems containing a large number of parameters (e.g., in [neural ODEs](./NeuralODEs.ipynb)).\n",
"{class}`~heyoka.var_ode_sys` uses internally the {func}`~heyoka.diff_tensors()` and {class}`~heyoka.dtens` API to formulate the variational equations. This means that the computation of the symbolic derivatives is performed in an efficient manner. For instance, reverse-mode symbolic automatic differentiation will be employed when computing the first-order variationals of ODE systems containing a large number of parameters (e.g., in [neural ODEs](./NeuralODEs_I.ipynb)).\n",
"\n",
"See the [computing derivatives](<./computing_derivatives.ipynb>) tutorial for a more in-depth discussion of how heyoka.py computes symbolic derivatives."
]
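A short sketch of the {class}`~heyoka.var_ode_sys` workflow outlined above, on an illustrative damped pendulum rather than the tutorial's system; the first-order variational equations are requested with respect to both the initial conditions and the runtime parameters by passing an explicit list of differentiation arguments:

```python
import heyoka as hy

x, v = hy.make_vars("x", "v")

# Damped pendulum with two runtime parameters (illustrative only).
sys = [(x, v), (v, -hy.par[0] * hy.sin(x) - hy.par[1] * v)]

# First-order variational equations w.r.t. the initial conditions and the parameters.
vsys = hy.var_ode_sys(sys, [x, v, hy.par[0], hy.par[1]], order=1)

# The variational integrator: heyoka.py augments the state with the
# derivatives, initialised to the appropriate identity/zero pattern.
ta = hy.taylor_adaptive(vsys, [0.05, 0.0], pars=[9.8, 0.1], compact_mode=True)
ta.propagate_until(3.0)

# The trailing entries of the state now hold d x / d x0, d x / d v0,
# d x / d par[0], ... evaluated at the final time.
print(ta.state[2:])
```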
