
Commit

Rename project to Φ-ML
holl- committed Aug 11, 2023
1 parent 94c28ba commit dfe1500
Showing 102 changed files with 754 additions and 754 deletions.
6 changes: 3 additions & 3 deletions .coveragerc
@@ -25,9 +25,9 @@ exclude_also =


omit =
ml4s/backend/tensorflow/_tf_cuda_resample.py
ml4s/backend/tensorflow/_compile_cuda.py
ml4s/math/fit.py
phiml/backend/tensorflow/_tf_cuda_resample.py
phiml/backend/tensorflow/_compile_cuda.py
phiml/math/fit.py


[run]
2 changes: 1 addition & 1 deletion .github/workflows/unit-tests.yml
@@ -37,4 +37,4 @@ jobs:
continue-on-error: true
run: |
pylint --rcfile=./tests/.pylintrc tests
pylint --rcfile=./ml4s/.pylintrc ml4s
pylint --rcfile=./phiml/.pylintrc phiml
2 changes: 1 addition & 1 deletion .github/workflows/update-gh-pages.yml
@@ -26,7 +26,7 @@ jobs:
pip list
- name: Build API with pdoc3
run: pdoc --html --output-dir docs --force ml4s
run: pdoc --html --output-dir docs --force phiml

- name: Build static HTML for Jupyter Notebooks
run: |
10 changes: 5 additions & 5 deletions CONTRIBUTING.md
@@ -1,11 +1,11 @@
# Contributing to ML4Science
# Contributing to Φ<sub>ML</sub>
All contributions are welcome!
You can mail the developers or get in touch on GitHub.


## Types of contributions we're looking for

We're open to all kinds of contributions that improve or extend the ML4Science library.
We're open to all kinds of contributions that improve or extend the Φ<sub>ML</sub> library.
We especially welcome

- Bug fixes
@@ -18,7 +18,7 @@ We especially welcome
We recommend contacting the developers before starting your contribution.
There may already be similar internal work or planned changes that would affect how the contribution should be implemented.

To contribute code, fork ML4Science on GitHub, make your changes, and submit a pull request.
To contribute code, fork Φ<sub>ML</sub> on GitHub, make your changes, and submit a pull request.
Make sure that your contribution passes all tests.


@@ -31,12 +31,12 @@ We would like to add the rule *Concise is better than repetitive.*

We use PyLint for static code analysis with specific configuration files for the
[tests](../tests/.pylintrc) and the
[code base](../ml4s/.pylintrc).
[code base](../phiml/.pylintrc).
PyLint is part of the automatic testing pipeline.
The warning log can be viewed online by selecting a `Tests` job and expanding the pylint output.

### Docstrings
The [API documentation](https://tum-pbs.github.io/ML4Science/) for ML4Science is generated using [pdoc](https://pdoc3.github.io/pdoc/).
The [API documentation](https://tum-pbs.github.io/PhiML/) for Φ<sub>ML</sub> is generated using [pdoc](https://pdoc3.github.io/pdoc/).
We use [Google style docstrings](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings)
with Markdown formatting.
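As a quick illustration of the expected layout, here is a minimal sketch of a Google-style docstring with Markdown formatting; the function and its parameters are hypothetical and shown only to demonstrate the format:

```python
def resample(values, new_size):
    """Resample `values` to `new_size` (hypothetical function, shown only for the docstring format).

    Args:
        values: Input tensor to resample.
        new_size: Target size along the spatial dimensions.

    Returns:
        Resampled tensor with the batch dimensions of `values` preserved.
    """
    raise NotImplementedError  # illustration only
```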

4 changes: 2 additions & 2 deletions MANIFEST.in
@@ -1,3 +1,3 @@
include ml4s/VERSION
include phiml/VERSION
include documentation/Package_Info.md
recursive-include ml4s/backend/tf/cuda/build *
recursive-include phiml/backend/tf/cuda/build *
106 changes: 53 additions & 53 deletions README.md

Large diffs are not rendered by default.

50 changes: 25 additions & 25 deletions docs/Autodiff.ipynb

Large diffs are not rendered by default.

38 changes: 19 additions & 19 deletions docs/Convert.ipynb
@@ -9,14 +9,14 @@
}
},
"source": [
"# Using Multiple Backends via ML4Science\n",
"# Using Multiple Backends via Φ<sub>ML</sub>\n",
"\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Convert.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Convert.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)"
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)"
]
},
{
@@ -25,7 +25,7 @@
"outputs": [],
"source": [
"%%capture\n",
"!pip install ml4s"
"!pip install phiml"
],
"metadata": {
"collapsed": false,
@@ -39,11 +39,11 @@
"source": [
"## How Backends are Chosen\n",
"\n",
"ML4Science can execute your instructions using Jax, PyTorch, TensorFlow or NumPy.\n",
"Φ<sub>ML</sub> can execute your instructions using Jax, PyTorch, TensorFlow or NumPy.\n",
"Which library is used generally depends on what tensors you pass to it.\n",
"Calling a function with PyTorch tensors will always invoke the corresponding PyTorch routine.\n",
"\n",
"Let's first look at the function [`math.use()`](ml4s/math#ml4s.math.use) which lets you set a global default backend."
"Let's first look at the function [`math.use()`](phiml/math#phiml.math.use) which lets you set a global default backend."
],
"metadata": {
"collapsed": false,
@@ -66,7 +66,7 @@
}
],
"source": [
"from ml4s import math\n",
"from phiml import math\n",
"\n",
"math.use('jax')"
],
@@ -80,7 +80,7 @@
{
"cell_type": "markdown",
"source": [
"From now on, new tensors created by ML4Science will be backed by Jax arrays."
"From now on, new tensors created by Φ<sub>ML</sub> will be backed by Jax arrays."
],
"metadata": {
"collapsed": false,
@@ -275,7 +275,7 @@
"source": [
"## Converting Tensors\n",
"\n",
"We can move tensors between different backends using [`math.convert()`](ml4s/math#ml4s.math.convert).\n",
"We can move tensors between different backends using [`math.convert()`](phiml/math#phiml.math.convert).\n",
"This will use [DLPack](https://github.com/dmlc/dlpack) under-the-hood when converting between ML backends.\n",
"\n",
"For the target backend, you can pass the module, module name or Backend object."
@@ -349,9 +349,9 @@
}
],
"source": [
"from ml4s.backend.torch import TORCH\n",
"from ml4s.backend.jax import JAX\n",
"from ml4s.backend.tensorflow import TENSORFLOW\n",
"from phiml.backend.torch import TORCH\n",
"from phiml.backend.jax import JAX\n",
"from phiml.backend.tensorflow import TENSORFLOW\n",
"\n",
"math.convert(jax_tensor, TORCH).default_backend"
],
@@ -406,11 +406,11 @@
"\n",
"NumPy functions are not differentiable but it nevertheless plays an important role in [representing constants](NumPy_Constants.html) in your code.\n",
"\n",
"[🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)"
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)"
],
"metadata": {
"collapsed": false,
40 changes: 20 additions & 20 deletions docs/Data_Types.ipynb
@@ -3,20 +3,20 @@
{
"cell_type": "markdown",
"source": [
"# Data Types in ML4Science\n",
"# Data Types in Φ<sub>ML</sub>\n",
"\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Data_Types.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Data_Types.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)\n",
"\n",
"*Need to differentiate but your input is an `int` tensor?\n",
"Need an `int64` tensor but got `int32`?\n",
"Need a `tensor` but got an `ndarray`?\n",
"Want an `ndarray` but your tensor is bound in a computational graph on the GPU?\n",
"Worry no longer for ML4Science has you covered!*"
"Worry no longer for Φ<sub>ML</sub> has you covered!*"
],
"metadata": {
"collapsed": false,
@@ -31,7 +31,7 @@
"outputs": [],
"source": [
"%%capture\n",
"!pip install ml4s"
"!pip install phiml"
],
"metadata": {
"collapsed": false,
@@ -45,9 +45,9 @@
"source": [
"## Floating Point Precision\n",
"\n",
"A major difference between ML4Science and its backends is the handling of floating point (FP) precision.\n",
"A major difference between Φ<sub>ML</sub> and its backends is the handling of floating point (FP) precision.\n",
"NumPy automatically casts arrays to the highest precision and other ML libraries will raise errors if data types do not match.\n",
"Instead, ML4Science lets you set the FP precision globally using [`set_global_precision(64)`](ml4s/math/#ml4s.math.set_global_precision)\n",
"Instead, Φ<sub>ML</sub> lets you set the FP precision globally using [`set_global_precision(64)`](phiml/math/#phiml.math.set_global_precision)\n",
"or by context and automatically casts tensors to that precision when needed.\n",
"The default is FP32 (single precision).\n",
"Let's set the global precision to FP64 (double precision)!"
@@ -64,7 +64,7 @@
"execution_count": 5,
"outputs": [],
"source": [
"from ml4s import math\n",
"from phiml import math\n",
"\n",
"math.set_global_precision(64) # double precision"
],
@@ -113,7 +113,7 @@
{
"cell_type": "markdown",
"source": [
"We can run parts of our code with a different precision by executing them within a [`precision`](ml4s/math/#ml4s.math.precision) block:"
"We can run parts of our code with a different precision by executing them within a [`precision`](phiml/math/#phiml.math.precision) block:"
],
"metadata": {
"collapsed": false,
@@ -148,7 +148,7 @@
{
"cell_type": "markdown",
"source": [
"ML4Science automatically casts tensors to the current precision level during operations.\n",
"Φ<sub>ML</sub> automatically casts tensors to the current precision level during operations.\n",
"Say we have a `float64` tensor but want to run 32-bit operations."
],
"metadata": {
@@ -186,7 +186,7 @@
"cell_type": "markdown",
"source": [
"Here, the tensor was cast to `float32` before applying the `sin` function.\n",
"If you want to explicitly cast a tensor to the current precision, use [`math.to_float()`](ml4s/math#ml4s.math.to_float)\n",
"If you want to explicitly cast a tensor to the current precision, use [`math.to_float()`](phiml/math#phiml.math.to_float)\n",
"\n",
"This system precludes any precision conflicts and you will never accidentally execute code with the wrong precision!"
],
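To illustrate the explicit cast, a hedged sketch assuming `math.to_float()` casts to whatever precision is currently active:

```python
from phiml import math
from phiml.math import DType

x = math.zeros(dtype=DType(float, 64))   # a float64 tensor
math.set_global_precision(32)
y = math.to_float(x)                     # explicit cast to the current precision
print(y.dtype)                           # expected: float32
```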
@@ -202,7 +202,7 @@
"source": [
"## Specifying Data Types\n",
"\n",
"ML4Science provides a unified data type class, [`DType`](ml4s/math#ml4s.math.DType).\n",
"Φ<sub>ML</sub> provides a unified data type class, [`DType`](phiml/math#phiml.math.DType).\n",
"However, you only need to specify the data type when creating a new `Tensor` from scratch.\n",
"When wrapping an existing tensor, the data type is kept as-is."
],
@@ -227,7 +227,7 @@
}
],
"source": [
"from ml4s.math import DType\n",
"from phiml.math import DType\n",
"\n",
"math.zeros(dtype=DType(float, 16))"
],
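A sketch contrasting wrapping an existing array with creating a tensor from scratch; it assumes `math.wrap()` and the `channel` dimension constructor work as described elsewhere in these docs:

```python
import numpy as np
from phiml import math
from phiml.math import DType, channel

native = np.arange(3, dtype=np.int64)
wrapped = math.wrap(native, channel('vector'))                  # wrapping keeps int64
created = math.zeros(channel(vector=3), dtype=DType(int, 32))   # from scratch: pass a DType
print(wrapped.dtype, created.dtype)
```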
@@ -361,11 +361,11 @@
"source": [
"## Further Reading\n",
"\n",
"[🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)"
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)"
],
"metadata": {
"collapsed": false,
36 changes: 18 additions & 18 deletions docs/Devices.ipynb
@@ -3,14 +3,14 @@
{
"cell_type": "markdown",
"source": [
"# Device Handling in ML4Science\n",
"# Device Handling in Φ<sub>ML</sub>\n",
"\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Devices.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Devices.ipynb)\n",
"&nbsp; • &nbsp; [🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)\n",
"\n",
"This notebook is work in progress. It will explain\n",
"\n",
@@ -30,9 +30,9 @@
"outputs": [],
"source": [
"%%capture\n",
"!pip install ml4s\n",
"!pip install phiml\n",
"\n",
"from ml4s import math, backend"
"from phiml import math, backend"
],
"metadata": {
"collapsed": false,
@@ -46,8 +46,8 @@
"source": [
"## Compute Devices\n",
"\n",
"ML4Science abstracts [`ComputeDevices`](ml4s/backend/#ml4s.backend.ComputeDevice), such as CPUs, GPUs and TPUs.\n",
"You can obtain a list of available devices using [`Backend.list_devices()`](ml4s/backend/#ml4s.backend.Backend.list_devices)"
"Φ<sub>ML</sub> abstracts [`ComputeDevices`](phiml/backend/#phiml.backend.ComputeDevice), such as CPUs, GPUs and TPUs.\n",
"You can obtain a list of available devices using [`Backend.list_devices()`](phiml/backend/#phiml.backend.Backend.list_devices)"
],
"metadata": {
"collapsed": false,
@@ -106,7 +106,7 @@
}
],
"source": [
"from ml4s.backend import NUMPY\n",
"from phiml.backend import NUMPY\n",
"NUMPY.list_devices('CPU')"
],
"metadata": {
@@ -161,7 +161,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/home/holl/PycharmProjects/ML4Science/ml4s/backend/_backend.py:245: RuntimeWarning: torch: Cannot select 'GPU' because no device of this type is available.\n",
"/home/holl/PycharmProjects/PhiML/phiml/backend/_backend.py:245: RuntimeWarning: torch: Cannot select 'GPU' because no device of this type is available.\n",
" warnings.warn(f\"{self.name}: Cannot select '{device}' because no device of this type is available.\", RuntimeWarning)\n"
]
},
@@ -263,7 +263,7 @@
{
"cell_type": "markdown",
"source": [
"[`math.to_device()`](ml4s/math#ml4s.math.to_device) also supports pytrees and data classes that contain tensors."
"[`math.to_device()`](phiml/math#phiml.math.to_device) also supports pytrees and data classes that contain tensors."
],
"metadata": {
"collapsed": false,
Expand All @@ -277,13 +277,13 @@
"source": [
"## Further Reading\n",
"\n",
"ML4Science also supports [moving tensors to different backend libraries](Convert.html) without copying them.\n",
"Φ<sub>ML</sub> also supports [moving tensors to different backend libraries](Convert.html) without copying them.\n",
"\n",
"[🌐 **ML4Science**](https://github.com/tum-pbs/ML4Science)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/ML4Science/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/ML4Science/ml4s)\n",
"[🌐 **Φ<sub>ML</sub>**](https://github.com/tum-pbs/PhiML)\n",
"&nbsp; • &nbsp; [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)\n",
"&nbsp; • &nbsp; [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)\n",
"&nbsp; • &nbsp; [**▶ Videos**]()\n",
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/ML4Science/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/ML4Science/Examples.html)"
"&nbsp; • &nbsp; [<img src=\"images/colab_logo_small.png\" height=4>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)"
],
"metadata": {
"collapsed": false,