update links to the new tapas repository
LegrandNico committed Nov 26, 2024
1 parent eb05544 commit 8ae1fd6
Showing 20 changed files with 60 additions and 60 deletions.
20 changes: 10 additions & 10 deletions README.md
@@ -1,15 +1,15 @@
-[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![license](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://github.com/ilabcode/pyhgf/blob/master/LICENSE) [![codecov](https://codecov.io/gh/ilabcode/pyhgf/branch/master/graph/badge.svg)](https://codecov.io/gh/ilabcode/pyhgf) [![black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/) [![pip](https://badge.fury.io/py/pyhgf.svg)](https://badge.fury.io/py/pyhgf)
+[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![license](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://github.com/ComputationalPsychiatry/pyhgf/blob/master/LICENSE) [![codecov](https://codecov.io/gh/ComputationalPsychiatry/pyhgf/branch/master/graph/badge.svg)](https://codecov.io/gh/ComputationalPsychiatry/pyhgf) [![black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/) [![pip](https://badge.fury.io/py/pyhgf.svg)](https://badge.fury.io/py/pyhgf)

# PyHGF: A Neural Network Library for Predictive Coding


-<img src="https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/logo.png" align="left" alt="hgf" width="160" HSPACE=10>
+<img src="https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/logo.png" align="left" alt="hgf" width="160" HSPACE=10>


PyHGF is a Python library for creating and manipulating dynamic probabilistic networks for predictive coding. These networks approximate Bayesian inference by optimizing beliefs through the diffusion of predictions and precision-weighted prediction errors. The network structure remains flexible during message-passing steps, allowing for dynamic adjustments. They can be used as a biologically plausible cognitive model in computational neuroscience or as a generalization of Bayesian filtering for designing efficient, modular decision-making agents. The default implementation supports the generalized Hierarchical Gaussian Filters (gHGF, Weber et al., 2024), but the framework is designed to be adaptable to other algorithms. Built on top of JAX, the core functions are differentiable and JIT-compiled where applicable. The library is optimized for modularity and ease of use, allowing seamless integration with other libraries in the ecosystem for Bayesian inference and optimization. Additionally, a binding with an implementation in Rust is under active development, which will further enhance flexibility during inference. You can find the method paper describing the toolbox [here](https://arxiv.org/abs/2410.09206) and the method paper describing the gHGF, which is the main framework currently supported by the toolbox [here](https://arxiv.org/abs/2305.10937).

-* 📖 [API Documentation](https://ilabcode.github.io/pyhgf/api.html)
-* ✏️ [Tutorials and examples](https://ilabcode.github.io/pyhgf/learn.html)
+* 📖 [API Documentation](https://computationalpsychiatry.github.io/pyhgf/api.html)
+* ✏️ [Tutorials and examples](https://computationalpsychiatry.github.io/pyhgf/learn.html)

## Getting started

@@ -21,7 +21,7 @@ The last official release can be installed from PyPI:

The current version under development can be installed from the master branch of the GitHub repository:

-`pip install "git+https://github.com/ilabcode/pyhgf.git"`
+`pip install "git+https://github.com/ComputationalPsychiatry/pyhgf.git"`

### How does it work?

@@ -32,12 +32,12 @@ Dynamic networks can be defined as a tuple containing the following variables:
* A set of update functions. An update function receives a network tuple and returns an updated network tuple.
* An update sequence (tuple) that defines the order and target of the update functions.

-<img src="https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/graph_network.svg" align="center" alt="networks" style="width:100%; height:auto;">
+<img src="https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/graph_network.svg" align="center" alt="networks" style="width:100%; height:auto;">
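Read back as plain Python, the tuple structure described above can be sketched as follows (an illustrative toy only: the `Network` type, the `prediction_error` function, and the node attributes are hypothetical names, not pyhgf's actual data structures):

```python
from typing import NamedTuple


class Network(NamedTuple):
    """Minimal sketch of a dynamic network as a tuple (hypothetical names)."""
    attributes: dict        # per-node parameters and sufficient statistics
    edges: tuple            # which nodes are coupled to which
    update_fns: tuple       # functions from a network to an updated network
    update_sequence: tuple  # (node index, update function) pairs, in order


def prediction_error(network: Network, node_idx: int) -> Network:
    """Toy update: move the node's mean halfway toward its observation."""
    attributes = dict(network.attributes)
    node = dict(attributes[node_idx])
    node["mean"] += 0.5 * (node["observation"] - node["mean"])
    attributes[node_idx] = node
    return network._replace(attributes=attributes)


network = Network(
    attributes={0: {"mean": 0.0, "observation": 1.0}},
    edges=((),),
    update_fns=(prediction_error,),
    update_sequence=((0, prediction_error),),
)

# Applying the update sequence returns a new, updated network tuple,
# leaving the structure itself free to change between message-passing steps.
for node_idx, update_fn in network.update_sequence:
    network = update_fn(network, node_idx)

print(network.attributes[0]["mean"])  # 0.5
```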


You can find a deeper introduction to creating and manipulating networks at the following link:

-* 🎓 [Creating and manipulating networks of probabilistic nodes](https://ilabcode.github.io/pyhgf/notebooks/0.2-Creating_networks.html)
+* 🎓 [Creating and manipulating networks of probabilistic nodes](https://computationalpsychiatry.github.io/pyhgf/notebooks/0.2-Creating_networks.html)


### The Generalized Hierarchical Gaussian Filter
@@ -46,7 +46,7 @@ Generalized Hierarchical Gaussian Filters (gHGF) are specific instances of dynam

You can find a deeper introduction to how the gHGF works at the following link:

-* 🎓 [Introduction to the Hierarchical Gaussian Filter](https://ilabcode.github.io/pyhgf/notebooks/0.1-Theory.html#theory)
+* 🎓 [Introduction to the Hierarchical Gaussian Filter](https://computationalpsychiatry.github.io/pyhgf/notebooks/0.1-Theory.html#theory)
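The central idea behind these update equations is that a node's posterior mean moves by a precision-weighted prediction error. A minimal, self-contained sketch of one such Gaussian belief update (a generic conjugate Gaussian update used for illustration, not the exact gHGF equations):

```python
def gaussian_update(mu, pi, u, pi_u):
    """One precision-weighted belief update for a Gaussian node.

    mu, pi : prior mean and precision of the belief
    u, pi_u: observed value and the precision of the observation
    """
    pi_post = pi + pi_u                      # precisions are additive
    delta = u - mu                           # prediction error
    mu_post = mu + (pi_u / pi_post) * delta  # precision-weighted step
    return mu_post, pi_post


# Filtering a short sequence of observations: each input nudges the belief
# in proportion to how reliable it is relative to the current belief.
mu, pi = 0.0, 1.0
for u in [0.5, 0.8, 0.2]:
    mu, pi = gaussian_update(mu, pi, u, pi_u=1.0)

print(round(mu, 3), pi)  # 0.375 4.0
```

The same weighting logic, applied recursively through a hierarchy of coupled nodes, is what turns the network into a learning system.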

### Model fitting

@@ -73,7 +73,7 @@
```python
hgf.input_data(input_data=u)
hgf.plot_trajectories();
```

-![png](https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/trajectories.png)
+![png](https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/trajectories.png)
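For intuition about the kind of data being filtered here, the generative process that such a binary model inverts can be sketched as a sigmoid-transformed Gaussian random walk (an illustrative toy, not the actual code used to produce the trajectories above):

```python
import math
import random

random.seed(1)

x = 0.0  # latent state following a Gaussian random walk
u = []   # binary observations presented to the model
for _ in range(200):
    x += random.gauss(0.0, 0.5)        # random-walk step (volatility)
    p = 1.0 / (1.0 + math.exp(-x))     # sigmoid maps the state to a probability
    u.append(1 if random.random() < p else 0)

print(len(u))  # 200
```

Inverting this process means recovering beliefs about the latent state `x` (and its volatility) from the binary sequence `u` alone.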

```python
from pyhgf.response import binary_softmax_inverse_temperature
```

@@ -93,7 +93,7 @@
```python
print(f"Sum of surprises = {surprise.sum()}")
```

## Acknowledgments

-This implementation of the Hierarchical Gaussian Filter was inspired by the original [Matlab HGF Toolbox](https://translationalneuromodeling.github.io/tapas). A Julia implementation is also available [here](https://github.com/ilabcode/HGF.jl).
+This implementation of the Hierarchical Gaussian Filter was inspired by the original [Matlab HGF Toolbox](https://translationalneuromodeling.github.io/tapas). A Julia implementation is also available [here](https://github.com/ComputationalPsychiatry/HGF.jl).

## References

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -95,7 +95,7 @@
    "icon_links": [
        dict(
            name="GitHub",
-            url="https://github.com/ilabcode/pyhgf",
+            url="https://github.com/ComputationalPsychiatry/pyhgf",
            icon="fa-brands fa-square-github",
        ),
        dict(
20 changes: 10 additions & 10 deletions docs/source/index.md
@@ -1,14 +1,14 @@
-[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![license](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://github.com/ilabcode/pyhgf/blob/master/LICENSE) [![codecov](https://codecov.io/gh/ilabcode/pyhgf/branch/master/graph/badge.svg)](https://codecov.io/gh/ilabcode/pyhgf) [![black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/) [![pip](https://badge.fury.io/py/pyhgf.svg)](https://badge.fury.io/py/pyhgf)
+[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![license](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://github.com/ComputationalPsychiatry/pyhgf/blob/master/LICENSE) [![codecov](https://codecov.io/gh/ComputationalPsychiatry/pyhgf/branch/master/graph/badge.svg)](https://codecov.io/gh/ComputationalPsychiatry/pyhgf) [![black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/) [![pip](https://badge.fury.io/py/pyhgf.svg)](https://badge.fury.io/py/pyhgf)

# PyHGF: A Neural Network Library for Predictive Coding

-<img src="https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/logo.png" align="left" alt="hgf" width="160" HSPACE=10>
+<img src="https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/logo.png" align="left" alt="hgf" width="160" HSPACE=10>


PyHGF is a Python library for creating and manipulating dynamic probabilistic networks for predictive coding. These networks approximate Bayesian inference by optimizing beliefs through the diffusion of predictions and precision-weighted prediction errors. The network structure remains flexible during message-passing steps, allowing for dynamic adjustments. They can be used as a biologically plausible cognitive model in computational neuroscience or as a generalization of Bayesian filtering for designing efficient, modular decision-making agents. The default implementation supports the generalized Hierarchical Gaussian Filters (gHGF, Weber et al., 2024), but the framework is designed to be adaptable to other algorithms. Built on top of JAX, the core functions are differentiable and JIT-compiled where applicable. The library is optimized for modularity and ease of use, allowing seamless integration with other libraries in the ecosystem for Bayesian inference and optimization. Additionally, a binding with an implementation in Rust is under active development, which will further enhance flexibility during inference. You can find the method paper describing the toolbox [here](https://arxiv.org/abs/2410.09206) and the method paper describing the gHGF, which is the main framework currently supported by the toolbox [here](https://arxiv.org/abs/2305.10937).

-* 📖 [API Documentation](https://ilabcode.github.io/pyhgf/api.html)
-* ✏️ [Tutorials and examples](https://ilabcode.github.io/pyhgf/learn.html)
+* 📖 [API Documentation](https://computationalpsychiatry.github.io/pyhgf/api.html)
+* ✏️ [Tutorials and examples](https://computationalpsychiatry.github.io/pyhgf/learn.html)

## Getting started

@@ -23,7 +23,7 @@ pip install pyhgf
The current version under development can be installed from the master branch of the GitHub repository:

```bash
-pip install "git+https://github.com/ilabcode/pyhgf.git"
+pip install "git+https://github.com/ComputationalPsychiatry/pyhgf.git"
```

### How does it work?
@@ -35,12 +35,12 @@ Dynamic networks can be defined as a tuple containing the following variables:
* A set of update functions. An update function receives a network tuple and returns an updated network tuple.
* An update sequence (tuple) that defines the order and target of the update functions.

-<img src="https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/graph_network.svg" align="center" alt="networks" style="width:100%; height:auto;">
+<img src="https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/graph_network.svg" align="center" alt="networks" style="width:100%; height:auto;">


You can find a deeper introduction to creating and manipulating networks at the following link:

-* 🎓 [Creating and manipulating networks of probabilistic nodes](https://ilabcode.github.io/pyhgf/notebooks/0.2-Creating_networks.html)
+* 🎓 [Creating and manipulating networks of probabilistic nodes](https://computationalpsychiatry.github.io/pyhgf/notebooks/0.2-Creating_networks.html)


### The Generalized Hierarchical Gaussian Filter
@@ -49,7 +49,7 @@ Generalized Hierarchical Gaussian Filters (gHGF) are specific instances of dynam

You can find a deeper introduction to how the gHGF works at the following link:

-* 🎓 [Introduction to the Hierarchical Gaussian Filter](https://ilabcode.github.io/pyhgf/notebooks/0.1-Theory.html#theory)
+* 🎓 [Introduction to the Hierarchical Gaussian Filter](https://computationalpsychiatry.github.io/pyhgf/notebooks/0.1-Theory.html#theory)

### Model fitting

@@ -76,7 +76,7 @@
```python
hgf.input_data(input_data=u)
hgf.plot_trajectories();
```

-![png](https://raw.githubusercontent.com/ilabcode/pyhgf/master/docs/source/images/trajectories.png)
+![png](https://raw.githubusercontent.com/ComputationalPsychiatry/pyhgf/master/docs/source/images/trajectories.png)

```python
from pyhgf.response import binary_softmax_inverse_temperature
```

@@ -96,7 +96,7 @@
```python
print(f"Sum of surprises = {surprise.sum()}")
```

## Acknowledgments

-This implementation of the Hierarchical Gaussian Filter was inspired by the original [Matlab HGF Toolbox](https://translationalneuromodeling.github.io/tapas). A Julia implementation is also available [here](https://github.com/ilabcode/HGF.jl).
+This implementation of the Hierarchical Gaussian Filter was inspired by the original [Matlab HGF Toolbox](https://translationalneuromodeling.github.io/tapas). A Julia implementation is also available [here](https://github.com/ComputationalPsychiatry/HGF.jl).

## References

2 changes: 1 addition & 1 deletion docs/source/learn.md
@@ -50,7 +50,7 @@ notebooks/Exercise_2_Bayesian_reinforcement_learning.ipynb

# Learn

-In this section, you can find tutorial notebooks that describe the internals of pyhgf, the theory behind the Hierarchical Gaussian filter, and step-by-step application and use cases of the model. At the beginning of every tutorial, you will find a badge [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ilabcode/pyhgf/blob/master/docs/source/notebooks/0.1-Creating_networks.ipynb) to run the notebook interactively in a Google Colab session.
+In this section, you can find tutorial notebooks that describe the internals of pyhgf, the theory behind the Hierarchical Gaussian filter, and step-by-step application and use cases of the model. At the beginning of every tutorial, you will find a badge [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ComputationalPsychiatry/pyhgf/blob/master/docs/source/notebooks/0.1-Creating_networks.ipynb) to run the notebook interactively in a Google Colab session.

## Theory

4 changes: 2 additions & 2 deletions docs/source/notebooks/0.1-Theory.ipynb
@@ -14,7 +14,7 @@
"(theory)=\n",
"# Introduction to the Generalised Hierarchical Gaussian Filter\n",
"\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ilabcode/pyhgf/blob/master/docs/source/notebooks/0.1-Theory.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ComputationalPsychiatry/pyhgf/blob/master/docs/source/notebooks/0.1-Theory.ipynb)\n",
"\n",
"The Hierarchical Gaussian Filter is a Bayesian model of belief updating under uncertainty in which the volatility of the environment is encoded by a hierarchy of probability distributions. A generalised version of this model {cite:p}`weber:2023` has further framed this into a neural network framework of probabilistic state nodes interacting with each other. At the heart of both frameworks lies a generative model that consists of a hierarchy of {term}`Gaussian Random Walk`s. The inversion of this model (i.e. estimating parameters from observed values) leads to the derivation of prediction errors and posterior updates that can turn these networks into learning systems. To fully understand this model, it is therefore central to start with the simplest case, the implied generative model, which can be seen as the probabilistic structure that generates observations.\n",
"\n",
@@ -664,7 +664,7 @@
"source": [
"### The propagation of prediction and prediction errors\n",
"\n",
-"Having described the model as a specific configuration of predictive nodes offers many advantages, especially in terms of modularity for the user. However, the model itself is not limited to the description of the generative process that we covered in the previous examples. The most interesting, and also the most complex, part of the modelling consists of the capability for the network to update the hierarchical structure in a Bayes-optimal way as new observations are presented. These steps are defined by a set of simple, one-step update equations that represent changes in beliefs about the hidden states (i.e. the sufficient statistics of the nodes) specified in the generative model. These equations were first described in {cite:t}`2011:mathys`, and the update equations for volatility and value coupling in the generalized Hierarchical Gaussian filter (on which most of the update functions in [pyhgf](https://github.com/ilabcode/pyhgf) are based) have been described in {cite:p}`weber:2023`. The exact computations in each step especially depend on the nature of the coupling (via {term}`VAPE`s vs. {term}`VOPE`s) between the parent and children nodes. It is beyond the scope of this tutorial to dive into the derivation of these steps and we refer the interested reader to the mentioned papers. Here, we provide a general overview of the dynamics of the update sequence that supports belief updating. The computations triggered by any observation at each time point can be ordered in time as shown in the belief update algorithm."
+"Having described the model as a specific configuration of predictive nodes offers many advantages, especially in terms of modularity for the user. However, the model itself is not limited to the description of the generative process that we covered in the previous examples. The most interesting, and also the most complex, part of the modelling consists of the capability for the network to update the hierarchical structure in a Bayes-optimal way as new observations are presented. These steps are defined by a set of simple, one-step update equations that represent changes in beliefs about the hidden states (i.e. the sufficient statistics of the nodes) specified in the generative model. These equations were first described in {cite:t}`2011:mathys`, and the update equations for volatility and value coupling in the generalized Hierarchical Gaussian filter (on which most of the update functions in [pyhgf](https://github.com/ComputationalPsychiatry/pyhgf) are based) have been described in {cite:p}`weber:2023`. The exact computations in each step especially depend on the nature of the coupling (via {term}`VAPE`s vs. {term}`VOPE`s) between the parent and children nodes. It is beyond the scope of this tutorial to dive into the derivation of these steps and we refer the interested reader to the mentioned papers. Here, we provide a general overview of the dynamics of the update sequence that supports belief updating. The computations triggered by any observation at each time point can be ordered in time as shown in the belief update algorithm."
]
},
{
