Feature/hierarchical decoding models #21

Open
wants to merge 54 commits into base: feature/hierarchical_decoding
Commits (54)
a850d5f
allow masking of labels
themattinthehatt Oct 12, 2020
f00d2db
load labels_masks from data generator
themattinthehatt Oct 14, 2020
2f0dac7
more nan updates
themattinthehatt Oct 14, 2020
50b3ba2
doc updates
themattinthehatt Oct 15, 2020
9b1f010
multisession docs
themattinthehatt Oct 16, 2020
1a899fc
more multisession docs
themattinthehatt Oct 16, 2020
f6bc8be
make model class optional arg in load_best_model
themattinthehatt Oct 16, 2020
90bd14b
add flake8
themattinthehatt Oct 16, 2020
574415a
contributing file
themattinthehatt Oct 16, 2020
0c44790
get_best_model bug fix
themattinthehatt Oct 20, 2020
d006de2
move data loading util
themattinthehatt Oct 21, 2020
ad0808f
allow arhmms/decoders to utilize latents from non-ae models
themattinthehatt Oct 21, 2020
9bee2d3
small plotting updates
themattinthehatt Oct 27, 2020
99dd4f5
neural->labels decoding
themattinthehatt Oct 27, 2020
a416564
generalize get_test_metric
themattinthehatt Oct 28, 2020
b6a41c2
Update README.md
themattinthehatt Oct 29, 2020
46c2f74
small updates
themattinthehatt Nov 3, 2020
fed2799
allow decoding of ae latent motion energy
themattinthehatt Nov 13, 2020
08c881f
cleaning up integration test
themattinthehatt Nov 13, 2020
b909b28
multisession doc update
themattinthehatt Nov 16, 2020
682f166
small bug fix in ae video writer
themattinthehatt Nov 17, 2020
2f9006e
removing debugging printouts from sssvae
themattinthehatt Nov 18, 2020
65c09e7
Bump notebook from 6.0.3 to 6.1.5
dependabot[bot] Nov 18, 2020
9f12a7b
small plotting updates
themattinthehatt Dec 2, 2020
ca24683
ae plotting updates
themattinthehatt Dec 7, 2020
0448122
plotting function refactor
themattinthehatt Dec 7, 2020
f9273c5
save movie helper function
themattinthehatt Dec 7, 2020
75552bf
streamlined cond ae plotting functions round 1
themattinthehatt Jan 8, 2021
5743570
streamlined cond ae plotting functions round 2
themattinthehatt Jan 9, 2021
ff1e748
sss-vae docs
themattinthehatt Jan 11, 2021
bd0cc6f
doc updates
themattinthehatt Jan 11, 2021
2d9dccd
doc typo
themattinthehatt Jan 11, 2021
c3d277d
small bug fixes
themattinthehatt Jan 22, 2021
bfffdab
add flexibility to checking training splits
themattinthehatt Jan 22, 2021
c440887
sss-vae -> ps-vae
themattinthehatt Jan 26, 2021
c31e21e
ps-vae doc update
themattinthehatt Jan 26, 2021
92f6b85
Merge pull request #16 from ebatty/dependabot/pip/notebook-6.1.5
themattinthehatt Jan 27, 2021
13c2348
one-line fix for matplotlib display error
johnlyzhou Jan 28, 2021
4064e6a
added import statement
johnlyzhou Jan 28, 2021
fa0e47f
Merge pull request #18 from johnlyzhou/display-fix
themattinthehatt Jan 28, 2021
aeae5bc
fix directory error with new project
themattinthehatt Jan 29, 2021
2cf3eed
update default ae model arch
themattinthehatt Jan 29, 2021
3bd12d7
add batch norm params to hparams
themattinthehatt Jan 29, 2021
226c499
ps-vae example notebook
themattinthehatt Jan 31, 2021
aa7f43e
README update
themattinthehatt Jan 31, 2021
87ec44d
more updates to ps-vae example notebook
themattinthehatt Jan 31, 2021
b6debbc
data download instructions
themattinthehatt Feb 1, 2021
3fce701
updating tests for new ae default arch
themattinthehatt Feb 1, 2021
83ac4d3
Merge remote-tracking branch 'origin/master'
themattinthehatt Feb 1, 2021
d33fa5d
Bump notebook from 6.0.3 to 6.1.5 in /docs
dependabot[bot] Feb 1, 2021
6c24a1b
Merge pull request #20 from ebatty/dependabot/pip/docs/notebook-6.1.5
themattinthehatt Feb 1, 2021
35bf536
Update README.md
themattinthehatt Feb 1, 2021
d7c0676
conv and lstm hierarchical model file
nihaarshah Feb 5, 2021
77c97af
Adding a test file to new_branch
nihaarshah Feb 9, 2021
18 changes: 18 additions & 0 deletions .flake8
@@ -0,0 +1,18 @@
[flake8]
max-line-length = 99
ignore =
W504,
W503,
W605, # invalid escape sequence '\ '
E266,
E402, # module level import not at top of file
E226, # missing whitespace around arithmetic operator
exclude =
.git,
__pycache__,
__init__.py,
build,
dist,
docs/*
example/*
scratch/*
79 changes: 79 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,79 @@
# How to contribute

If you're interested in contributing to the behavenet package, please contact the project
developer Matt Whiteway at m.whiteway ( at ) columbia.edu.

If you would like to add a new PyTorch model to the package, you can find more detailed information
[here](behavenet/models/README.md).

Before submitting a pull request, please follow these steps:

## Style

The behavenet package follows the PEP8 style guidelines and allows line lengths of up to 99
characters. To ensure that your code matches these guidelines, please run flake8 on your code using
the provided configuration file. First, install flake8 in the behavenet conda environment:

```bash
(behavenet) $: pip install flake8
```

Once all code, tests, and documentation are in place, you can run flake8 from the project
directory:

```bash
(behavenet) $: flake8
```

## Documentation

Behavenet uses Sphinx and readthedocs to provide documentation to developers and users.

* complete all docstrings in new functions using Google's docstring style (see source code for examples, or the sketch below)
* provide inline comments when necessary; the more the merrier
* add a new user guide if necessary (`docs/source/user_guide.[new_model].rst`)
* update data structure docs if adding to hdf5 (`docs/source/data_structure.rst`)
* add new hyperparams to glossary (`docs/source/glossary.rst`)
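
As a reference, below is a minimal sketch of a Google-style docstring; the function and its
parameters are hypothetical and only meant to illustrate the format:

```python
import numpy as np


def smooth_latents(latents, window=5):
    """Smooth AE latents with a centered moving average.

    Args:
        latents (np.ndarray): array of shape (time, n_latents)
        window (int): width of the moving-average window, in time bins

    Returns:
        np.ndarray: smoothed latents, same shape as the input

    """
    kernel = np.ones(window) / window
    # smooth each latent dimension independently over time
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode='same'), axis=0, arr=latents)
```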

To check the documentation, you can compile it on your local machine first. To do so, first install
sphinx in the behavenet conda environment:

```bash
(behavenet) $: pip install sphinx==3.2.0 sphinx_rtd_theme==0.4.3 sphinx-automodapi==0.12
```

To compile the documentation, cd from the behavenet project directory into `docs` and run the
makefile:

```bash
(behavenet) $: cd docs
(behavenet) $: make html
```

## Testing

Behavenet uses pytest to unit test the package; in addition, an integration script is provided to
ensure the interlocking pieces play nicely together. Please write unit tests for all new
(non-plotting) functions, and if you update any existing functions, please update the corresponding
unit tests as well.
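
As a sketch, a unit test for the hypothetical `smooth_latents` helper from the documentation
section above might look like the following (the helper is inlined here so the snippet is
self-contained):

```python
import numpy as np


def smooth_latents(latents, window=5):
    # stand-in for the function under test (see the docstring sketch above)
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode='same'), axis=0, arr=latents)


def test_smooth_latents_preserves_shape():
    latents = np.random.randn(100, 8).astype('float32')
    assert smooth_latents(latents, window=5).shape == latents.shape


def test_smooth_latents_constant_input():
    # a constant signal is unchanged by a moving average away from the edges
    latents = np.ones((50, 3))
    assert np.allclose(smooth_latents(latents, window=5)[5:-5], 1.0)
```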

To run the unit tests, first install pytest in the behavenet conda environment:

```bash
(behavenet) $: pip install pytest
```

Then, from the project directory, run:

```bash
(behavenet) $: pytest
```

To run the integration script:

```bash
(behavenet) $: python tests/integration.py
```

Running the integration test will take approximately 1 minute with a GPU.
4 changes: 1 addition & 3 deletions README.md
@@ -1,7 +1,5 @@
# BehaveNet

NOTE: This is a beta version, we will release the first stable version by early February.

BehaveNet is a probabilistic framework for the analysis of behavioral video and neural activity.
This framework provides tools for compression, segmentation, generation, and decoding of behavioral
videos. Please see the
@@ -12,7 +10,7 @@ for more information about how to install the software and begin fitting models

Additionally, we provide an example dataset and several jupyter notebooks that walk you through how
to download the dataset, fit models, and analyze the results. The jupyter notebooks can be found
[here](example).
[here](examples).

## Bibtex

2 changes: 1 addition & 1 deletion behavenet/data/__init__.py
@@ -1 +1 @@
"""Test string"""
"""Data module"""
23 changes: 10 additions & 13 deletions behavenet/data/data_generator.py
@@ -192,14 +192,14 @@ def __init__(
self.n_trials = None
for i, signal in enumerate(signals):
if signal == 'images' or signal == 'neural' or signal == 'labels' or \
signal == 'labels_sc':
signal == 'labels_sc' or signal == 'labels_masks':
data_file = paths[i]
with h5py.File(data_file, 'r', libver='latest', swmr=True) as f:
self.n_trials = len(f[signal])
break
elif signal == 'ae_latents':
try:
latents = _load_pkl_dict(self.paths[signal], 'latents') #[0]
latents = _load_pkl_dict(self.paths[signal], 'latents')
except FileNotFoundError:
raise NotImplementedError(
('Could not open %s\nMust create ae latents from model;' +
@@ -274,7 +274,8 @@ def __getitem__(self, idx):
else:
sample[signal] = f[signal][str('trial_%04i' % idx)][()].astype(dtype)

elif signal == 'neural' or signal == 'labels' or signal == 'labels_sc':
elif signal == 'neural' or signal == 'labels' or signal == 'labels_sc' \
or signal == 'labels_masks':
dtype = 'float32'
with h5py.File(self.paths[signal], 'r', libver='latest', swmr=True) as f:
if idx is None:
@@ -286,25 +287,21 @@
else:
sample[signal] = [f[signal][str('trial_%04i' % idx)][()].astype(dtype)]

elif signal == 'ae_latents':
elif signal == 'ae_latents' or signal == 'latents':
dtype = 'float32'
sample[signal] = self._try_to_load(
signal, key='latents', idx=idx, dtype=dtype)
sample[signal] = self._try_to_load(signal, key='latents', idx=idx, dtype=dtype)

elif signal == 'ae_predictions':
dtype = 'float32'
sample[signal] = self._try_to_load(
signal, key='predictions', idx=idx, dtype=dtype)
sample[signal] = self._try_to_load(signal, key='predictions', idx=idx, dtype=dtype)

elif signal == 'arhmm' or signal == 'arhmm_states':
dtype = 'int32'
sample[signal] = self._try_to_load(
signal, key='states', idx=idx, dtype=dtype)
sample[signal] = self._try_to_load(signal, key='states', idx=idx, dtype=dtype)

elif signal == 'arhmm_predictions':
dtype = 'float32'
sample[signal] = self._try_to_load(
signal, key='predictions', idx=idx, dtype=dtype)
sample[signal] = self._try_to_load(signal, key='predictions', idx=idx, dtype=dtype)

else:
raise ValueError('"%s" is an invalid signal type' % signal)
@@ -626,7 +623,7 @@ def next_batch(self, dtype):

if self.as_numpy:
for i, signal in enumerate(sample):
if signal is not 'batch_idx':
if signal != 'batch_idx':
sample[signal] = [ss.cpu().detach().numpy() for ss in sample[signal]]
else:
if self.device == 'cuda':