
Implement logger adapter to handle different loggers (TensorBoard and Wandb for now) #816

Open
wants to merge 15 commits into base: main

Conversation

@mikcnt (Contributor) commented Jan 2, 2022

Description

This PR resolves #89.

  • I created a base logger class (loggers.ForecastingLoggerBase) to handle different loggers (e.g., TensorBoard, Wandb) with the same syntax. It inherits from the PyTorch Lightning logger base class and implements three new methods used across the PyTorch Forecasting project: add_figure, add_embeddings, and log_gradient_flow.
  • I implemented two new logger classes for TensorBoard and Wandb based on loggers.ForecastingLoggerBase, each also inheriting from its PyTorch Lightning counterpart (pytorch_lightning.loggers.TensorBoardLogger and pytorch_lightning.loggers.WandbLogger, respectively). A minimal sketch of this structure is shown after the list.
  • I modified the tests, replacing all the pytorch_lightning.loggers.TensorBoardLogger instances with the newly created loggers.ForecastingTensorBoardLogger.
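
For illustration only, here is a minimal sketch of what such an adapter hierarchy could look like. This is hypothetical code, not the PR implementation; the method signatures are assumptions, and it assumes the PyTorch Lightning API of the time, where the logger base class is pytorch_lightning.loggers.LightningLoggerBase:

# Hypothetical sketch of the adapter pattern described above, not the actual PR code.
from pytorch_lightning.loggers import LightningLoggerBase, TensorBoardLogger


class ForecastingLoggerBase(LightningLoggerBase):
    """Common logging interface used across PyTorch Forecasting, regardless of backend."""

    def add_figure(self, tag, figure, global_step=None):
        raise NotImplementedError

    def add_embeddings(self, tag, embeddings, metadata=None, global_step=None):
        raise NotImplementedError

    def log_gradient_flow(self, named_parameters):
        raise NotImplementedError


class ForecastingTensorBoardLogger(ForecastingLoggerBase, TensorBoardLogger):
    """TensorBoard backend: delegate to the underlying torch.utils.tensorboard.SummaryWriter."""

    def add_figure(self, tag, figure, global_step=None):
        # self.experiment is the SummaryWriter managed by the Lightning TensorBoardLogger
        self.experiment.add_figure(tag, figure, global_step=global_step)

    def add_embeddings(self, tag, embeddings, metadata=None, global_step=None):
        self.experiment.add_embedding(embeddings, metadata=metadata, tag=tag, global_step=global_step)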

As for the Wandb logger, loggers.ForecastingWandbLogger, I was wondering if someone could guide me on how to structure its tests.

Moreover, notice that the ForecastingWandbLogger's log_gradient_flow method is empty. This is because Wandb logs gradient distributions automatically; it is sufficient to call logger.watch(model) upon initialization, as in the example below:

from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.loggers import ForecastingWandbLogger

model = TemporalFusionTransformer(...)  # model arguments omitted
logger = ForecastingWandbLogger(...)  # Wandb project/run arguments omitted
logger.watch(model)  # lets Wandb track gradients (and parameters) automatically
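
From there, the logger would presumably be passed to the PyTorch Lightning Trainer as usual, continuing the snippet above (a sketch, not part of the PR):

import pytorch_lightning as pl

trainer = pl.Trainer(logger=logger)  # the adapter plugs in like any Lightning logger
# ... then call trainer.fit(...) as usual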

Finally, I noticed that the tests do not all pass, even on the master branch, unless I move p.grad to the CPU inside the log_gradient_flow method of BaseModel. To emphasize: this is already the case on the master branch.
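
For clarity, the kind of change meant here is sketched below. The helper name average_gradients is hypothetical and the body is simplified; it only illustrates moving gradients to the CPU before aggregating them:

# Hypothetical, simplified sketch of the gradient aggregation referred to above.
def average_gradients(named_parameters):
    ave_grads = {}
    for name, p in named_parameters:
        if p.requires_grad and p.grad is not None and "bias" not in name:
            # Moving the gradient to the CPU first avoids device mismatches
            # when the values are later plotted or logged on the host.
            ave_grads[name] = p.grad.detach().cpu().abs().mean()
    return ave_grads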

Checklist

  • Linked issues (if existing)
  • Amended changelog for large changes (and added myself there as contributor)
  • Added/modified tests
  • Used pre-commit hooks when committing to ensure that code is compliant with hooks. Install hooks with pre-commit install.
    To run hooks independent of commit, execute pre-commit run --all-files

@codecov-commenter commented Jan 3, 2022

Codecov Report

Merging #816 (c6b98ec) into master (34bc7fc) will decrease coverage by 0.08%.
The diff coverage is 87.83%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #816      +/-   ##
==========================================
- Coverage   89.05%   88.97%   -0.09%     
==========================================
  Files          24       28       +4     
  Lines        3829     3881      +52     
==========================================
+ Hits         3410     3453      +43     
- Misses        419      428       +9     
Flag Coverage Δ
cpu 88.97% <87.83%> (-0.09%) ⬇️
pytest 88.97% <87.83%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...torch_forecasting/loggers/wandb_logger/__init__.py 66.66% <66.66%> (ø)
pytorch_forecasting/loggers/base_logger.py 78.57% <78.57%> (ø)
pytorch_forecasting/loggers/__init__.py 100.00% <100.00%> (ø)
...forecasting/loggers/tensorboard_logger/__init__.py 100.00% <100.00%> (ø)
pytorch_forecasting/models/base_model.py 87.89% <100.00%> (-0.23%) ⬇️
pytorch_forecasting/models/nbeats/__init__.py 93.57% <100.00%> (ø)
...ing/models/temporal_fusion_transformer/__init__.py 97.29% <100.00%> (-0.03%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@jdb78 (Collaborator) commented Jan 5, 2022

I think we might also have to amend the tutorial notebooks.

@mikcnt (Contributor, Author) commented Jan 5, 2022

Yep, definitely. I'll give them a look tonight.

Do you also have any tips on how to structure tests for the logger adapter?

@review-notebook-app commented

Check out this pull request on ReviewNB to see visual diffs and provide feedback on the Jupyter Notebooks.

@mikcnt (Contributor, Author) commented Jan 5, 2022

I've updated the stallion notebook so that it uses the ForecastingTensorBoardLogger instead of the standard TensorBoardLogger.
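
Roughly, the change in the notebook is this kind of swap (a sketch; the save_dir argument is a placeholder mirroring the standard TensorBoardLogger constructor, not necessarily the exact notebook cell):

# Before: the stock PyTorch Lightning logger
# from pytorch_lightning.loggers import TensorBoardLogger
# logger = TensorBoardLogger("lightning_logs")

# After: the new Forecasting adapter
from pytorch_forecasting.loggers import ForecastingTensorBoardLogger
logger = ForecastingTensorBoardLogger("lightning_logs")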

@mikcnt (Contributor, Author) commented Jan 22, 2022

Any news on this?

@B-Deforce (Contributor) commented

There actually turns out to be a nice workaround for this.
You can add wandb.init(project='my-project', sync_tensorboard=True) followed by your PyTorch code and it'll nicely log everything to Wandb without issues.

For more info, see the Wandb documentation on syncing with TensorBoard.
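
Put together, the workaround looks roughly like this (a sketch; "my-project" and the save_dir are placeholders):

import wandb
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Mirror everything that goes to TensorBoard into Weights & Biases.
wandb.init(project="my-project", sync_tensorboard=True)

logger = TensorBoardLogger("lightning_logs")
trainer = Trainer(logger=logger)
# ... build the model/dataloaders and call trainer.fit(...) as usual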

@mikcnt (Contributor, Author) commented May 26, 2022

Hey @B-Deforce! That's actually pretty nice, thanks for the heads up; I didn't know about that! So, you're suggesting replacing the ForecastingWandbLogger with the standard logger used in PyTorch Forecasting, and adding the wandb initialization with sync to TensorBoard?

@B-Deforce (Contributor) commented

Yes @mikcnt, that's correct. I was actually also struggling to get Wandb running with PyTorch Forecasting and used your implementation above (which worked great, by the way), but then I found this neat little feature of wandb. Just using the standard logger (which is the TensorBoardLogger) together with the wandb init + sync works great.

@mikcnt (Contributor, Author) commented May 26, 2022

That's terrific! Actually, my original plan was to include a larger variety of loggers, not just TensorBoard and Wandb (e.g., MLflow as well), but since it looks like this won't be merged anytime soon, I would go with your solution too if I were in your position! 😆

Development

Successfully merging this pull request may close this issue: Expand example logging for multiple logging platforms.

4 participants