Releases: facebookresearch/fairscale

v0.1.1

02 Dec 04:51
867cc2d

Fixed

  • Make sure the pip package includes header files (#221)

v0.1.0

01 Dec 22:18
1db8bbd

Added

  • ShardedDataParallel with autoreduce (#157)
  • CPU support for Pipe (#188)
  • ShardedOptim: distributed grad scaler (for torch AMP) (#182)
  • OSS-aware gradient clipping; bridge sharded states (#167)
  • oss: add rank_local_state_dict staticmethod (#174)
  • Support for PyTorch 1.7.0 (#171)
  • Add an implementation of AdaScale (#139)

v0.0.3

27 Oct 22:05
1e6c547

Added

  • Multi-process Pipe (#90)

Fixed

  • OSS + apex fix (#136)
  • Megatron + OSS DDP fix (#121)

v0.0.2

27 Oct 21:59
4488e17

Added

  • Add a DDP wrapper that works with OSS using reduce() rather than all_reduce() (#19)
  • Support for PyTorch v1.6
  • Add mixed-precision Adam (#40)
  • Adam optimizer state scaling (#44)

Fixed

  • Properly restore a sharded optimizer state (#39)
  • OSS: restore state to the proper device (#46)
  • optim/oss: support optimizers with additional step() kwargs (#53)
  • optim/oss: fix state cast (#56)
  • Fix eval for oss_ddp (#55)
  • optim/oss: work correctly with LRScheduler (#58)

v0.0.1

31 Jul 23:08
291da1e
  • Initial release.