Releases · explosion/thinc
v8.0.14: New activation functions, bug fixes and more
✨ New features and improvements
- Add new activation functions: `ClippedLinear.v1`, `Gelu.v1`, `HardSigmoid.v1`, `HardSwish.v1`, `HardSwishMobilenet.v1`, `HardTanh.v1`, `ReluK.v1`, and `Swish.v1`.
- Automatically set the GPU allocator to PyTorch when PyTorch models are loaded through `PyTorchWrapper` on GPU, to avoid memory contention between CuPy and PyTorch.
- Support big-endian platforms through `thinc-bigendian-ops` and consistently serialize model data with little-endian byte order.
- Add `Softmax.v2` with support for softmax with temperature and optional normalization (see the sketch after this list).
- Add `CategoricalCrossentropy.v3` and `SequenceCategoricalCrossentropy.v3` with support for label smoothing.
- Speed up `CupyOps.maxout` by better exploiting GPU parallelism.
- Support sequence lengths in the `NumpyOps.seq2col` and `CupyOps.seq2col` implementations of `Ops.seq2col` to determine padding.
- Improve performance of `Ragged`.
- Support `Ragged` arrays in `expand_window.v1`.
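For illustration, a minimal sketch of composing layers from this release, assuming the registered names `Gelu.v1` and `Softmax.v2` map to `Gelu` and `Softmax_v2` in `thinc.api` and that `temperature`/`normalize_outputs` are the relevant keyword arguments; the layer sizes are arbitrary.

```python
# A minimal sketch: a small model using layers from this release.
from thinc.api import chain, Gelu, Softmax_v2

model = chain(
    # Gelu.v1, one of the activation layers added here.
    Gelu(nO=64),
    # Softmax.v2 with a temperature; normalize_outputs controls whether
    # the tempered outputs are renormalized to probabilities.
    Softmax_v2(nO=10, temperature=2.0, normalize_outputs=True),
)
```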
🔴 Bug fixes
- Fix issue #552: Do not backpropagate `Inf`/`NaN` out of PyTorch layers when using mixed-precision training.
- Fix issue #578: Correctly cast the threshold argument of `CupyOps.mish` and correct an equation in `Ops.backprop_mish`.
- Fix issue #587: Correct invariant checks in `CategoricalCrossentropy.get_grad`.
- Fix issue #592: Update `murmurhash` requirement.
- Fix issue #594: Do not sort positional arguments in `Config`.
⚠️ Backwards incompatibilities
- The `out` keyword argument of `Ops.mish` and `Ops.backprop_mish` is replaced by `inplace` for consistency with other activations, as sketched below.
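A minimal sketch of the renamed argument on `NumpyOps`; the input array is arbitrary and `threshold` keeps its default.

```python
import numpy
from thinc.api import NumpyOps

ops = NumpyOps()
X = numpy.random.uniform(-2.0, 2.0, (4, 8)).astype("f")

Y = ops.mish(X)            # allocates and returns a new output array
ops.mish(X, inplace=True)  # overwrites X in place instead
```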
📖 Documentation and examples
- Update example Jupyter notebooks for the current Thinc version.
👥 Contributors
@adrianeboyd, @andrewsi-z, @danieldk, @honnibal, @ines, @Jette16, @kadarakos, @kianmeng, @polm, @svlandeg, @thatbudakguy
v8.0.12: Bug fixes for set_ops and use_ops
v8.0.11: Improved GPU training time
✨ New features and improvements
- Speed up GPU training time by up to ~25% by using cuBLAS for computing Frobenius norms in gradient clipping.
- Give preference to `AppleOps` (if available) when calling `get_ops("cpu")`.
- Support missing values in `CategoricalCrossEntropy` when the labels are integers.
- Provide the option to run `model.walk` with depth-first traversal (see the sketch after this list).
- Wrap the `forward`/`init` callbacks of a `Model` in `with_debug` and `with_nvtx_range` to facilitate recursively instrumenting models.
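A minimal sketch of both features on a toy model; `"dfs_pre"` is one of the documented traversal orders for `Model.walk`.

```python
from thinc.api import chain, Relu, Softmax, with_debug

model = chain(Relu(nO=32), Softmax())

# Walk the graph depth-first (pre-order) instead of the default
# breadth-first order.
for node in model.walk(order="dfs_pre"):
    print(node.name)

# with_debug wraps the model's forward/init callbacks so nested calls
# can be traced while running the model.
debugged = with_debug(model, name="debug-model")
```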
🔴 Bug fixes
- Fix issue #537: Fix `replace_node` on nodes with indirect node refs.
👥 Contributors
v8.0.10: Bug fix for get_array_ops
v8.0.9: Support for NVTX ranges and mypy plugin fixes
✨ New features and improvements
- Add `ops` registry.
- Enable config overrides to add new keys.
- Allow newer releases of `nbconvert` and `nbformat`.
- Add a layer for marking NVTX ranges (see the sketch after this list).
- Support mixed-precision training in the PyTorch shim (experimental).
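A minimal sketch of the NVTX layer, assuming it is the `with_nvtx_range` wrapper referenced in the v8.0.11 notes above; the wrapped layer is arbitrary.

```python
from thinc.api import Relu, with_nvtx_range

# Forward and backward passes of the wrapped layer show up as an NVTX
# range in NVIDIA profilers such as Nsight Systems.
layer = with_nvtx_range(Relu(nO=64))
```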
🔴 Bug fixes
- Fix issue #521: Fix `numpy_ops` `gemm` output.
- Fix issue #525: Fix `mypy` plugin crash on variadic arguments.
👥 Contributors
@adrianeboyd, @connorbrinton, @danieldk, @honnibal, @ines, @svlandeg
v8.0.8: CategoricalCrossentropy allows negated values
✨ New features and improvements
- Allow negated values in `CategoricalCrossentropy`.
v8.0.7: Bug fixes for n-grams and typing
v8.0.6: Bug fix for backprop_reduce_max GPU kernel
🔴 Bug fixes
- Fix the `backprop_reduce_max` GPU kernel.
v8.0.5: Updates for torch v1.9.0
✨ New features and improvements
- Update to support torch v1.9.0.
v8.0.4: New tuplify and resizable layers, and some bug fixes
✨ New features and improvements
- Add `tuplify` layer (see the sketch after this list).
- More generic implementation of the `concatenate` layer.
- Add `resizable` layer.
- Introduce a `force` parameter for `model.set_dim()`.
- Improve UX when setting the GPU allocator.
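A minimal sketch of `tuplify` and the `force` flag; the dimensions are arbitrary.

```python
from thinc.api import Linear, tuplify

# tuplify feeds the same input to each child layer and returns a tuple
# of their outputs.
model = tuplify(Linear(nO=8), Linear(nO=8))

# set_dim normally refuses to overwrite a dimension that is already set;
# force=True permits resizing it.
linear = Linear(nO=4)
linear.set_dim("nO", 8, force=True)
```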
🔴 Bug fixes
- Fix issue #492: Fix backpropagation in `with_getitem`.
- Fix issue #494: Resolve forward refs issue with Pydantic.
- Fix issue #496: Avoid Pydantic versions with security vulnerabilities.
👥 Contributors
@adrianeboyd, @honnibal, @ines, @Kludex, @polm, @svlandeg, @thomashacker