Releases: FluxML/NNlib.jl
v0.7.13
v0.7.12
NNlib v0.7.11
Closed issues:
- ∇softmax modifies input argument (#264)
Merged pull requests:
- add bilinear upsampling (#262) (@CarloLucibello)
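Issue #264 reported that ∇softmax mutated its input argument. The intended behaviour, computing the vector-Jacobian product from the forward output into a freshly allocated result, can be sketched as follows (illustrative Python; the names `softmax` and `grad_softmax` are not NNlib's Julia API):

```python
import math

def softmax(x):
    """Numerically stable softmax over a list of floats."""
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def grad_softmax(dy, y):
    """Backward pass for softmax: given the upstream gradient dy and the
    forward output y, return y .* (dy .- sum(dy .* y)) as a new list,
    leaving both inputs untouched."""
    dot = sum(d * v for d, v in zip(dy, y))
    return [v * (d - dot) for d, v in zip(dy, y)]

x = [1.0, 2.0, 3.0]
y = softmax(x)
dx = grad_softmax([1.0, 1.0, 1.0], y)  # gradient of sum(softmax(x))
```

With an all-ones upstream gradient, `dx` is (numerically) zero, since softmax outputs always sum to one.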
NNlib v0.7.10
Closed issues:
- Integration with Batched.jl (#76)
- no method matching conv(::Array{Float32,4}, ::Array{Float32,4}; stride=(1, 1), pad=(0, 0), dilation=(1, 1)) (#106)
- Convolution for mixed-precision inputs (#107)
- Asymmetric padding fails on gpu models (#117)
- ambiguity in calling maxpool (#128)
- move adjoint definitions from Zygote (#219)
- softmax gradients inefficient, API change necessary (#248)
- @eval (#263)
Merged pull requests:
- port rule definitions to ChainRulesCore (#242) (@simeonschaub)
- new API grad softmax (#250) (@CarloLucibello)
- remove ZygoteRules (#257) (@CarloLucibello)
- Remove TODO about not defining broadcasted for ADs other than Zygote (#258) (@oxinabox)
- more extensive tests and some reorg (#259) (@CarloLucibello)
- add pixel_shuffle (#260) (@CarloLucibello)
- fix hardsigmoid and use float(x) instead of x/1 (#261) (@CarloLucibello)
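PR #260 adds pixel_shuffle, the sub-pixel rearrangement commonly used after a convolution to upsample feature maps. A minimal pure-Python sketch of the idea, assuming a channels-first (C·r², H, W) layout (NNlib's Julia implementation works on WHCN arrays, so its axis order differs):

```python
def pixel_shuffle(x, r):
    """Rearrange nested lists of shape (C*r*r, H, W) into (C, H*r, W*r):
    each group of r*r input channels becomes an r-by-r block of output
    pixels. Layout convention here is illustrative, not NNlib's."""
    cr2 = len(x)
    H, W = len(x[0]), len(x[0][0])
    C = cr2 // (r * r)
    out = [[[0.0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for i in range(r):
            for j in range(r):
                for h in range(H):
                    for w in range(W):
                        out[c][h * r + i][w * r + j] = x[c * r * r + i * r + j][h][w]
    return out
```

For example, four 1×1 channels with r = 2 collapse into one 2×2 channel.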
NNlib v0.7.9
Closed issues:
- softmax! and logsoftmax! implementations incomplete (#249)
NNlib v0.7.8
Closed issues:
- conv_bias_act! does not modify its output argument correctly (#243)
Merged pull requests:
- add logsumexp (#244) (@CarloLucibello)
- fix conv_bias_act (#245) (@CarloLucibello)
- github CI (#246) (@CarloLucibello)
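PR #244 adds logsumexp, which evaluates log(sum(exp(x))) without overflow by shifting by the maximum before exponentiating. A minimal sketch of the standard trick (illustrative Python, not NNlib's Julia implementation):

```python
import math

def logsumexp(x):
    """Compute log(sum(exp(x))) stably: subtract max(x) inside the
    exponential so no term overflows, then add it back outside the log."""
    m = max(x)
    return m + math.log(sum(math.exp(v - m) for v in x))
```

A naive `math.log(sum(math.exp(v) for v in x))` would overflow for inputs like `[1000.0, 1000.0]`, while the shifted form returns 1000 + log 2.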
v0.7.7
v0.7.6
NNlib v0.7.5
Closed issues:
- Conv Fallback Warning Hindering Intentional Work (With FixedPointNumbers) (#232)
NNlib v0.7.4
Closed issues:
- Hardware is unsupported by NNPACK so falling back to default NNlib (#225)
Merged pull requests:
- Revert LoopVectorization (#226) (@DhairyaLGandhi)
- requires for nnpack (#227) (@CarloLucibello)
- Bump version (#230) (@DhairyaLGandhi)