v0.4.2
Major updates:
Calibration changes due to RNN layers:
- The calibration data is no longer cut to 100 samples automatically.
- The data is no longer shuffled for the user.
- Users must trim the calibration set to the desired size (e.g. 100 samples) and shuffle it themselves when needed, as shown in the sketch below.
- In min-max quantisation, values that are only marginally out of range (e.g. 1.00001) are now allowed to saturate instead of forcing a wider integer range.
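Below is a minimal Python sketch of the new calibration workflow, assuming a Keras `model` and `x_test` from your own training script and the generate_model() converter from NNoM's scripts/nnom.py; it also shows, with plain arithmetic, why a marginal maximum such as 1.00001 can now simply saturate.

```python
# A minimal sketch (not the exact NNoM code) of preparing calibration data
# after v0.4.2. `model`, `x_test` and generate_model() (from NNoM's
# scripts/nnom.py) are assumed to come from your own conversion script.
import numpy as np

def prepare_calibration(x_test, n_samples=100, seed=42):
    """Shuffle the test set and keep a small calibration slice.
    NNoM no longer truncates to 100 samples or shuffles for you."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_test))[:n_samples]
    return x_test[idx]

# calib = prepare_calibration(x_test, n_samples=100)
# generate_model(model, calib, name='weights.h')

# Why saturation helps: with max|x| = 1.00001, staying in Q0.7 and letting
# the outlier clip keeps full resolution, instead of dropping to Q1.6 just
# because the maximum is marginally above 1.0.
x_max = 1.00001
q0_7 = int(np.clip(round(x_max * 2**7), -128, 127))  # -> 127 (saturated)
q1_6 = int(np.clip(round(x_max * 2**6), -128, 127))  # -> 64 (half the resolution)
```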
Support for models with multiple outputs:
- Currently, the output data buffers are named nnom_output_data[], nnom_output_data1[], nnom_output_data2[], ... (see the sketch below).
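As a hedged illustration (the layer sizes and head names below are made up, and generate_model() from scripts/nnom.py is assumed), a multi-output model is just a normal Keras functional model with a list of outputs:

```python
# Illustrative Keras model with two outputs; the heads and shapes are made up.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(8, kernel_size=3, activation='relu')(inputs)
x = layers.Flatten()(x)
out_a = layers.Dense(10, name='class_head')(x)  # first output
out_b = layers.Dense(1, name='aux_head')(x)     # second output
model = keras.Model(inputs, [out_a, out_b])

# generate_model(model, calib, name='weights.h')
# On the C side, the generated outputs appear as nnom_output_data[],
# nnom_output_data1[], nnom_output_data2[], ... as noted above.
```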
Added an RNNoise-like voice enhancement example:
- Well documented, with a demo.
Depthwise Conv layers now support the depth_multiplier argument:
- Simply set it in Keras, as in the sketch below.
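For example (a plain Keras snippet; the shapes here are illustrative only):

```python
# DepthwiseConv2D with depth_multiplier in Keras; NNoM can now convert this.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 16)),
    # Each of the 16 input channels gets 2 filters -> 32 output channels.
    layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=2, padding='same'),
    layers.ReLU(),
])
model.summary()
```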
Bugs fixed:
- Conv2D 1x1 with strides != 1 not working correctly with CMSIS-NN (#84).
- RNN layers not passing the correct Q format to the next layer.
- Deleting a model causing a segmentation fault.
- Model compilation getting stuck with multiple-output models.
- Depthwise Conv and Conv calculating an incorrect kernel range near image borders when padding is used.
Minor changes:
- Fixed hard sigmoid; fixed compile warnings with multiple outputs.
- Updated the KWS example's MFCC C code to align with the Python MFCC; accuracy should improve significantly.
- Improved performance of the local backend (depthwise Conv).