Updating Hyperparameters

Josh Kruse edited this page May 7, 2020 · 1 revision

In the second cell of each notebook you will find a dictionary titled hyperparameter_defaults. It contains some of the most important hyperparameters. (There are more hyperparameters you can edit in the layer-structure section of the notebook, but these are the most important.)
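The dictionary might look like the following sketch. The key names come from this page, but the values shown here are placeholders, not the notebook's shipped defaults (except max_len, whose 1-second value of 30 is given below):

```python
# Hypothetical sketch of the hyperparameter_defaults dictionary.
# Key names match this page; values are illustrative placeholders.
hyperparameter_defaults = {
    "max_len": 30,        # MFCC frames per clip (30 for 1-second files)
    "buckets": 20,        # number of MFCC coefficients to return
    "epochs": 20,         # full passes through the training set
    "batch_size": 64,     # samples per gradient update
    "layer_one": 128,     # hidden-unit sizes for the four layers
    "layer_two": 64,
    "layer_three": 32,
    "layer_four": 16,
    "dropout_one": 0.25,  # dropout rates at two points in the network
    "dropout_two": 0.25,
    "sampler": "none",    # one of: "none", "over", "under", "smote"
}
```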

  • max_len : Sets the length (in frames) of each training sound clip’s MFCC output. 30 is the value used for 1-second training files.

  • buckets : Sets the number of MFCC coefficients to return. Smaller values lower the complexity of the training samples.
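Together, buckets and max_len fix the shape of every training sample: buckets coefficients by max_len frames. A minimal NumPy sketch of the padding/truncation step (the function name and workflow here are illustrative, not taken from the notebook):

```python
import numpy as np

def fix_length(mfcc, max_len=30):
    """Pad with zeros or truncate an MFCC array of shape
    (buckets, frames) so its frame dimension is exactly max_len."""
    buckets, frames = mfcc.shape
    if frames < max_len:
        # Zero-pad short clips on the right (time axis only).
        mfcc = np.pad(mfcc, ((0, 0), (0, max_len - frames)), mode="constant")
    # Truncate clips that are longer than max_len frames.
    return mfcc[:, :max_len]

# e.g. a 20-coefficient MFCC from a short clip (12 frames)
short = np.random.randn(20, 12)
print(fix_length(short).shape)  # (20, 30)
```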

  • epochs : Sets the number of full passes through the training set. When retraining the model, set this to a higher value (>20), find which epoch returns the smallest validation loss, then run the model again with epochs set to that number.
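The epoch-selection step above can be sketched as follows, assuming the training run records a per-epoch validation-loss list (Keras-style history); the values here are made up for illustration:

```python
# Hypothetical val_loss values from a >20-epoch retraining run
# (shortened here for readability).
val_loss = [0.92, 0.61, 0.45, 0.38, 0.35, 0.36, 0.34, 0.37, 0.40, 0.42]

# Epochs are 1-indexed, so add 1 to the position of the minimum.
best_epoch = val_loss.index(min(val_loss)) + 1
print(best_epoch)  # 7 -- rerun the model with epochs set to this value
```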

  • batch_size : Number of training samples processed at each gradient update within an epoch.

  • layer_one - layer_four : Hidden unit size at each Neural Network layer

  • dropout_one, dropout_two : Fraction of units randomly dropped, during training, at this point in the layer structure (a regularization technique that helps prevent overfitting).
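Dropout acts on a layer's activations, not on the dataset itself. A minimal NumPy sketch of what a rate of 0.25 does during training (this illustrates the standard "inverted dropout" convention, not the notebook's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = np.ones(1000)   # stand-in for one layer's outputs
dropout_rate = 0.25           # e.g. dropout_one

# Each unit is independently zeroed with probability dropout_rate;
# survivors are scaled by 1/(1 - rate) so the expected sum of the
# layer's output is unchanged.
mask = rng.random(activations.shape) >= dropout_rate
dropped = activations * mask / (1 - dropout_rate)

print(round(mask.mean(), 2))  # roughly 0.75 of units survive
```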

  • sampler : Type of sampler to use. Useful when classes are imbalanced.

    • none : no sampler is used
    • over : randomly selects training samples from each minority class and adds them to the list again, until that class reaches the size of the largest class
    • under : randomly removes training samples from each majority class until that class reaches the size of the smallest class
    • smote : a random example from the minority class is chosen and its nearest neighbors are found; a randomly selected neighbor is then used to create a synthetic example at a randomly selected point between the two examples in feature space
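The over and under options can be illustrated with a small pure-Python sketch (the notebook may rely on a library such as imbalanced-learn for these; this only shows the idea, with made-up class names):

```python
import random

random.seed(0)
# Class label -> list of training samples (just ids here).
classes = {"dog": list(range(10)), "cat": list(range(3))}

def oversample(classes):
    """'over': re-add random minority samples until every class
    matches the size of the largest class."""
    target = max(len(v) for v in classes.values())
    return {c: v + random.choices(v, k=target - len(v))
            for c, v in classes.items()}

def undersample(classes):
    """'under': randomly drop majority samples until every class
    matches the size of the smallest class."""
    target = min(len(v) for v in classes.values())
    return {c: random.sample(v, target) for c, v in classes.items()}

print({c: len(v) for c, v in oversample(classes).items()})   # {'dog': 10, 'cat': 10}
print({c: len(v) for c, v in undersample(classes).items()})  # {'dog': 3, 'cat': 3}
```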