CNN_SpeechEmotion.ipynb works in Colab, but I can't run it locally with Python (in PyCharm). Why is this?
The result I got on Colab:

```
11/11 [==============================] - 0s 6ms/step - loss: 0.4264 - accuracy: 0.8207
Restored model, accuracy: 82.07%
```
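In case the cause is a version mismatch between Colab and my local environment, this is how I compare the installed versions on both sides (librosa is just my guess at the notebook's audio dependency; swap in whatever the notebook actually imports):

```python
# Print the versions of the packages the notebook depends on,
# to compare the local PyCharm environment against Colab.
import sys
import numpy as np
import tensorflow as tf
import librosa  # assumed dependency for audio feature extraction

print("Python:", sys.version)
print("NumPy:", np.__version__)
print("TensorFlow:", tf.__version__)
print("librosa:", librosa.__version__)
```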
I also have a few questions about the code; I would be very glad if you could help.
1) Although 4 classes ("angry", "sad", "neutral", "happy") are used in the project, the dense layer has 8 neurons. Shouldn't the dense layer have the same number of neurons as the number of classes?
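For reference, this is how I would expect the output layer to look for 4 classes. Everything except the final layer is a placeholder (the layer sizes, input shape, and loss are my assumptions, not the project's actual architecture):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense

# Placeholder architecture; only the final layer is the point here.
model = Sequential([
    Conv1D(64, kernel_size=5, activation='relu', input_shape=(216, 1)),
    Flatten(),
    Dense(4, activation='softmax'),  # one neuron per class: angry, sad, neutral, happy
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # assumes one-hot labels
              metrics=['accuracy'])
model.summary()
```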
2) Some elements of x_traincnn used to train the model are 0 (especially after index 150). In an earlier question it was said that averages should be used instead of the 0s. Should the average be taken per row or per column? (time / attribute)
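To make the question concrete, here is a small sketch of both options on a toy array (the real x_traincnn layout is my assumption: rows = samples, columns = features/time steps):

```python
import numpy as np

# Toy stand-in for the feature matrix.
X = np.array([[1.0, 0.0, 3.0],
              [4.0, 5.0, 0.0]])

# Treat zeros as missing so they don't drag the mean down.
masked = np.where(X != 0, X, np.nan)

# Option A: fill zeros with the mean of each column (axis=0, per attribute).
X_col_filled = np.where(X == 0, np.nanmean(masked, axis=0), X)

# Option B: fill zeros with the mean of each row (axis=1, per time step).
X_row_filled = np.where(X == 0, np.nanmean(masked, axis=1, keepdims=True), X)

print(X_col_filled)
print(X_row_filled)
```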
3) Can you share the X_train, X_test, y_train, y_test data that you prepared for training and testing the model? I would like to compare it with my data.
```python
np.save('X_train', X_train)
np.save('X_test', X_test)
np.save('y_train', y_train)
np.save('y_test', y_test)
```
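And this is how I would load the shared arrays back to compare them with mine (np.save appends the .npy extension automatically):

```python
import numpy as np

X_train_ref = np.load('X_train.npy')
X_test_ref = np.load('X_test.npy')
y_train_ref = np.load('y_train.npy')
y_test_ref = np.load('y_test.npy')

# Compare shapes first, then the actual values.
print(X_train_ref.shape, X_test_ref.shape, y_train_ref.shape, y_test_ref.shape)
```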