According to the scikit-learn documentation, the roc_auc_score function takes target probability scores, e.g. from estimator.predict_proba(X)[:, 1]. However, in Supervised.py, roc_auc_score is given binary predictions. This changes the output of roc_auc_score. Is there a specific reason for this, or is it a bug?
In Supervised.py:

y_pred = pipe.predict(X_test)
...
roc_auc = roc_auc_score(y_test, y_pred)
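A minimal sketch of the difference, assuming a binary classification pipeline whose final estimator exposes predict_proba (the dataset, pipeline, and variable names below are illustrative, not taken from Supervised.py):

# Illustrative comparison: AUC from hard labels vs. probability scores.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)

# AUC computed from hard 0/1 predictions (what Supervised.py appears to do):
auc_from_labels = roc_auc_score(y_test, pipe.predict(X_test))

# AUC computed from positive-class probability scores (what the docs describe):
auc_from_probas = roc_auc_score(y_test, pipe.predict_proba(X_test)[:, 1])

print(auc_from_labels, auc_from_probas)  # the two values generally differ

With hard labels, the ROC curve degenerates to a single operating point, so the reported AUC is usually lower than the AUC computed from probability scores.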
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score