sklearn.metrics.classification_report
2017-07-10 22:35
Classification report: sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2) prints the main classification metrics, reporting the precision, recall and F1 score for each class label.
Key parameters:
labels: list of the class labels to include in the report
target_names: display names corresponding to labels, in the same order
digits: number of decimal digits used when formatting the output values
precision = TP / (TP + FP): the fraction of samples predicted as a class that actually belong to it
recall = TP / (TP + FN): the fraction of samples of a class that are correctly predicted
F1 = 2 * precision * recall / (precision + recall)
In [4]: from sklearn.metrics import classification_report
   ...: y_true = [1, 2, 3, 3, 3]
   ...: y_pred = [1, 1, 3, 3, 2]
   ...: labels = [1, 3, 2]
   ...: target_names = ['labels_1', 'labels_2', 'labels_3']
   ...: print(classification_report(y_true, y_pred, labels=labels,
   ...:                             target_names=target_names, digits=3))
   ...:
precision recall f1-score support
labels_1 0.500 1.000 0.667 1
labels_2 1.000 0.667 0.800 3
labels_3 0.000 0.000 0.000 1
avg / total 0.700 0.600 0.613 5
The last row (avg / total) is the support-weighted average of each column.
Note: in binary classification, the true positive rate is also called sensitivity, and the true negative rate is also called specificity.
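The per-class numbers in the report can be checked by hand from the formulas above. A minimal pure-Python sketch (no sklearn needed; the helper name per_class_metrics is my own) that recomputes precision, recall and F1 for each class in the example:

```python
# Hand-compute precision, recall and F1 per class to verify the report above.
y_true = [1, 2, 3, 3, 3]
y_pred = [1, 1, 3, 3, 2]

def per_class_metrics(y_true, y_pred, label):
    # Treat `label` as the positive class and count TP / FP / FN.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Same label order as labels=[1, 3, 2] in the report.
for label in (1, 3, 2):
    p, r, f = per_class_metrics(y_true, y_pred, label)
    print(f"class {label}: precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```

This reproduces the three rows of the table (0.500/1.000/0.667 for class 1, 1.000/0.667/0.800 for class 3, and 0.000/0.000/0.000 for class 2); weighting each column by the class support (1, 3, 1) gives the avg / total row.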