Contents
1. sklearn.metrics.f1_score
2. sklearn.metrics.precision_score
3. sklearn.metrics.recall_score
4. Accuracy, Precision, Recall, and F1-score formulas
5. The concepts of TP, FP, TN, and FN
Official documentation for sklearn.metrics.f1_score: sklearn.metrics.f1_score — scikit-learn 1.0.2 documentation
sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1,
average='binary', sample_weight=None, zero_division='warn')
重要參數(shù)說明:
y_true:一維數(shù)組,或標簽指示數(shù)組/稀疏矩陣 (真實值)
y_pred:一維數(shù)組,或標簽指示數(shù)組/稀疏矩陣 (預測值)
pos_label:str or int, default=1
? ? ? ? ? ? ? ? ? 報告是否average='binary'且數(shù)據(jù)為binary的類。如果數(shù)據(jù)是多類或多標簽的,這將? ? ? ? ? ? ? ? ? ? ? ? ? 被忽略;設(shè)置labels=[pos_label]和average != 'binary'將只報告該標簽的分數(shù)。
average:{‘micro’, ‘macro’, ‘samples’,’weighted’, ‘binary’} or None, default=’binary’
????????????????多類/多標簽目標時需要此參數(shù)。如果為None,則返回每個類的分數(shù)。否則,這決定了對數(shù)據(jù)進行平均的類型:
????????“binary”: 只報告由pos_label指定的類的結(jié)果。這只適用于目標(y_{true,pred})是二進制的情況。
????????“micro”: 通過計算總真陽性、假陰性和假陽性來全局計算指標。
????????“macro”: 計算每個標簽的度量,并找到它們的未加權(quán)平均值。這還沒有考慮到標簽的不平衡。
? ? ? ? ?“weighted”:? 計算每個標簽的指標,并根據(jù)支持找到它們的平均權(quán)重(每個標簽的真實實例數(shù))。這改變了“宏觀”的標簽不平衡;它會導致一個不介于準確率和召回率之間的f值。
? ? ? ? ?“samples”:? 為每個實例計算指標,并找到它們的平均值(僅對與accuracy_score不同的多標簽分類有意義)。
sample_weight:array-like of shape (n_samples,), default=None
? ? ? ? ?? 樣本的權(quán)重
zero_division:“warn”, 0 or 1, default=”warn”
????????????????設(shè)置除法為零時返回的值,即所有預測和標簽為負數(shù)時返回。如果設(shè)置為" warn ",這將充當0,但也會引發(fā)警告。
Returns:
f1_score: float or array of float, shape = [n_unique_labels]
        The F1 score of the positive class in binary classification, or, for multiclass tasks, the average of the F1 scores of each class under the chosen weighting.
Example:
from sklearn.metrics import f1_score
y_true = [0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 2, 1, 2]
macro_f1 = f1_score(y_true, y_pred, average='macro')
micro_f1 = f1_score(y_true, y_pred, average='micro')
weighted_f1 = f1_score(y_true, y_pred, average='weighted')
none_f1 = f1_score(y_true, y_pred, average=None)
print('macro_f1:', macro_f1, '\nmicro_f1:', micro_f1,
      '\nweighted_f1:', weighted_f1, '\nNone_f1:', none_f1)
輸出結(jié)果:
macro_f1: 0.7222222222222222
micro_f1: 0.6666666666666666
weighted_f1: 0.6666666666666666
None_f1: [1. 0.66666667 0.5 ]
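The example above uses a multiclass target, so average must be set. For a binary target the default average='binary' applies, and pos_label selects which class the score describes (a small sketch with made-up data):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1]

# Default average='binary': F1 of the class selected by pos_label (1 by default).
print(f1_score(y_true, y_pred))               # 2*2/(2*2+1+1) = 0.666...
# Treat class 0 as the positive class instead.
print(f1_score(y_true, y_pred, pos_label=0))  # 2*1/(2*1+1+1) = 0.5
```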
Official documentation for sklearn.metrics.precision_score:
sklearn.metrics.precision_score — scikit-learn 1.1.1 documentation
sklearn.metrics.precision_score(y_true, y_pred, *, labels=None, pos_label=1,
average='binary', sample_weight=None, zero_division='warn')
重要參數(shù)意義與f1-score類似
代碼實例:
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='macro')
0.22...
>>> precision_score(y_true, y_pred, average='micro')
0.33...
>>> precision_score(y_true, y_pred, average='weighted')
0.22...
>>> precision_score(y_true, y_pred, average=None)
array([0.66..., 0. , 0. ])
>>> y_pred = [0, 0, 0, 0, 0, 0]
>>> precision_score(y_true, y_pred, average=None)
array([0.33..., 0. , 0. ])
>>> precision_score(y_true, y_pred, average=None, zero_division=1)
array([0.33..., 1. , 1. ])
>>> # multilabel classification
>>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
>>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
>>> precision_score(y_true, y_pred, average=None)
array([0.5, 1. , 1. ])
Official documentation for sklearn.metrics.recall_score:
sklearn.metrics.recall_score — scikit-learn 1.1.1 documentation
sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1,
average='binary', sample_weight=None, zero_division='warn')
The key parameters have the same meaning as for f1_score.
Code example:
>>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([1., 0., 0.])
>>> y_true = [0, 0, 0, 0, 0, 0]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 0. , 0. ])
>>> recall_score(y_true, y_pred, average=None, zero_division=1)
array([0.5, 1. , 1. ])
>>> # multilabel classification
>>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
>>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
>>> recall_score(y_true, y_pred, average=None)
array([1. , 1. , 0.5])
Accuracy, Precision, Recall, and F1-score formulas:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 * Precision * Recall / (Precision + Recall)
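The formulas above can be verified against sklearn directly. A minimal sketch with a small binary example, counting the four outcomes by hand:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1]

# Count the four outcomes for the positive class (1).
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 3
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # 2
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

accuracy = (tp + tn) / (tp + fp + tn + fn)          # 5/7
precision = tp / (tp + fp)                          # 3/4
recall = tp / (tp + fn)                             # 3/4
f1 = 2 * precision * recall / (precision + recall)  # 3/4

# Each hand-computed value matches the corresponding sklearn function.
print(abs(accuracy - accuracy_score(y_true, y_pred)) < 1e-9)    # True
print(abs(precision - precision_score(y_true, y_pred)) < 1e-9)  # True
print(abs(recall - recall_score(y_true, y_pred)) < 1e-9)        # True
print(abs(f1 - f1_score(y_true, y_pred)) < 1e-9)                # True
```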
The concepts of TP, FP, TN, and FN:
TP (True Positive): predicted positive, and the prediction is correct (the true label is positive)
FP (False Positive): predicted positive, but the prediction is wrong (the true label is negative)
TN (True Negative): predicted negative, and the prediction is correct (the true label is negative)
FN (False Negative): predicted negative, but the prediction is wrong (the true label is positive)
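In sklearn these four counts can be read off confusion_matrix. For a binary problem with labels sorted as [0, 1], the matrix is laid out as [[TN, FP], [FN, TP]] (rows are true labels, columns are predictions), so ravel() yields the counts in that order:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 2 1 1 3
```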