Artificial Neural Networks
The Perceptron
1. The perceptron is a linear classification model that assigns an input instance to one of two classes according to its feature vector $x$:
$$f(x)=\operatorname{sign}(w \cdot x+b)$$
The perceptron model corresponds to a separating hyperplane $w \cdot x+b=0$ in the input space (feature space).
2. The perceptron learning strategy is to minimize the loss function
$$\min _{w, b} L(w, b)=-\sum_{x_{i} \in M} y_{i}\left(w \cdot x_{i}+b\right)$$
where $M$ is the set of misclassified points. The loss function corresponds to the total distance from the misclassified points to the separating hyperplane.
3. The perceptron learning algorithm optimizes the loss function by stochastic gradient descent, and comes in a primal form and a dual form (a sketch of the dual form follows item 4 below). The algorithm is simple and easy to implement. In the primal form, one first picks an arbitrary hyperplane and then repeatedly minimizes the objective by gradient descent, at each step randomly selecting a single misclassified point and taking a gradient step on it.
4. When the training data set is linearly separable, the perceptron learning algorithm converges. The number of misclassifications $k$ made on the training set satisfies the (Novikoff) inequality
$$k \leqslant\left(\frac{R}{\gamma}\right)^{2}$$
where $R$ bounds the norm of the training instances and $\gamma$ is the margin of a separating hyperplane. When the training data set is linearly separable, the perceptron learning algorithm admits infinitely many solutions, which may differ depending on the initialization and on the order in which misclassified points are visited.
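Only the primal form is implemented later in this article. For completeness, here is a minimal sketch of the dual form, in which $w$ is represented as $w=\sum_i \alpha_i y_i x_i$ and the Gram matrix of inner products is precomputed; the function name and default arguments below are my own choices, not from the original:

import numpy as np

def perceptron_dual(X, y, eta=0.1, max_epochs=1000):
    # alpha[i] accumulates eta each time sample i triggers an update
    n = len(X)
    alpha, b = np.zeros(n), 0.0
    gram = X @ X.T                      # precomputed Gram matrix <x_j, x_i>
    for _ in range(max_epochs):
        updated = False
        for i in range(n):
            # w.x_i + b expressed through alpha: sum_j alpha_j * y_j * <x_j, x_i>
            if y[i] * (np.sum(alpha * y * gram[:, i]) + b) <= 0:
                alpha[i] += eta         # equivalent to w += eta*y_i*x_i in the primal
                b += eta * y[i]
                updated = True
        if not updated:                 # no misclassified points left: converged
            break
    w = (alpha * y) @ X                 # recover the primal weights for inspection
    return w, b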
The binary classification model
$$f(x)=\operatorname{sign}(w \cdot x+b)$$
$$\operatorname{sign}(x)=\begin{cases}+1, & x \geqslant 0 \\ -1, & x<0\end{cases}$$
Given a training set
$$T=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \cdots,\left(x_{N}, y_{N}\right)\right\}$$
define the perceptron loss function
$$L(w, b)=-\sum_{x_{i} \in M} y_{i}\left(w \cdot x_{i}+b\right)$$
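As a quick numerical illustration of the loss (with made-up points and weights under which both points are misclassified, so $M$ is the whole set):

import numpy as np

X_M = np.array([[1.0, 2.0], [2.0, 1.0]])   # the misclassified set M (made-up)
y_M = np.array([1, -1])
w, b = np.array([0.5, -0.5]), 0.0

margins = y_M * (X_M @ w + b)              # both <= 0, confirming misclassification
loss = -np.sum(margins)                    # L(w, b) = -sum_{x_i in M} y_i*(w.x_i + b)
print(margins, loss)                       # [-0.5 -0.5] 1.0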
Algorithm
Stochastic Gradient Descent (SGD)
Randomly select one misclassified point and take a gradient step on it:
$$w \leftarrow w+\eta y_{i} x_{i}$$
$$b \leftarrow b+\eta y_{i}$$
When an instance is misclassified, i.e. it lies on the wrong side of the separating hyperplane, $w$ and $b$ are adjusted so that the separating hyperplane moves toward that misclassified point, until the point is correctly classified.
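A minimal sketch of a single such update, with made-up values for $\eta$ and the sample point (not from the original article):

import numpy as np

w, b, eta = np.array([0.0, 0.0]), 0.0, 0.1   # initial hyperplane and learning rate
x_i, y_i = np.array([3.0, 3.0]), 1           # a training point with label +1

# with w = 0 the point satisfies y_i * (w.x_i + b) <= 0, i.e. it is misclassified
if y_i * (np.dot(w, x_i) + b) <= 0:
    w = w + eta * y_i * x_i                  # w <- w + eta * y_i * x_i
    b = b + eta * y_i                        # b <- b + eta * y_i
print(w, b)                                  # [0.3 0.3] 0.1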
We take the samples of two of the three classes in the iris dataset, using [sepal length, sepal width] as the features.
1. A hand-coded perceptron model
1.1 Loading the data
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
%matplotlib inline
# load data
iris = load_iris()
iris
{'data': array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       ...,
       [6.5, 3. , 5.2, 2. ],
       [6.2, 3.4, 5.4, 2.3],
       [5.9, 3. , 5.1, 1.8]]),
 'target': array([0, 0, 0, ..., 2, 2, 2]),
 'frame': None,
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': '.. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n ...',
 'feature_names': ['sepal length (cm)',
  'sepal width (cm)',
  'petal length (cm)',
  'petal width (cm)'],
 'filename': 'iris.csv',
 'data_module': 'sklearn.datasets.data'}
(output truncated: the full printout lists all 150 data rows, the complete target array, and the full dataset description)
# load data
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.head()
 | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | label
---|---|---|---|---|---
0 | 5.1 | 3.5 | 1.4 | 0.2 | 0
1 | 4.9 | 3.0 | 1.4 | 0.2 | 0
2 | 4.7 | 3.2 | 1.3 | 0.2 | 0
3 | 4.6 | 3.1 | 1.5 | 0.2 | 0
4 | 5.0 | 3.6 | 1.4 | 0.2 | 0
df.columns=["sepal length","sepal width","petal length","petal width","label"]
# count the distinct values in the label column and how many samples each has
df["label"].value_counts()
0 50
1 50
2 50
Name: label, dtype: int64
plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='0')
plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
data = np.array(df.iloc[:100, [0, 1, -1]])
X, y = data[:,:-1], data[:,-1]
data[:,-1]
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
y = np.array([1 if i == 1 else -1 for i in y])
y
array([-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
X[:5],y[:5]
(array([[5.1, 3.5],
[4.9, 3. ],
[4.7, 3.2],
[4.6, 3.1],
[5. , 3.6]]),
array([-1, -1, -1, -1, -1]))
Recall the SGD update rule:
$$w \leftarrow w+\eta y_{i} x_{i}, \qquad b \leftarrow b+\eta y_{i}$$
1.2 Building the perceptron model
y.shape
(100,)
class Perception_model:
    def __init__(self, n):
        self.w = np.zeros(n, dtype=np.float32)   # one weight per feature
        self.b = 0
        self.l_rate = 0.1                        # learning rate eta

    def sign(self, x):
        # returns the raw score w.x + b; its sign is the predicted class
        y = np.dot(x, self.w) + self.b
        return y

    def fit(self, X_train, y_train):
        # keep sweeping over the training set until no point is misclassified
        is_wrong = True
        while is_wrong:
            is_wrong = False
            for i in range(len(X_train)):
                # y_i * (w.x_i + b) <= 0 means point i is misclassified
                if y_train[i] * self.sign(X_train[i]) <= 0:
                    self.w = self.w + self.l_rate * np.dot(y_train[i], X_train[i])
                    self.b = self.b + self.l_rate * y_train[i]
                    is_wrong = True
1.3 Instantiating and training the model
model=Perception_model(X.shape[1])
model.fit(X,y)
1.4 Visualization
np.max(X[:,0]),np.min(X[:,0])
(7.0, 4.3)
X_fig=np.arange(int(np.min(X[:,0])),int(np.max(X[:,0])+1),0.5)
X_fig
array([4. , 4.5, 5. , 5.5, 6. , 6.5, 7. , 7.5])
# the separating line satisfies w[0]*x1 + w[1]*x2 + b = 0, so x2 = -(w[0]*x1 + b)/w[1]
y1=-(model.w[0]*X_fig+model.b)/model.w[1]
plt.plot(X_fig,y1,"r-+")
plt.scatter(X[:50,0],X[:50,1],label=0)
plt.scatter(X[50:100,0],X[50:100,1],label=1)
plt.show()
2. A perceptron implementation based on sklearn
2.1 The data is obtained in the same way as before
2.2 Importing the class
from sklearn.linear_model import Perceptron
2.3 Instantiating the perceptron
model=Perceptron(fit_intercept=True,max_iter=1000,shuffle=True)
2.4 Fitting the perceptron to the data
model.fit(X,y)
Perceptron()
model.coef_
array([[ 23.2, -38.7]])
model.intercept_
array([-5.])
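Before visualizing, a quick check of the fit (this call is not in the original notebook; sklearn classifiers inherit a score method that returns mean accuracy):

# mean accuracy of the fitted perceptron on its own training data;
# a value below 1.0 means some points are still misclassified (see the note below)
model.score(X, y)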
2.5 Visualization
# figure size
plt.figure(figsize=(6,4))
# font settings kept from the original notebook (SimHei enables CJK text;
# unicode_minus avoids broken minus signs with that font)
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
plt.title('Iris linear data example')
X_fig=np.arange(int(np.min(X[:,0])),int(np.max(X[:,0])+1),0.5)
# decision boundary: coef_[0][0]*x1 + coef_[0][1]*x2 + intercept_ = 0
y1=-(model.coef_[0][0]*X_fig+model.intercept_)/model.coef_[0][1]
plt.plot(X_fig,y1,"r-+")
plt.scatter(X[:50,0],X[:50,1],label=0)
plt.scatter(X[50:100,0],X[50:100,1],label=1)
plt.legend()       # show the legend
plt.grid(False)    # no grid
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.show()
Note!
In the figure above, one blue point in the lower-left corner is not classified correctly. This is because sklearn's Perceptron has a tol parameter: training stops early once the loss is no longer decreasing by more than tol from one iteration to the next (the default is tol=1e-3). We therefore set tol=None so that training does not stop early and keeps iterating:
model=Perceptron(fit_intercept=True,max_iter=1000,shuffle=True,tol=None)
model.fit(X,y)
Perceptron(tol=None)
# figure size
plt.figure(figsize=(6,4))
# font settings kept from the original notebook (SimHei enables CJK text;
# unicode_minus avoids broken minus signs with that font)
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
plt.title('Iris linear data example')
X_fig=np.arange(int(np.min(X[:,0])),int(np.max(X[:,0])+1),0.5)
# decision boundary: coef_[0][0]*x1 + coef_[0][1]*x2 + intercept_ = 0
y1=-(model.coef_[0][0]*X_fig+model.intercept_)/model.coef_[0][1]
plt.plot(X_fig,y1,"r-+")
plt.scatter(X[:50,0],X[:50,1],label=0)
plt.scatter(X[50:100,0],X[50:100,1],label=1)
plt.legend()       # show the legend
plt.grid(False)    # no grid
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.show()
Now we can see that the samples of both iris species are all classified correctly.
Experiment: split the data above into training and test sets, add a score method to the Perception_model class, and after training call score to report the accuracy on the test set.
1. Loading the data
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
%matplotlib inline
# load data
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns=["sepal length","sepal width","petal length","petal width","label"]
data = np.array(df.iloc[:100, [0, 1, -1]])
X, y = data[:,:-1], data[:,-1]
y = np.array([1 if i == 1 else -1 for i in y])
2. Splitting into training and test data
from sklearn.model_selection import train_test_split
Split into training and test data:
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2)
3. Defining the perceptron class
Define the score instance method below.
class Perception_model:
    def __init__(self, n):
        self.w = np.zeros(n, dtype=np.float32)   # one weight per feature
        self.b = 0
        self.l_rate = 0.1                        # learning rate eta

    def sign(self, x):
        # returns the raw score w.x + b; its sign is the predicted class
        y = np.dot(x, self.w) + self.b
        return y

    def fit(self, X_train, y_train):
        # keep sweeping over the training set until no point is misclassified
        is_wrong = True
        while is_wrong:
            is_wrong = False
            for i in range(len(X_train)):
                if y_train[i] * self.sign(X_train[i]) <= 0:
                    self.w = self.w + self.l_rate * np.dot(y_train[i], X_train[i])
                    self.b = self.b + self.l_rate * y_train[i]
                    is_wrong = True

    def score(self, X_test, y_test):
        # fraction of test points whose predicted sign matches the true label
        accuracy = 0
        for i in range(len(X_test)):
            if self.sign(X_test[i]) <= 0 and y_test[i] == -1:
                accuracy += 1
            if self.sign(X_test[i]) > 0 and y_test[i] == 1:
                accuracy += 1
        return accuracy / len(X_test)
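As an aside, an equivalent vectorized version of the same accuracy computation (a standalone helper sketch, not part of the original class):

import numpy as np

def score_vectorized(model, X_test, y_test):
    # predict +1 where w.x + b > 0, else -1, then compare with the true labels
    pred = np.where(X_test @ model.w + model.b > 0, 1, -1)
    return float(np.mean(pred == y_test))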
4. Instantiating and training the model
model_1=Perception_model(len(X_train[0]))
model_1.fit(X_train,y_train)
5. Testing the model
Call the score instance method:
model_1.score(X_test,y_test)
1.0
Appendix: articles in this series

Experiment | Topic | Link
---|---|---
1 | NumPy and visualization review | https://want595.blog.csdn.net/article/details/131891689
2 | Linear regression | https://want595.blog.csdn.net/article/details/131892463
3 | Logistic regression | https://want595.blog.csdn.net/article/details/131912053
4 | Multi-class classification (based on logistic regression) | https://want595.blog.csdn.net/article/details/131913690
5 | Machine learning in practice: manual hyperparameter tuning | https://want595.blog.csdn.net/article/details/131934812
6 | Bayesian inference | https://want595.blog.csdn.net/article/details/131947040
7 | K-nearest neighbors (KNN) | https://want595.blog.csdn.net/article/details/131947885
8 | K-means unsupervised clustering | https://want595.blog.csdn.net/article/details/131952371
9 | Decision trees | https://want595.blog.csdn.net/article/details/131991014
10 | Random forests and ensemble learning | https://want595.blog.csdn.net/article/details/132003451
11 | Support vector machines | https://want595.blog.csdn.net/article/details/132010861
12 | Neural networks: the perceptron | https://want595.blog.csdn.net/article/details/132014769
13 | Regression and classification with neural networks | https://want595.blog.csdn.net/article/details/132127413
14 | Convolutional neural network for handwritten digits | https://want595.blog.csdn.net/article/details/132223494
15 | Applying LeNet-5 to the CIFAR-10 dataset | https://want595.blog.csdn.net/article/details/132223751
16 | Convolution, pooling, and classic convolutional networks | https://want595.blog.csdn.net/article/details/132223985