Linear Regression with TensorFlow
TensorFlow is an open-source machine learning framework developed by Google. It makes it easier for developers to build and train deep learning models and apply them to problems in natural language processing, computer vision, speech recognition, recommender systems, and other fields.
TensorFlow's main strengths are flexibility and scalability. It implements a computation model based on dataflow graphs, so users can define their own computation graphs and control exactly how a model is computed. TensorFlow also supports distributed execution, allowing large-scale jobs to run across multiple machines for better efficiency.
TensorFlow ships with several high-level APIs, such as Keras and Estimator, that make building and training deep learning models easier. Keras offers an easy-to-use high-level API, so users can build and train models without a deep understanding of TensorFlow internals. Estimator, another high-level API, exposes more of the model structure and training loop, giving users finer-grained control over how a model is defined and trained.
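As a quick illustration of the Keras high-level API (a minimal sketch for orientation only; the walkthrough below uses the lower-level graph API instead), a one-neuron linear model can be declared and compiled in just a few lines:
# Minimal Keras sketch: a single-neuron linear model (illustration only)
from tensorflow import keras
model = keras.Sequential([keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='sgd', loss='mse')  # stochastic gradient descent + mean squared error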
TensorFlow also provides an interactive visualization tool called TensorBoard, which displays a model's computation graph, training progress, and performance metrics, making deep learning models easier to understand and debug.
Thanks to this flexibility and scalability, TensorFlow is widely used across many fields, including natural language processing, computer vision, speech recognition, and recommender systems. In natural language processing, for example, it has been used to build and train powerful models for machine translation, text classification, and language generation.
In short, TensorFlow is a powerful machine learning framework that makes building and training deep learning models easier. As deep learning continues to advance, TensorFlow will keep playing an important role in driving progress and innovation across many fields.
1. Import the required libraries
# Import the required libraries
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
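Note that this walkthrough uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session, and so on). If you are running TensorFlow 2.x, one common way to keep such code working is to import the v1 compatibility module; this is a minimal sketch under that assumption, not part of the original notebook:
# TensorFlow 2.x only: fall back to the 1.x graph-mode API
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # disables eager execution so placeholders and Sessions work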
2. Construct the dataset
# Number of sample points to generate
n_observations = 100
# Generate evenly spaced sample points in [-3, 3]
xs = np.linspace(-3, 3, n_observations)
# Sine of xs plus uniform noise in [-0.5, 0.5)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
xs
array([-3. , -2.93939394, -2.87878788, -2.81818182, -2.75757576,
-2.6969697 , -2.63636364, -2.57575758, -2.51515152, -2.45454545,
-2.39393939, -2.33333333, -2.27272727, -2.21212121, -2.15151515,
-2.09090909, -2.03030303, -1.96969697, -1.90909091, -1.84848485,
-1.78787879, -1.72727273, -1.66666667, -1.60606061, -1.54545455,
-1.48484848, -1.42424242, -1.36363636, -1.3030303 , -1.24242424,
-1.18181818, -1.12121212, -1.06060606, -1. , -0.93939394,
-0.87878788, -0.81818182, -0.75757576, -0.6969697 , -0.63636364,
-0.57575758, -0.51515152, -0.45454545, -0.39393939, -0.33333333,
-0.27272727, -0.21212121, -0.15151515, -0.09090909, -0.03030303,
0.03030303, 0.09090909, 0.15151515, 0.21212121, 0.27272727,
0.33333333, 0.39393939, 0.45454545, 0.51515152, 0.57575758,
0.63636364, 0.6969697 , 0.75757576, 0.81818182, 0.87878788,
0.93939394, 1. , 1.06060606, 1.12121212, 1.18181818,
1.24242424, 1.3030303 , 1.36363636, 1.42424242, 1.48484848,
1.54545455, 1.60606061, 1.66666667, 1.72727273, 1.78787879,
1.84848485, 1.90909091, 1.96969697, 2.03030303, 2.09090909,
2.15151515, 2.21212121, 2.27272727, 2.33333333, 2.39393939,
2.45454545, 2.51515152, 2.57575758, 2.63636364, 2.6969697 ,
2.75757576, 2.81818182, 2.87878788, 2.93939394, 3. ])
ys
array([-0.62568008,  0.01486274, -0.29232541, -0.05271084, -0.53407957,
-0.37199581, -0.40235236, -0.80005504, -0.2280913 , -0.96111433,
-0.58732159, -0.71310851, -1.19817878, -0.93036437, -1.02682804,
-1.33669261, -1.36873043, -0.44500172, -1.38769079, -0.52899793,
-0.78090929, -1.1470421 , -0.79274726, -0.95139505, -1.3536293 ,
-1.15097615, -1.04909201, -0.89071026, -0.81181765, -0.70292996,
-0.49732344, -1.22800179, -1.21280414, -0.59583172, -1.05027515,
-0.56369191, -0.68680323, -0.20454038, -0.32429566, -0.84640122,
-0.08175012, -0.76910728, -0.59206189, -0.09984673, -0.52465978,
-0.30498277, 0.08593627, -0.29488864, 0.24698113, -0.07324925,
0.12773032, 0.55508531, 0.14794648, 0.40155342, 0.31717698,
0.63213964, 0.35736413, 0.05264068, 0.39858619, 1.00710311,
0.73844747, 1.12858026, 0.59779567, 1.22131999, 0.80849061,
0.72796849, 1.0990044 , 0.45447096, 1.15217952, 1.31846002,
1.27140258, 0.65264777, 1.15205186, 0.90705463, 0.82489198,
0.50572125, 1.47115594, 0.98209434, 0.95763951, 0.50225094,
1.40415029, 0.74618984, 0.90620692, 0.40593222, 0.62737999,
1.05236579, 1.20041249, 1.14784273, 0.54798933, 0.18167682,
0.50830766, 0.92498585, 0.9778136 , 0.42331405, 0.88163729,
0.67235809, -0.00539421, -0.06219493, 0.26436412, 0.51978602])
# Set the figure width and height (in inches)
plt.rcParams["figure.figsize"] = (6,4)
# Draw a scatter plot of the samples
plt.scatter(xs, ys)
plt.show()
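Because the noise added to ys is drawn at random, the exact numbers and plot above will differ slightly on each run. Seeding NumPy's random generator before generating the data makes the experiment reproducible; a minimal sketch (the seed value 42 is arbitrary):
# Optional: fix the random seed before generating ys so results are reproducible
np.random.seed(42)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)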
3. Define the basic model
# Placeholders for the input feature and the target value
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Weight and bias variables, initialized from a random normal distribution
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# The linear model y = w*x + b, written out by hand
Y_pred = tf.add(tf.multiply(X, W), b)
# Squared-error loss (samples are fed one at a time, so this is the per-sample MSE)
loss = tf.square(Y - Y_pred, name='loss')
# Learning rate
learning_rate = 0.01
# The optimizer is TensorFlow's gradient-descent strategy:
# specify the learning rate and the loss to minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
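For readers on native TensorFlow 2.x (without the compatibility shim mentioned earlier), the same model and update step can be written in eager mode with tf.GradientTape. This is a minimal sketch of that alternative, not part of the original walkthrough:
# TF 2.x eager-mode sketch of the same linear model and one SGD update step
W2 = tf.Variable(tf.random.normal([1]), name='weight')
b2 = tf.Variable(tf.random.normal([1]), name='bias')
opt = tf.keras.optimizers.SGD(learning_rate=0.01)

def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = W2 * x + b2                               # linear model y = w*x + b
        step_loss = tf.reduce_mean(tf.square(y - y_pred))  # squared-error loss
    grads = tape.gradient(step_loss, [W2, b2])
    opt.apply_gradients(zip(grads, [W2, b2]))              # one gradient-descent update
    return step_loss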
4. Train the model
# Number of training samples
n_samples = xs.shape[0]
init = tf.global_variables_initializer()
with tf.Session() as sess:
    # Remember to initialize all variables first
    sess.run(init)
    # Write the graph for TensorBoard (view it with: tensorboard --logdir=../graphs/linear_reg)
    writer = tf.summary.FileWriter('../graphs/linear_reg', sess.graph)
    # Train the model
    for i in range(50):
        # Reset the accumulated loss for this epoch
        total_loss = 0
        for x, y in zip(xs, ys):
            # Feed the data in through feed_dict
            _, l = sess.run([optimizer, loss], feed_dict={X: x, Y: y})  # _ is the optimizer op's return value, unused here
            total_loss += l  # accumulate the loss over this epoch's samples
        print('Epoch {0}: {1}'.format(i, total_loss/n_samples))  # report the average loss per sample
    # Close the writer
    writer.close()
    # Fetch the trained values of W and b
    W, b = sess.run([W, b])
Epoch 0: [0.48447946]
Epoch 1: [0.20947962]
Epoch 2: [0.19649307]
Epoch 3: [0.19527708]
Epoch 4: [0.19514856]
Epoch 5: [0.19513479]
Epoch 6: [0.19513334]
Epoch 7: [0.19513316]
Epoch 8: [0.19513315]
Epoch 9: [0.19513315]
Epoch 10: [0.19513315]
Epoch 11: [0.19513315]
Epoch 12: [0.19513315]
Epoch 13: [0.19513315]
Epoch 14: [0.19513315]
Epoch 15: [0.19513315]
Epoch 16: [0.19513315]
Epoch 17: [0.19513315]
Epoch 18: [0.19513315]
Epoch 19: [0.19513315]
Epoch 20: [0.19513315]
Epoch 21: [0.19513315]
Epoch 22: [0.19513315]
Epoch 23: [0.19513315]
Epoch 24: [0.19513315]
Epoch 25: [0.19513315]
Epoch 26: [0.19513315]
Epoch 27: [0.19513315]
Epoch 28: [0.19513315]
Epoch 29: [0.19513315]
Epoch 30: [0.19513315]
Epoch 31: [0.19513315]
Epoch 32: [0.19513315]
Epoch 33: [0.19513315]
Epoch 34: [0.19513315]
Epoch 35: [0.19513315]
Epoch 36: [0.19513315]
Epoch 37: [0.19513315]
Epoch 38: [0.19513315]
Epoch 39: [0.19513315]
Epoch 40: [0.19513315]
Epoch 41: [0.19513315]
Epoch 42: [0.19513315]
Epoch 43: [0.19513315]
Epoch 44: [0.19513315]
Epoch 45: [0.19513315]
Epoch 46: [0.19513315]
Epoch 47: [0.19513315]
Epoch 48: [0.19513315]
Epoch 49: [0.19513315]
print(W,b)
print("W:"+str(W[0]))
print("b:"+str(b[0]))
[0.23069778] [-0.12590201]
W:0.23069778
b:-0.12590201
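As a sanity check (not part of the original article), the learned parameters can be compared with the closed-form least-squares fit that NumPy computes directly. Because the training above uses per-sample SGD over data sorted by x, the two results need not match exactly, but they provide a useful reference point:
# Reference: analytic least-squares fit of a degree-1 polynomial (slope, intercept)
w_ls, b_ls = np.polyfit(xs, ys, 1)
print("least-squares W:", w_ls, " b:", b_ls)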
5. Plot the regression line
# Plot the original data and the fitted line
plt.plot(xs, ys, 'bo', label='Real data')
plt.plot(xs, xs * W + b, 'r', label='Predicted data')
plt.legend()
plt.show()
Appendix: Articles in this series
No. | Article | Link |
---|---|---|
1 | Boston housing price prediction | https://want595.blog.csdn.net/article/details/132181950 |
2 | Iris dataset analysis | https://want595.blog.csdn.net/article/details/132182057 |
3 | Feature processing | https://want595.blog.csdn.net/article/details/132182165 |
4 | Cross-validation | https://want595.blog.csdn.net/article/details/132182238 |
5 | Building a neural network: an example | https://want595.blog.csdn.net/article/details/132182341 |
6 | Linear regression with TensorFlow | https://want595.blog.csdn.net/article/details/132182417 |
7 | Logistic regression with TensorFlow | https://want595.blog.csdn.net/article/details/132182496 |
8 | A TensorBoard case study | https://want595.blog.csdn.net/article/details/132182584 |
9 | Linear regression with Keras | https://want595.blog.csdn.net/article/details/132182723 |
10 | Logistic regression with Keras | https://want595.blog.csdn.net/article/details/132182795 |
11 | Cat-vs-dog recognition with a pretrained Keras model | https://want595.blog.csdn.net/article/details/132243928 |
12 | Training a model with PyTorch | https://want595.blog.csdn.net/article/details/132243989 |
13 | Using Dropout to curb overfitting | https://want595.blog.csdn.net/article/details/132244111 |
14 | MNIST handwritten digit recognition with a CNN (TensorFlow) | https://want595.blog.csdn.net/article/details/132244499 |
15 | MNIST handwritten digit recognition with a CNN (Keras) | https://want595.blog.csdn.net/article/details/132244552 |
16 | MNIST handwritten digit recognition with a CNN (PyTorch) | https://want595.blog.csdn.net/article/details/132244641 |
17 | Generating handwritten digit samples with a GAN | https://want595.blog.csdn.net/article/details/132244764 |
18 | Natural language processing | https://want595.blog.csdn.net/article/details/132276591 |