0 Preface
Over the past couple of years, the requirements and difficulty of graduation projects and thesis defenses have kept rising. Traditional project topics often lack novelty and highlights and frequently fall short of what the defense committee expects; many juniors have told me that the systems they built did not meet their advisors' requirements.
To help everyone get through the graduation project smoothly and with as little wasted effort as possible, I am sharing high-quality project ideas. Today's topic is:
Fruit recognition with deep learning (OpenCV, Python)
Here is my overall rating for this topic (each item out of 5):
- Difficulty: 3
- Workload: 3
- Novelty: 4
Topic selection guidance and project sharing:
https://gitee.com/dancheng-senior/project-sharing-1/blob/master/%E6%AF%95%E8%AE%BE%E6%8C%87%E5%AF%BC/README.md
2 Overview
As an emerging and rapidly developing branch of machine learning, deep learning is not only changing traditional machine-learning methods but also reshaping our understanding of human perception, and it has already been widely applied in areas such as image recognition and speech recognition. Building on a study of deep-learning theory, this project applies deep learning to fruit image recognition in order to improve recognition performance.
3 Recognition Principles
3.1 Traditional image recognition
The general pipeline of a traditional fruit image recognition system is shown in the figure below; most of the work is concentrated in the image preprocessing and feature extraction stages.
In most recognition tasks, the experimental images are collected in a strictly controlled environment, which removes the influence of external conditions. In real environments, however, images are easily affected by lighting changes, specular reflections on the fruit, occlusion, and other factors, all of which degrade recognition accuracy to varying degrees.
A traditional fruit recognition system typically extracts and classifies hand-crafted features such as texture, color, and shape, as sketched below.
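For illustration, here is a minimal sketch of one such hand-crafted feature, an HSV color histogram computed with OpenCV. The function name and file path are placeholders, not part of the project code; a classical classifier such as an SVM could be trained on such feature vectors.

import cv2
import numpy as np

def color_histogram_feature(image_path, bins=(8, 8, 8)):
    # Load an image and return a flattened, normalized HSV color histogram
    img = cv2.imread(image_path)                # BGR image as read by OpenCV
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # HSV is somewhat more robust to lighting changes
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()  # normalize so features are comparable across images
    return hist

# Example usage (the path is illustrative only):
# feature = color_histogram_feature('./Training/apple/apple_001.jpg')
# print(feature.shape)  # (512,) for 8x8x8 bins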
3.2 Fruit recognition with deep learning
A CNN is a multi-layer neural network designed for recognizing two-dimensional patterns; its structure is shown in the figure below. This structure gives the network a good degree of robustness to translation, scaling, rotation, and other deformations.
The CNN architecture used in this project is shown below:
4 Dataset
- The dataset is split into a training set (train) and a test set (test).
- The training set contains 237 images covering four classes: apple, orange, banana, and mixed (several fruits together); the test set contains two images per class. Sample images are shown below.
- The class of each image can be extracted from its filename, as sketched after the previews below.
Training set preview
Test set preview
Dataset directory structure
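A minimal sketch of pulling the label out of a filename. The naming pattern ('apple_01.jpg', with the class name before an underscore) is only an assumption here; adjust the parsing to match the actual files.

import os

def label_from_filename(filename):
    # Derive the class label from a name such as 'apple_01.jpg' (assumed pattern)
    name = os.path.splitext(os.path.basename(filename))[0]  # 'apple_01'
    return name.split('_')[0]                                # 'apple'

# print(label_from_filename('./train/banana_12.jpg'))  # -> 'banana'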
5 Key Code
5.1 Building a data frame of the training set
import os
import pandas as pd

train_dir = './Training/'
test_dir = './Test/'

fruits = []
fruits_image = []

# Walk the class folders and record (class name, relative image path) pairs
for i in os.listdir(train_dir):
    for image_filename in os.listdir(train_dir + i):
        fruits.append(i)  # name of the fruit (folder name = class label)
        fruits_image.append(i + '/' + image_filename)

train_fruits = pd.DataFrame(fruits, columns=["Fruits"])
train_fruits["Fruits Image"] = fruits_image
print(train_fruits)
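As a quick sanity check (not part of the original listing), you can count how many images each class contributes:

print(train_fruits["Fruits"].value_counts())  # number of images per class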
5.2 Model architecture
import matplotlib.pyplot as plt
import seaborn as sns
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from glob import glob
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

# Load one sample image to establish the input shape
img = load_img(train_dir + "Cantaloupe 1/r_234_100.jpg")
plt.imshow(img)
plt.axis("off")
plt.show()

array_image = img_to_array(img)
# shape (100, 100, 3): a 100x100 RGB image
print("Image Shape --> ", array_image.shape)

# Number of classes, counted from the class folders (131 in this dataset)
fruitCountUnique = glob(train_dir + '*')
numberOfClass = len(fruitCountUnique)
print("How many different fruits are there --> ", numberOfClass)

# Build the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=array_image.shape))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.5))
# Output layer: one unit per class (131 classes here), softmax over the classes
model.add(Dense(numberOfClass))
model.add(Activation("softmax"))

model.compile(loss="categorical_crossentropy",
              optimizer="rmsprop",
              metrics=["accuracy"])

print("Target Size --> ", array_image.shape[:2])
5.3 Training the model
# Data augmentation for training; only rescaling for the test/validation set
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.3,
                                   horizontal_flip=True,
                                   zoom_range=0.3)
test_datagen = ImageDataGenerator(rescale=1./255)

epochs = 100
batch_size = 32

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=array_image.shape[:2],
    batch_size=batch_size,
    color_mode="rgb",
    class_mode="categorical")

test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=array_image.shape[:2],
    batch_size=batch_size,
    color_mode="rgb",
    class_mode="categorical")

for data_batch, labels_batch in train_generator:
    print("data_batch shape --> ", data_batch.shape)
    print("labels_batch shape --> ", labels_batch.shape)
    break

# fit_generator is deprecated in recent Keras versions, where model.fit accepts generators directly
hist = model.fit_generator(
    generator=train_generator,
    steps_per_epoch=1600 // batch_size,
    epochs=epochs,
    validation_data=test_generator,
    validation_steps=800 // batch_size)

# Save the trained model as model_fruits.h5
model.save('model_fruits.h5')
We also plot the training curves:
# Plot the loss curves
plt.figure()
plt.plot(hist.history["loss"], label="Train Loss", color="black")
plt.plot(hist.history["val_loss"], label="Validation Loss", color="darkred",
         linestyle="dashed", markeredgecolor="purple", markeredgewidth=2)
plt.title("Model Loss", color="darkred", size=13)
plt.legend()
plt.show()

# Plot the accuracy curves
plt.figure()
plt.plot(hist.history["accuracy"], label="Train Accuracy", color="black")
plt.plot(hist.history["val_accuracy"], label="Validation Accuracy", color="darkred",
         linestyle="dashed", markeredgecolor="purple", markeredgewidth=2)
plt.title("Model Accuracy", color="darkred", size=13)
plt.legend()
plt.show()
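As an extra, optional check (not in the original listing), the trained model can also be scored directly on the held-out generator; in recent Keras versions model.evaluate accepts a generator, while older versions use evaluate_generator instead.

# Assumes the model and test_generator defined above
test_loss, test_acc = model.evaluate(test_generator, verbose=0)
print("Test loss: %.4f, test accuracy: %.4f" % (test_loss, test_acc))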
6 Recognition Results
from tensorflow.keras.models import load_model
import os
import pandas as pd
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.preprocessing import image
import cv2
import matplotlib.pyplot as plt
import numpy as np

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.3,
                                   horizontal_flip=True,
                                   zoom_range=0.3)

model = load_model('model_fruits.h5')
batch_size = 32

# Predict a single test image
img = load_img("./Test/Apricot/3_100.jpg", target_size=(100, 100))
plt.imshow(img)
plt.show()

array_image = img_to_array(img)
array_image = array_image * 1./255
x = np.expand_dims(array_image, axis=0)
images = np.vstack([x])
# predict_classes exists on Sequential models in older Keras;
# np.argmax(model.predict(...), axis=1) is the equivalent in recent versions
classes = model.predict_classes(images, batch_size=10)
print(classes)

train_dir = './Training/'
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=array_image.shape[:2],
    batch_size=batch_size,
    color_mode="rgb",
    class_mode="categorical")
print(train_generator.class_indices)
fig = plt.figure(figsize=(16, 16))
axes = []
files = []
predictions = []
true_labels = []
rows = 5
cols = 2

# Pick random test images
def getRandomImage(path, img_width, img_height):
    """Load a random image from a random class folder in the test path."""
    folders = list(filter(lambda x: os.path.isdir(os.path.join(path, x)), os.listdir(path)))
    random_directory = np.random.randint(0, len(folders))
    path_class = folders[random_directory]
    file_path = os.path.join(path, path_class)
    file_names = [f for f in os.listdir(file_path) if os.path.isfile(os.path.join(file_path, f))]
    random_file_index = np.random.randint(0, len(file_names))
    image_name = file_names[random_file_index]
    final_path = os.path.join(file_path, image_name)
    return image.load_img(final_path, target_size=(img_width, img_height)), final_path, path_class

def draw_test(name, pred, im, true_label):
    """Draw the predicted and true labels onto a padded copy of the image."""
    BLACK = [0, 0, 0]
    expanded_image = cv2.copyMakeBorder(im, 160, 0, 0, 300, cv2.BORDER_CONSTANT, value=BLACK)
    cv2.putText(expanded_image, "predicted: " + pred, (20, 60), cv2.FONT_HERSHEY_SIMPLEX,
                0.85, (255, 0, 0), 2)
    cv2.putText(expanded_image, "true: " + true_label, (20, 120), cv2.FONT_HERSHEY_SIMPLEX,
                0.85, (0, 255, 0), 2)
    return expanded_image

IMG_ROWS, IMG_COLS = 100, 100

# Predict ten randomly chosen test images
for i in range(0, 10):
    path = "./Test"
    img, final_path, true_label = getRandomImage(path, IMG_ROWS, IMG_COLS)
    files.append(final_path)
    true_labels.append(true_label)
    x = image.img_to_array(img)
    x = x * 1./255
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict_classes(images, batch_size=10)
    predictions.append(classes)

# Invert class_indices so a predicted index can be mapped back to a class name
class_labels = train_generator.class_indices
class_labels = {v: k for k, v in class_labels.items()}
class_list = list(class_labels.values())

# Render each prediction next to its true label
# (renamed the loop variable to img_bgr to avoid shadowing the keras `image` module)
for i in range(0, len(files)):
    img_bgr = cv2.imread(files[i])
    img_bgr = draw_test("Prediction", class_labels[predictions[i][0]], img_bgr, true_labels[i])
    axes.append(fig.add_subplot(rows, cols, i + 1))
    plt.imshow(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    plt.grid(False)
    plt.axis('off')
plt.show()
7 Final Remarks
That wraps up this walkthrough of deep-learning-based fruit recognition with OpenCV and Python. I hope it gives you a solid starting point for your own graduation project.