Feature demo:
Fruit ripeness recognition based on the VGG16 and ResNet50 convolutional neural networks, recognizing the ripeness of apples, bananas, strawberries, lychees and mangoes (PyTorch framework) - bilibili: https://www.bilibili.com/video/BV1ae411C7N5/?spm_id_from=333.999.0.0&vd_source=95b9b70984596ccebdb2780f0601b78b
(1) Introduction
The convolutional-neural-network-based fruit ripeness recognition system is implemented with the PyTorch framework. Two models are available, ResNet50 and VGG16, and they can be used to compare model performance. Technology stack: desktop UI: Python + PyQt5; web front end: Python Flask + Vue.
The project runs in a virtual environment built with PyCharm and Anaconda. Tutorials on installing and configuring PyCharm and Anaconda:
Detailed guide to setting up a Python virtual environment with PyCharm + Anaconda (CSDN blog)
Setting up a Python virtual environment with PyCharm + Anaconda (bilibili)
(2) Project overview
1. Project structure when opened in PyCharm
2. Dataset
3. GUI (technology stack: PyQt5 + Python)
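The GUI code itself ships with the paid source package, so the following is only a rough illustration of the typical flow (choose an image, run the trained network, display the predicted class) as a minimal PyQt5 sketch. The window layout, the transform, the best_model.pth path and the class-name list are assumptions made for the example, not the project's actual code, and the preprocessing should match whatever was used during training.

import sys

import torch
from PIL import Image
from PyQt5.QtWidgets import (QApplication, QFileDialog, QLabel,
                             QPushButton, QVBoxLayout, QWidget)
from torch import nn
from torchvision import models, transforms

MODEL_PATH = "best_model.pth"                    # assumed weight file
CLASS_NAMES = ["unripe", "half-ripe", "ripe"]    # assumed label order


class DemoWindow(QWidget):
    def __init__(self, model):
        super().__init__()
        self.model = model
        # Preprocessing should match whatever was used during training.
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        self.result = QLabel("No image selected")
        button = QPushButton("Open image and predict")
        button.clicked.connect(self.predict)
        layout = QVBoxLayout(self)
        layout.addWidget(button)
        layout.addWidget(self.result)

    def predict(self):
        path, _ = QFileDialog.getOpenFileName(self, "Choose an image")
        if not path:
            return
        image = Image.open(path).convert("RGB")
        x = self.transform(image).unsqueeze(0)   # add the batch dimension
        with torch.no_grad():
            idx = torch.max(self.model(x), dim=1)[1].item()
        self.result.setText("Prediction: " + CLASS_NAMES[idx])


if __name__ == "__main__":
    # Build the backbone the same way as in training and load the saved weights.
    model = models.vgg16()
    model.classifier[-1] = nn.Linear(4096, len(CLASS_NAMES))
    model.load_state_dict(torch.load(MODEL_PATH, map_location="cpu"))
    model.eval()

    app = QApplication(sys.argv)
    window = DemoWindow(model)
    window.show()
    sys.exit(app.exec_())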
4. Web front end (technology stack: Vue + Python)
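The web front end pairs a Vue page with a Python Flask back end. The actual routes are not shown in the article, so the following is only a hedged sketch of what a single prediction endpoint might look like; the /predict route name, the "file" form field, the weight path and the class names are all assumptions.

import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torch import nn
from torchvision import models, transforms

app = Flask(__name__)

CLASS_NAMES = ["unripe", "half-ripe", "ripe"]    # assumed label order
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Build the network the same way as in training and load the saved weights (assumed path).
model = models.vgg16()
model.classifier[-1] = nn.Linear(4096, len(CLASS_NAMES))
model.load_state_dict(torch.load("best_model.pth", map_location="cpu"))
model.eval()


@app.route("/predict", methods=["POST"])
def predict():
    # The Vue page is assumed to upload the image as multipart form data under the key "file".
    file = request.files["file"]
    image = Image.open(io.BytesIO(file.read())).convert("RGB")
    x = transform(image).unsqueeze(0)
    with torch.no_grad():
        idx = torch.max(model(x), dim=1)[1].item()
    return jsonify({"class": CLASS_NAMES[idx]})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)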
5. Core code (VGG16 training and validation routine)
import os
import sys
from time import time

import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

# Logger (redirects print output to a log file), data_load(), model_load(),
# show_loss_acc() and heatmaps() are defined elsewhere in the project.


class MainProcess:

    def __init__(self, train_path, test_path, model_name):
        self.train_path = train_path
        self.test_path = test_path
        self.model_name = model_name
        self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    def main(self, epochs):
        # Log file for the training and validation process
        log_file_name = './results/vgg16訓(xùn)練和驗證過程.txt'
        # Redirect ordinary print output to the log file
        sys.stdout = Logger(log_file_name)
        print("using {} device.".format(self.device))

        # Record the start time of training
        begin_time = time()

        # Load the data
        train_loader, validate_loader, class_names, train_num, val_num = self.data_load()
        print("class_names: ", class_names)
        train_steps = len(train_loader)
        val_steps = len(validate_loader)

        # Load the model
        model = self.model_load()  # create the model

        # Visualize the network structure
        x = torch.randn(16, 3, 224, 224)  # generate a random input
        model_visual_path = 'results/vgg16_visual.onnx'  # where the model structure is saved
        torch.onnx.export(model, x, model_visual_path)  # export the PyTorch model in ONNX format
        # netron.start(model_visual_path)  # opens the network structure in the browser

        # load pretrained weights
        # download url: https://download.pytorch.org/models/vgg16-397923af.pth
        model_weight_path = "models/vgg16-pre.pth"
        assert os.path.exists(model_weight_path), "file {} does not exist.".format(model_weight_path)
        model.load_state_dict(torch.load(model_weight_path, map_location='cpu'))

        # Replace the last layer of the VGG16 classifier so it matches the number of classes
        model.classifier[-1] = nn.Linear(4096, len(class_names), bias=True)
        # Move the model to the GPU (if available)
        model.to(self.device)

        # Define the loss function
        loss_function = nn.CrossEntropyLoss()
        # Define the optimizer
        params = [p for p in model.parameters() if p.requires_grad]
        optimizer = optim.Adam(params=params, lr=0.0001)

        train_loss_history, train_acc_history = [], []
        test_loss_history, test_acc_history = [], []
        best_acc = 0.0

        for epoch in range(0, epochs):
            # Training phase
            model.train()
            running_loss = 0.0
            train_acc = 0.0
            train_bar = tqdm(train_loader, file=sys.stdout)
            # For each batch: compute the gradients once and update the network once
            for step, data in enumerate(train_bar):
                images, labels = data  # images and their ground-truth labels
                optimizer.zero_grad()  # clear the gradients from previous steps
                outputs = model(images.to(self.device))  # predicted outputs
                train_loss = loss_function(outputs, labels.to(self.device))  # compute the loss
                train_loss.backward()  # backpropagate to compute the current gradients
                optimizer.step()  # update the network parameters from the gradients
                # print statistics
                running_loss += train_loss.item()
                predict_y = torch.max(outputs, dim=1)[1]  # index of the largest value in each row
                # torch.eq() compares element-wise: True where the two elements match, False otherwise
                train_acc += torch.eq(predict_y, labels.to(self.device)).sum().item()
                train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1,
                                                                         epochs,
                                                                         train_loss)

            # Validation phase
            model.eval()  # disable BatchNorm updates and Dropout during evaluation
            val_acc = 0.0  # accumulate the number of correct predictions per epoch
            testing_loss = 0.0
            with torch.no_grad():  # no gradients are needed during evaluation
                val_bar = tqdm(validate_loader, file=sys.stdout)
                for val_data in val_bar:
                    val_images, val_labels = val_data
                    outputs = model(val_images.to(self.device))
                    val_loss = loss_function(outputs, val_labels.to(self.device))  # compute the loss
                    testing_loss += val_loss.item()
                    predict_y = torch.max(outputs, dim=1)[1]  # index of the largest value in each row
                    # torch.eq() compares element-wise: True where the two elements match, False otherwise
                    val_acc += torch.eq(predict_y, val_labels.to(self.device)).sum().item()

            train_loss = running_loss / train_steps
            train_accurate = train_acc / train_num
            test_loss = testing_loss / val_steps
            val_accurate = val_acc / val_num

            train_loss_history.append(train_loss)
            train_acc_history.append(train_accurate)
            test_loss_history.append(test_loss)
            test_acc_history.append(val_accurate)

            print('[epoch %d] train_loss: %.3f val_accuracy: %.3f' %
                  (epoch + 1, train_loss, val_accurate))

            # Save the weights whenever the validation accuracy improves
            if val_accurate > best_acc:
                best_acc = val_accurate
                torch.save(model.state_dict(), self.model_name)

        # Record the end time
        end_time = time()
        run_time = end_time - begin_time
        print('Program run time:', run_time, "s")

        # Plot the loss and accuracy curves of the training process
        self.show_loss_acc(train_loss_history, train_acc_history,
                           test_loss_history, test_acc_history)
        # Draw the heatmaps
        self.heatmaps(model, validate_loader, class_names)
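The routine above calls self.model_load(), which the article does not reproduce. Since the system is described as offering both VGG16 and ResNet50, a plausible minimal sketch of that method is given below; choosing the backbone from the model_name string is purely an assumption, and a ResNet50 variant would replace model.fc rather than model.classifier[-1] before training.

from torchvision import models


def model_load(self):
    # Hypothetical sketch of MainProcess.model_load(): pick the backbone
    # architecture from the model_name string (naming convention assumed).
    if "resnet" in self.model_name.lower():
        # ResNet50: its final layer is model.fc, not model.classifier[-1]
        return models.resnet50()
    # Default to VGG16; main() then loads the pretrained weights and
    # replaces classifier[-1] with a layer sized to the number of classes.
    return models.vgg16()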
The system can be trained on your own dataset, and the training procedure is fairly simple: just specify the paths to the training and test sets of your dataset, the file name for the trained model, and the number of training epochs (see the sketch below).
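self.data_load() is likewise part of the paid source. A common way to implement "point the system at a training folder and a test folder" in PyTorch is torchvision's ImageFolder, with one sub-folder per ripeness class; the transforms, batch size and example paths below are assumptions made for illustration, and the last lines show how the MainProcess class from the core code would then be driven.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def data_load(self):
    # Hypothetical sketch of MainProcess.data_load(): each class is a sub-folder
    # under train_path / test_path; preprocessing must match what the model expects.
    data_transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_dataset = datasets.ImageFolder(self.train_path, transform=data_transform)
    val_dataset = datasets.ImageFolder(self.test_path, transform=data_transform)
    train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
    validate_loader = DataLoader(val_dataset, batch_size=16, shuffle=False)
    return (train_loader, validate_loader, train_dataset.classes,
            len(train_dataset), len(val_dataset))


if __name__ == "__main__":
    # Example invocation; the paths, model file name and epoch count are placeholders.
    process = MainProcess(train_path="data/train", test_path="data/test",
                          model_name="models/vgg16_fruit.pth")
    process.main(epochs=30)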
After training, the following results can be output:
a. Loss curves of the training process (a plotting sketch follows this list)
b. A record of the training process: the loss and accuracy values for every epoch
c. The model structure (exported as an ONNX file, as in the core code above)
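The loss and accuracy curves come from self.show_loss_acc(), which receives the four per-epoch history lists collected in main(). A minimal matplotlib sketch of what such a method might do is given below; the figure layout and output file name are assumptions.

import matplotlib.pyplot as plt


def show_loss_acc(self, train_loss, train_acc, test_loss, test_acc):
    # Hypothetical sketch: plot the per-epoch loss and accuracy curves side by side.
    epochs = range(1, len(train_loss) + 1)

    plt.figure(figsize=(10, 4))

    plt.subplot(1, 2, 1)
    plt.plot(epochs, train_loss, label="train loss")
    plt.plot(epochs, test_loss, label="val loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()

    plt.subplot(1, 2, 2)
    plt.plot(epochs, train_acc, label="train accuracy")
    plt.plot(epochs, test_acc, label="val accuracy")
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.legend()

    plt.tight_layout()
    plt.savefig("results/vgg16_loss_acc.png")  # assumed output path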
Model evaluation can output:
a. Confusion matrix (a hedged plotting sketch follows this list)
b. The testing process and accuracy values
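The confusion matrix is obtained by running the trained model over the validation loader and comparing the predictions with the ground-truth labels. A hedged scikit-learn sketch follows; the project's own heatmaps() implementation may well differ, and the output path is an assumption.

import matplotlib.pyplot as plt
import torch
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix


def plot_confusion_matrix(model, validate_loader, class_names, device):
    # Hypothetical sketch: collect predictions on the validation set and plot the confusion matrix.
    model.eval()
    all_preds, all_labels = [], []
    with torch.no_grad():
        for images, labels in validate_loader:
            outputs = model(images.to(device))
            preds = torch.max(outputs, dim=1)[1]
            all_preds.extend(preds.cpu().tolist())
            all_labels.extend(labels.tolist())
    cm = confusion_matrix(all_labels, all_preds)
    disp = ConfusionMatrixDisplay(cm, display_labels=class_names)
    disp.plot(xticks_rotation=45)
    plt.tight_layout()
    plt.savefig("results/confusion_matrix.png")  # assumed output path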
(3) How to obtain the resources
Coding takes effort, so the source code is available for a fee!
The resources mainly include the following: the complete program code, the trained models, the dataset, the desktop UI, and the web front end. Feel free to get in touch with any questions!