
Course Project: Gesture Recognition Based on OpenCV — a Genuinely Complete Project


Preface

This is a simple gesture recognition project. The process is straightforward and mainly uses three libraries: opencv, sklearn and tkinter. Below I show the complete code with brief explanations. Everything is already consolidated into three .py files; run the three files in turn and you can train your own gesture recognition model.

The project idea:

  1. Find the contour of the hand in the image by skin colour
  2. Compute a string of Fourier descriptors from the contour; this forms one sample
  3. Build a dataset from many such samples, then use an SVM (support vector machine) to do the classification (a minimal sketch of this pipeline follows the list)
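To make the three steps concrete before diving into the full project, here is a minimal stand-alone sketch of the idea (it is not the project code that follows; mask is assumed to be a single-channel binary hand silhouette, and X_train/y_train are assumed to hold descriptor samples and labels you have already collected):

import cv2
import numpy as np
from sklearn.svm import SVC

def fourier_descriptor(mask, n_keep=32):
    # step 1: take the largest contour of the binary hand silhouette
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    points = max(contours, key=cv2.contourArea)[:, 0, :]
    # step 2: treat the contour points as complex numbers x + iy and keep the
    # n_keep lowest-frequency Fourier coefficients as one sample
    spectrum = np.fft.fftshift(np.fft.fft(points[:, 0] + 1j * points[:, 1]))
    center = len(spectrum) // 2
    return np.abs(spectrum[center - n_keep // 2: center + n_keep // 2])

# step 3: many such samples form a dataset that an SVM can classify
# clf = SVC(kernel='rbf').fit(X_train, y_train)
# print(clf.predict([fourier_descriptor(mask)]))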

01 Environment setup

Python version: 3.7
The requirements.txt file contains the following packages:

certifi  
et-xmlfile==1.1.0  
imageio==2.28.1  
joblib==1.2.0  
networkx==2.6.3  
numpy==1.21.6  
opencv-contrib-python==4.7.0.72  
opencv-python==4.7.0.72  
openpyxl==3.1.2  
packaging==23.1  
pandas==1.3.5  
Pillow==9.5.0  
python-dateutil==2.8.2  
pytz==2023.3  
PyWavelets==1.3.0  
scikit-image==0.19.3  
scikit-learn==1.0.2  
scipy==1.7.3  
six==1.16.0  
threadpoolctl==3.1.0  
tifffile==2021.11.2  
wincertstore==0.2

Install the required packages from a terminal with: pip install -r requirements.txt -i https://pypi.douban.com/simple

02 First .py file: capturing the image data

We need to capture a number of images of each gesture as samples. Since the method is contour-based, the contour image is also displayed so you can see what will actually be captured.

  1. Import the modules:
from tkinter import *  
from skimage import io, transform  
import threading  
import cv2  
import warnings  
import time  
import os  
import numpy as np  

warnings.simplefilter('ignore')
  2. Add the functions that extract the hand image via skin-colour detection:
def binaryMask(frame, x0, y0, width, height):  
    frame1 = cv2.rectangle(frame, (x0, y0), (x0 + width, y0 + height), (0, 255, 0))  # draw the gesture capture box
    roi = frame[y0:y0 + height, x0:x0 + width]  # roi = the gesture box region
    # cv2.imshow("roi", roi)  # show the gesture box
    res = skinMask(roi)  # run skin-colour detection
    kernel = np.ones((5, 5), np.uint8)  # convolution kernel
    erosion = cv2.erode(res, kernel)  # erosion
    res = cv2.dilate(erosion, kernel)  # dilate res back
    # ret, fourier_result = fd.fourierDesciptor(res)
    return frame1, res
    # return frame1, roi, res, ret, fourier_result
    # cv2.imshow("res", res)  # res is the roi after skin-colour detection
  
  
def skinMask(roi):  
    YCrCb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCR_CB)  # convert to the YCrCb colour space
    (y, cr, cb) = cv2.split(YCrCb)  # split into the Y, Cr and Cb channels
    cr1 = cv2.GaussianBlur(cr, (5, 5), 0)
    _, skin = cv2.threshold(cr1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu thresholding on the Cr channel
    res = cv2.bitwise_and(roi, roi, mask=skin)  
    return res
  3. Add the functions that compute the Fourier descriptors, i.e. the part that turns the processed hand contour into one data sample:
MIN_DESCRIPTOR = 32  # number of low-frequency Fourier descriptors to keep

## compute the Fourier descriptors
def fourierDesciptor(res):
    # eight-neighbourhood edge detection with the Laplacian operator
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    dst = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    Laplacian = cv2.convertScaleAbs(dst)
    contour = find_contours(Laplacian)  # extract the contour point coordinates
    contour_array = contour[0][:, 0, :]  # note: only the contour with the largest area is kept
    ret_np = np.ones(dst.shape, np.uint8)  # black canvas
    ret = cv2.drawContours(ret_np, contour[0], -1, (255, 255, 255), 1)  # draw the contour in white
    contours_complex = np.empty(contour_array.shape[:-1], dtype=complex)
    contours_complex.real = contour_array[:, 0]  # x coordinates as the real part
    contours_complex.imag = contour_array[:, 1]  # y coordinates as the imaginary part
    fourier_result = np.fft.fft(contours_complex)  # Fourier transform
    # fourier_result = np.fft.fftshift(fourier_result)
    descirptor_in_use = truncate_descriptor(fourier_result)  # truncate the Fourier descriptors
    # reconstruct(ret, descirptor_in_use)  
    return ret, descirptor_in_use  
    # return descirptor_in_use  
  
def find_contours(Laplacian):
    # binaryimg = cv2.Canny(res, 50, 200)  # binarisation with Canny edge detection
    h_c, h_i = cv2.findContours(Laplacian, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = sorted(h_c, key=cv2.contourArea, reverse=True)  # sort the contours by the area they enclose
    return contour

# truncate the Fourier descriptors
def truncate_descriptor(fourier_result):
    descriptors_in_use = np.fft.fftshift(fourier_result)

    # keep the MIN_DESCRIPTOR coefficients in the middle
    center_index = int(len(descriptors_in_use) / 2)  
    low, high = center_index - int(MIN_DESCRIPTOR / 2), center_index + int(MIN_DESCRIPTOR / 2)  
    descriptors_in_use = descriptors_in_use[low:high]  
  
    descriptors_in_use = np.fft.ifftshift(descriptors_in_use)  
    return descriptors_in_use  
  
  
## rebuild the contour image from the Fourier descriptors
def reconstruct(img, descirptor_in_use):  
    # descirptor_in_use = truncate_descriptor(fourier_result, degree)  
    # descirptor_in_use = np.fft.ifftshift(fourier_result)    
    # descirptor_in_use = truncate_descriptor(fourier_result)    
    # print(descirptor_in_use)    
    contour_reconstruct = np.fft.ifft(descirptor_in_use)  
    contour_reconstruct = np.array([contour_reconstruct.real,  
                                    contour_reconstruct.imag])  
    contour_reconstruct = np.transpose(contour_reconstruct)  
    contour_reconstruct = np.expand_dims(contour_reconstruct, axis=1)  
    if contour_reconstruct.min() < 0:  
        contour_reconstruct -= contour_reconstruct.min()  
    contour_reconstruct *= img.shape[0] / contour_reconstruct.max()  
    contour_reconstruct = contour_reconstruct.astype(np.int32, copy=False)  
  
    black_np = np.ones(img.shape, np.uint8)  # black canvas
    black = cv2.drawContours(black_np, contour_reconstruct, -1, (255, 255, 255), 10)  # draw the contour in white
    # cv2.imshow("contour_reconstruct", black)  
    # cv2.imwrite('recover.png',black)    
    return black
  4. Add the tkinter GUI class. It mainly uses a background thread to display both the camera video and the contour video, plus some simple capture and bookkeeping functions:
# class that collects the samples
class Application(Frame):  
    def __init__(self, master=None):  
        super().__init__(master)  
        self.master = master  
        self.pack()  
        self.isstart = False  
        self.num = 1  # counts the captured images
        self.label = 1
        self.cnt = 1
        self.single_sum = 20  # number of images required per gesture
        self.time = time.time()  # suffix used for the output folder name
        self.create_widget()  
  
    def create_widget(self):  
        # add the widgets
        self.button01 = Button(master=self.master, text='開(kāi)啟攝像頭', command=self.video)  
        self.button01.place(x=180, y=470+10)  
        self.button001 = Button(master=self.master, text='關(guān)閉程序', command=self.over)  
        self.button001.place(x=270, y=470+10)  
        self.label01 = Label(master=self.master)  
        self.label01.place(x=30, y=50)  
  
        self.button02 = Button(master=self.master, text='截取圖像', command=self.cut_image)  
        self.button02.place(x=680, y=470+10)  
        self.v1 = StringVar(self.master, '保存為:1_1.png')  
        self.label_cut = Label(master=self.master, textvariable=self.v1)  
        self.label_cut.place(x=750, y=470+10)  
        self.label02 = Label(master=self.master)  
        self.label02.place(x=630, y=50)  
  
        self.label_num = Label(master=self.master, text='單類(lèi)圖片數(shù):')  
        self.label_num.place(x=750, y=470+30)  
        self.v2 = StringVar(self.master, f"{self.single_sum}")  
        self.entry = Entry(master=self.master, textvariable=self.v2)  
        self.entry.place(x=820, y=470+30)  
  
    def cut_image(self):  
        if self.isstart:  
            if not os.path.exists(f'./images-{int(self.v2.get())}_{int(self.time)}'):  
                os.mkdir(f'./images-{int(self.v2.get())}_{int(self.time)}')  
            io.imsave(f'./images-{int(self.v2.get())}_{int(self.time)}/{self.label}_{self.cnt}.png',  
                      self.res[:, :, ::-1])  
            self.num += 1  
            if int(self.v2.get()) != 1:  
                if self.num % int(self.v2.get()) == 1:  
                    self.label += 1  
                    self.cnt = 1  
                else:  
                    self.cnt += 1  
            else:  
                self.label += 1  
                self.cnt = 1  
            self.v1.set(f'保存為:{self.label}_{self.cnt}.png')  
            if self.num-1 == int(self.v2.get()) * 10:  
                self.over()  
  
  
    def over(self):  
        if self.isstart:  
            self.cap.release()  
            cv2.destroyAllWindows()  # close all windows
        self.master.destroy()  
  
    def video(self):  
        if not self.isstart:  
            self.isstart = True  
            t1 = threading.Thread(target=self.open_video)  
            t1.setDaemon(True)  
            t1.start()  
  
    def open_video(self):  
        if not os.path.exists(f'./cache_image'):  
            os.mkdir(f'./cache_image')  
        self.width, self.height = 400, 400  # size of the capture window
        self.cap = cv2.VideoCapture(0)  # open the camera
        while True:
            ret, frame = self.cap.read()  # read a frame from the camera; ret is a bool telling whether the frame was grabbed
            self.frame = cv2.flip(frame, 1)  # frame is the grabbed frame
            frame1, self.res = binaryMask(frame, (frame.shape[1]-self.width)//2,  
                                              (frame.shape[0]-self.height)//2,  
                                              self.width, self.height)  # crop the gesture box and process it
            io.imsave('./cache_image/abc.gif', (transform.resize(frame1[:, :, ::-1], (  
                400, 400 / frame1.shape[0] * frame1.shape[1], frame1.shape[2])) * 255).astype('uint8'))  
            self.video_1 = PhotoImage(file='./cache_image/abc.gif')  
            self.label01.config(image=self.video_1)  
            temp = fourierDesciptor(self.res)  
            out_line = temp[0]  
            io.imsave('./cache_image/out_line.gif', (transform.resize(out_line[:, :], (  
                400, 400 / out_line.shape[0] * out_line.shape[1])) * 255).astype('uint8'))  
            self.video_2 = PhotoImage(file='./cache_image/out_line.gif')  
            self.label02.config(image=self.video_2)  
  
  
            abc1 = None  
            abc2 = None  
            abc1 = self.video_1  # keep a reference so the video does not flicker
            abc2 = self.video_2  # keep a reference so the video does not flicker
  5. Run the code:
if __name__ == '__main__':  
    root = Tk()  
    root.geometry("1100x550")  
    root.title('不就是獲取點(diǎn)圖片數(shù)據(jù)嗎')  
    app = Application(root)  
    root.mainloop()

Let's first take a look at the program while it is running:
[Screenshot: the capture GUI showing the camera view and the contour view.]
As you can see, I chose to collect 20 images per class. The counter in the file name is incremented automatically, so capturing runs from 1_1 all the way to 10_20 before it finishes. While capturing, make sure the background contains no colours close to skin tone. The captured images are saved automatically under the current folder in an images-<count>_<timestamp> directory.

03 Second .py file: processing the data and training the model

Processing the data still needs the Fourier descriptor functions from earlier; to save you scrolling back, they are repeated below.

  1. Import the required modules:
from tkinter import *  
import tkinter.filedialog  
import pandas as pd  
from skimage import io  
from sklearn.svm import SVC  
from sklearn.model_selection import GridSearchCV  
import cv2  
import warnings  
import time  
import numpy as np  
import random  
import pickle  
  
warnings.simplefilter('ignore')
  2. Add the functions that compute the Fourier descriptors:
MIN_DESCRIPTOR = 32  # number of low-frequency Fourier descriptors to keep

## compute the Fourier descriptors
def fourierDesciptor(res):
    # eight-neighbourhood edge detection with the Laplacian operator
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    dst = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    Laplacian = cv2.convertScaleAbs(dst)
    contour = find_contours(Laplacian)  # extract the contour point coordinates
    contour_array = contour[0][:, 0, :]  # note: only the contour with the largest area is kept
    ret_np = np.ones(dst.shape, np.uint8)  # black canvas
    ret = cv2.drawContours(ret_np, contour[0], -1, (255, 255, 255), 1)  # draw the contour in white
    contours_complex = np.empty(contour_array.shape[:-1], dtype=complex)
    contours_complex.real = contour_array[:, 0]  # x coordinates as the real part
    contours_complex.imag = contour_array[:, 1]  # y coordinates as the imaginary part
    fourier_result = np.fft.fft(contours_complex)  # Fourier transform
    # fourier_result = np.fft.fftshift(fourier_result)
    descirptor_in_use = truncate_descriptor(fourier_result)  # truncate the Fourier descriptors
    # reconstruct(ret, descirptor_in_use)  
    # return ret, descirptor_in_use    
    return descirptor_in_use  
  
def find_contours(Laplacian):
    # binaryimg = cv2.Canny(res, 50, 200)  # binarisation with Canny edge detection
    h_c, h_i = cv2.findContours(Laplacian, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = sorted(h_c, key=cv2.contourArea, reverse=True)  # sort the contours by the area they enclose
    return contour

# truncate the Fourier descriptors
def truncate_descriptor(fourier_result):
    descriptors_in_use = np.fft.fftshift(fourier_result)

    # keep the MIN_DESCRIPTOR coefficients in the middle
    center_index = int(len(descriptors_in_use) / 2)  
    low, high = center_index - int(MIN_DESCRIPTOR / 2), center_index + int(MIN_DESCRIPTOR / 2)  
    descriptors_in_use = descriptors_in_use[low:high]  
  
    descriptors_in_use = np.fft.ifftshift(descriptors_in_use)  
    return descriptors_in_use  
  
## rebuild the contour image from the Fourier descriptors
def reconstruct(img, descirptor_in_use):  
    # descirptor_in_use = truncate_descriptor(fourier_result, degree)  
    # descirptor_in_use = np.fft.ifftshift(fourier_result)    
    # descirptor_in_use = truncate_descriptor(fourier_result)    
    # print(descirptor_in_use)    
    contour_reconstruct = np.fft.ifft(descirptor_in_use)  
    contour_reconstruct = np.array([contour_reconstruct.real,  
                                    contour_reconstruct.imag])  
    contour_reconstruct = np.transpose(contour_reconstruct)  
    contour_reconstruct = np.expand_dims(contour_reconstruct, axis=1)  
    if contour_reconstruct.min() < 0:  
        contour_reconstruct -= contour_reconstruct.min()  
    contour_reconstruct *= img.shape[0] / contour_reconstruct.max()  
    contour_reconstruct = contour_reconstruct.astype(np.int32, copy=False)  
  
    black_np = np.ones(img.shape, np.uint8)  # black canvas
    black = cv2.drawContours(black_np, contour_reconstruct, -1, (255, 255, 255), 1)  # draw the contour in white
    # cv2.imshow("contour_reconstruct", black)  
    # cv2.imwrite('recover.png',black)    
    return black
  3. Add the tkinter GUI class. It uses cv2 to augment the image data with rotations and flips (each original image is kept, and 12 random rotations plus their horizontal flips are added, so the data volume is multiplied by 1 + 12 + 12 = 25), then extracts the Fourier-descriptor features of every image and finally feeds them into an SVM to train the model:
# class that processes the data and trains the model
class Application(Frame):  
    def __init__(self, master=None):  
        super().__init__(master)  
        self.master = master  
        self.pack()  
        # some default values
        self.num = 20  
        self.more_data_path = ''  
        self.origin_data_path = ''  
        self.save_features_path = ''  
        self.SVR_model_path = ''  
        self.train_percent = 4 / 5  
        self.time = time.time()  
  
        self.create_widget()  
  
    def create_widget(self):  
        # add the widgets
        # -> ask for the folder of the original images and where to place the processed images
        self.button_select_origin = Button(master=self.master, text='原數(shù)據(jù)路徑:', command=self.button_select_origin)  
        self.button_select_origin.place(x=50, y=60)  
        self.v_select_origin = StringVar(self.master, self.origin_data_path)  
        self.entry_select_origin = Entry(master=self.master, textvariable=self.v_select_origin)  
        self.entry_select_origin.place(x=130, y=60)  
  
        self.button_select_to = Button(master=self.master, text='新數(shù)據(jù)路徑:', command=self.button_select_to)  
        self.button_select_to.place(x=50, y=90)  
        self.v_select_to = StringVar(self.master, self.more_data_path)  
        self.entry_select_to = Entry(master=self.master, textvariable=self.v_select_to)  
        self.entry_select_to.place(x=130, y=90)

        # number of images per class
        self.label_num = Label(master=self.master, text='單個(gè)類(lèi)圖片個(gè)數(shù):')  
        self.label_num.place(x=10, y=1)  
        self.v1 = StringVar(self.master, f"{self.num}")  
        self.entry = Entry(master=self.master, textvariable=self.v1)  
        self.entry.place(x=110, y=1)  
  
        self.button01 = Button(master=self.master, text='1.擴(kuò)充數(shù)據(jù)', command=self.get_more_data)  
        self.button01.place(x=50, y=25)  
        self.button02 = Button(master=self.master, text='關(guān)閉程序', command=self.over)  
        self.button02.place(x=920, y=25)

        # -> ask where to store the feature data
        self.button_select_feature = Button(master=self.master, text='特征數(shù)據(jù)保存路徑:',  
                                            command=self.button_select_feature)  
        self.button_select_feature.place(x=280, y=60)  
        self.v_select_feature = StringVar(self.master, self.save_features_path)  
        self.entry_select_feature = Entry(master=self.master, textvariable=self.v_select_feature)  
        self.entry_select_feature.place(x=395, y=60)  
        self.button03 = Button(master=self.master, text='2.獲取特征數(shù)據(jù)', command=self.get_feature_data)  
        self.button03.place(x=280, y=25)

        # -> ask where to save the model
        self.button_select_model = Button(master=self.master, text='模型保存路徑:',  
                                          command=self.button_select_model)  
        self.button_select_model.place(x=550, y=60)  
        self.v_select_model = StringVar(self.master, self.SVR_model_path)  
        self.entry_select_model = Entry(master=self.master, textvariable=self.v_select_model)  
        self.entry_select_model.place(x=640, y=60)  
        self.button04 = Button(master=self.master, text='3.訓(xùn)練模型', command=self.train_model)  
        self.button04.place(x=550, y=25)  
  
        self.v_label_thing = StringVar(self.master, '狀態(tài)')  
        self.label_thing = Label(master=self.master, textvariable=self.v_label_thing, fg='red')  
        self.label_thing.place(x=450, y=120)  
  
    def button_select_model(self):  
        self.v_select_model.set(tkinter.filedialog.askdirectory())  
  
    def button_select_feature(self):  
        self.v_select_feature.set(tkinter.filedialog.askdirectory())  
  
    def button_select_to(self):  
        self.v_select_to.set(tkinter.filedialog.askdirectory())  
  
    def button_select_origin(self):  
        self.v_select_origin.set(tkinter.filedialog.askdirectory())  
  
    def access_bar(self, thing):  
        self.v_label_thing.set(thing)  
  
    def train_model(self):  
        if self.entry_select_to.get().strip() and self.v_select_feature.get().strip() and self.entry_select_origin.get().strip() and self.v_select_model.get().strip():  
            # split into training and test sets
            data_index = np.array(range(1, int(self.v1.get()) * 25 + 1))  
            train_index = data_index[:int(int(self.v1.get()) * 25 * self.train_percent) + 1]  
            test_index = data_index[int(int(self.v1.get()) * 25 * self.train_percent) + 1:]  
            np.random.shuffle(data_index)
            # load all of the data
            # print(self.v_select_feature.get() + '/' + f'1_1.txt')
            fp = open(self.v_select_feature.get() + '/' + f'1_1.txt', 'r')  
            cols = len(fp.read().strip().split(' '))  
            fp.close()  
            print(f"一共有{cols}個(gè)特征")  
            df_train = pd.DataFrame(columns=range(cols + 1))  
            df_test = pd.DataFrame(columns=range(cols + 1))  
            for i in range(1, 11):  
                for j in range(train_index.shape[0]):  
                    fp = open(self.v_select_feature.get() + '/' + f'{i}_{train_index[j]}.txt', 'r')  
                    lst = fp.read().strip().split(' ')  
                    fp.close()  
                    lst = list(map(int, lst))  
                    lst.append(i)  # append the label
                    df_train.loc[(i - 1) * train_index.shape[0] + j] = np.array(lst, dtype='float64')  # stride by the per-class sample count so rows of different classes do not overwrite each other
            for i in range(1, 11):  
                for j in range(test_index.shape[0]):  
                    fp = open(self.v_select_feature.get() + '/' + f'{i}_{test_index[j]}.txt', 'r')  
                    lst = fp.read().strip().split(' ')  
                    fp.close()  
                    lst = list(map(int, lst))  
                    lst.append(i)  # append the label
                    df_test.loc[(i - 1) * test_index.shape[0] + j] = np.array(lst, dtype='float64')  # stride by the per-class sample count so rows of different classes do not overwrite each other
  
            # print(f"full data: {df_train}, {df_test}")
  
            def tran_SVM():
                # grid search for the best model
                svc = SVC()  
                parameters = {'kernel': ('linear', 'rbf'),  
                              'C': [1, 3, 5, 7, 9, 11, 13, 15, 17, 19],  
                              'gamma': [0.00001, 0.0001, 0.001, 0.1, 1, 10, 100, 1000]}  # preset candidate parameter values
  
                clf = GridSearchCV(svc, parameters, cv=5, n_jobs=8)  # grid search with 5-fold cross-validation
                clf.fit(df_train.iloc[:, :-1], df_train.iloc[:, -1])  
                print(clf.return_train_score)  
                print(clf.best_params_)  # print the best parameters
                best_model = clf.best_estimator_  
                print("SVM Model save...")  
                save_path = self.v_select_model.get() + f"/svm_model_{int(self.time)}.m"  
                fp = open(save_path, 'wb')  
                pickle.dump(best_model, fp)  # save the best model
                fp.close()  
  
            def test_SVM(clf):  
                valTest = clf.predict(df_test.iloc[:, :-1])  
                errorCount = np.sum(valTest != df_test.iloc[:, -1])  # count the misclassified samples
                print("總共錯(cuò)了%d個(gè)數(shù)據(jù)\n錯(cuò)誤率為%.2f%%" % (errorCount, errorCount / df_test.shape[0] * 100))  
  
            # train + validate
            tran_SVM()  
            fp = open(self.v_select_model.get() + f"/svm_model_{int(self.time)}.m", 'rb')  
            clf = pickle.load(fp)  
            fp.close()  
            test_SVM(clf)  
            self.access_bar("自動(dòng)訓(xùn)練完成")  
  
    def get_feature_data(self):  
        if self.entry_select_to.get().strip() and self.v_select_feature.get().strip() and self.entry_select_origin.get().strip():  
            for i in range(1, 11):  
                for j in range(1, int(self.v1.get()) * 25 + 1):  
                    roi = io.imread(self.entry_select_to.get() + '/' + str(i) + '_' + str(j) + '.png')[:, :, ::-1]  
                    descirptor_in_use = abs(fourierDesciptor(roi))  
                    # print(descirptor_in_use)  
                    fd_name = self.v_select_feature.get() + '/' + str(i) + '_' + str(j) + '.txt'  
                    with open(fd_name, 'w', encoding='utf-8') as f:  
                        temp = descirptor_in_use[1]  
                        for k in range(1, len(descirptor_in_use)):  
                            x_record = int(100 * descirptor_in_use[k] / temp)  
                            f.write(str(x_record))  
                            f.write(' ')  
                        f.write('\n')  
            self.access_bar("獲取特征完成")  
  
    def get_more_data(self):  
        if self.entry_select_to.get().strip() and self.entry_select_origin.get().strip():  
            # rotation
            def rotate(image, scale=0.9):
                angle = random.randrange(-90, 90)  # random angle
                w = image.shape[1]  
                h = image.shape[0]  
                # rotate matrix  
                M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  
                # rotate  
                image = cv2.warpAffine(image, M, (w, h))  
                return image  
  
            for i in range(1, 11):  
                cnt = int(self.v1.get()) + 1  # counter
                for j in range(1, int(self.v1.get()) + 1):  
                    roi = io.imread(self.v_select_origin.get() + '/' + str(i) + '_' + str(j) + '.png')[:, :, ::-1]  
                    # print(self.v_select_origin.get() + '/' + str(i) + '_' + str(j) + '.png')  
                    io.imsave(self.v_select_to.get() + '/' + str(i) + '_' + str(j) + '.png', roi[:, :, ::-1])  # first save a copy of the original image
                    for k in range(12):  
                        img_rotation = rotate(roi)  # rotate
                        io.imsave(self.v_select_to.get() + '/' + str(i) + '_' + str(cnt) + '.png', img_rotation[:, :, ::-1])  
                        cnt += 1  
                        img_flip = cv2.flip(img_rotation, 1)  # horizontal flip
                        io.imsave(self.v_select_to.get() + '/' + str(i) + '_' + str(cnt) + '.png', img_flip[:, :, ::-1])  
                        cnt += 1  
            self.access_bar("數(shù)據(jù)擴(kuò)充完成")  
  
    def over(self):  
        self.master.destroy()
  4. Run the code:
if __name__ == '__main__':  
    root = Tk()  
    root.geometry("1000x200")  
    root.title('不就是訓(xùn)練模型嗎')  
    app = Application(root)  
    root.mainloop()

Here is the tool in action:
[Screenshot: the data-processing and training GUI.]
In the field at the top left, enter the same per-class image count that you used when capturing the data, then run the three numbered buttons from left to right. Before each step, make sure the corresponding paths below have been selected. The status is shown at the bottom while it runs; as long as no error appears there is no need to worry — with a lot of data it can simply take a while.
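As a quick sanity check between step 2 and step 3 (this snippet is my own addition, not part of the GUI, and the path is a placeholder), every generated <label>_<index>.txt file should hold 31 space-separated integers: the 32 truncated descriptors normalised against the magnitude of the first non-DC coefficient, minus that reference term itself.

import numpy as np

feature_file = './features/1_1.txt'  # placeholder: point this at the feature folder you selected
with open(feature_file, 'r', encoding='utf-8') as f:
    values = np.array([int(v) for v in f.read().strip().split(' ')])

print(values.shape)  # expected: (31,)
print(values[:5])    # the first few normalised descriptor magnitudes (the first is always 100)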

04 Third .py file: using the model for prediction

Finally, we use the model trained in the previous step to make predictions, so the Fourier descriptor and skin detection functions used when capturing the data are brought over again.

  1. Import the modules:
from tkinter import *  
import tkinter.filedialog  
import numpy as np  
from skimage import io, transform  
import threading  
import pickle  
import cv2  
import os  
import warnings  
  
warnings.simplefilter('ignore')
  2. Add the Fourier descriptor and skin detection functions; they work exactly as before:
MIN_DESCRIPTOR = 32  # number of low-frequency Fourier descriptors to keep

## compute the Fourier descriptors
def fourierDesciptor(res):
    # eight-neighbourhood edge detection with the Laplacian operator
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    dst = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    Laplacian = cv2.convertScaleAbs(dst)
    contour = find_contours(Laplacian)  # extract the contour point coordinates
    contour_array = contour[0][:, 0, :]  # note: only the contour with the largest area is kept
    ret_np = np.ones(dst.shape, np.uint8)  # black canvas
    ret = cv2.drawContours(ret_np, contour[0], -1, (255, 255, 255), 1)  # draw the contour in white
    contours_complex = np.empty(contour_array.shape[:-1], dtype=complex)
    contours_complex.real = contour_array[:, 0]  # x coordinates as the real part
    contours_complex.imag = contour_array[:, 1]  # y coordinates as the imaginary part
    fourier_result = np.fft.fft(contours_complex)  # Fourier transform
    # fourier_result = np.fft.fftshift(fourier_result)
    descirptor_in_use = truncate_descriptor(fourier_result)  # truncate the Fourier descriptors
    # reconstruct(ret, descirptor_in_use)  
    return ret, descirptor_in_use  
    # return descirptor_in_use  
  
def find_contours(Laplacian):
    # binaryimg = cv2.Canny(res, 50, 200)  # binarisation with Canny edge detection
    h_c, h_i = cv2.findContours(Laplacian, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = sorted(h_c, key=cv2.contourArea, reverse=True)  # sort the contours by the area they enclose
    return contour

# truncate the Fourier descriptors
def truncate_descriptor(fourier_result):
    descriptors_in_use = np.fft.fftshift(fourier_result)

    # keep the MIN_DESCRIPTOR coefficients in the middle
    center_index = int(len(descriptors_in_use) / 2)  
    low, high = center_index - int(MIN_DESCRIPTOR / 2), center_index + int(MIN_DESCRIPTOR / 2)  
    descriptors_in_use = descriptors_in_use[low:high]  
  
    descriptors_in_use = np.fft.ifftshift(descriptors_in_use)  
    return descriptors_in_use  
  
  
## rebuild the contour image from the Fourier descriptors
def reconstruct(img, descirptor_in_use):  
    # descirptor_in_use = truncate_descriptor(fourier_result, degree)  
    # descirptor_in_use = np.fft.ifftshift(fourier_result)    
    # descirptor_in_use = truncate_descriptor(fourier_result)    
    # print(descirptor_in_use)    
    contour_reconstruct = np.fft.ifft(descirptor_in_use)  
    contour_reconstruct = np.array([contour_reconstruct.real,  
                                    contour_reconstruct.imag])  
    contour_reconstruct = np.transpose(contour_reconstruct)  
    contour_reconstruct = np.expand_dims(contour_reconstruct, axis=1)  
    if contour_reconstruct.min() < 0:  
        contour_reconstruct -= contour_reconstruct.min()  
    contour_reconstruct *= img.shape[0] / contour_reconstruct.max()  
    contour_reconstruct = contour_reconstruct.astype(np.int32, copy=False)  
  
    black_np = np.ones(img.shape, np.uint8)  # black canvas
    black = cv2.drawContours(black_np, contour_reconstruct, -1, (255, 255, 255), 10)  # draw the contour in white
    # cv2.imshow("contour_reconstruct", black)  
    # cv2.imwrite('recover.png',black)    
    return black  
  
def binaryMask(frame, x0, y0, width, height):  
    frame1 = cv2.rectangle(frame, (x0, y0), (x0 + width, y0 + height), (0, 255, 0))  # draw the gesture capture box
    roi = frame[y0:y0 + height, x0:x0 + width]  # roi = the gesture box region
    # cv2.imshow("roi", roi)  # show the gesture box
    res = skinMask(roi)  # run skin-colour detection
    kernel = np.ones((5, 5), np.uint8)  # convolution kernel
    erosion = cv2.erode(res, kernel)  # erosion
    res = cv2.dilate(erosion, kernel)  # dilate res back
    # ret, fourier_result = fd.fourierDesciptor(res)
    return frame1, res
    # return frame1, roi, res, ret, fourier_result
    # cv2.imshow("res", res)  # res is the roi after skin-colour detection
  
  
def skinMask(roi):  
    YCrCb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCR_CB)  # convert to the YCrCb colour space
    (y, cr, cb) = cv2.split(YCrCb)  # split into the Y, Cr and Cb channels
    cr1 = cv2.GaussianBlur(cr, (5, 5), 0)
    _, skin = cv2.threshold(cr1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu thresholding on the Cr channel
    res = cv2.bitwise_and(roi, roi, mask=skin)  
    return res
  3. Add the tkinter GUI class. It mainly uses a background thread to show the camera feed, and it runs a prediction on a captured frame:
# class for the prediction GUI
class Application(Frame):  
    def __init__(self, master=None):  
        super().__init__(master)  
        self.master = master  
        self.pack()  
        self.isstart = False  
        self.true = 0  
        self.fail = 0  
        self.new = False  
        self.cut = False  
        self.create_widget()  
  
    def create_widget(self):  
        # add the widgets
        self.button01 = Button(master=self.master, text='開(kāi)啟攝像頭', command=self.video)  
        self.button01.place(x=180, y=470 + 10)  
        self.button001 = Button(master=self.master, text='關(guān)閉程序', command=self.over)  
        self.button001.place(x=270, y=470 + 10)  
        self.button_select_model = Button(master=self.master, text='選擇模型', command=self.button_select_model)  
        self.button_select_model.place(x=500, y=480)  
        self.label01 = Label(master=self.master)  
        self.label01.place(x=30, y=50)  
  
        self.button02 = Button(master=self.master, text='截取圖像', command=self.image)  
        self.button02.place(x=680, y=470 + 10)  
        self.button03 = Button(master=self.master, text='原始圖像', command=self.change_origin)  
        self.button03.place(x=680 + 70, y=470 + 10)  
        self.button04 = Button(master=self.master, text='輪廓圖像', command=self.change_outline)  
        self.button04.place(x=680 + 140, y=470 + 10)  
        self.label00 = Label(master=self.master)  
        self.label00.place(x=630, y=50)  
        self.v1 = StringVar(self.master, "預(yù)測(cè)結(jié)果")  
        self.label02 = Label(master=self.master, textvariable=self.v1)  
        self.label02.place(x=680 + 210, y=470 + 10)  
        self.v2 = StringVar(self.master, "正確率")  
        self.label03 = Label(master=self.master, textvariable=self.v2)  
        self.label03.place(x=780 + 280, y=470 + 10)  
        self.button05 = Button(master=self.master, text='正確', command=self.judge_true)  
        self.button05.place(x=850 + 140, y=450 + 10)  
        self.button06 = Button(master=self.master, text='錯(cuò)誤', command=self.judge_fail)  
        self.button06.place(x=850 + 140, y=480 + 10)  
  
    def button_select_model(self):  
        fp = open(tkinter.filedialog.askopenfilename(), 'rb')  
        self.model = pickle.load(fp)  
        fp.close()  
  
    def over(self):  
        if self.isstart:  
            self.cap.release()  
            cv2.destroyAllWindows()  # close all windows
        self.master.destroy()  
  
    def change_origin(self):  
        if self.isstart and self.cut:  
            self.label00.config(image=self.img)  
  
    def change_outline(self):  
        if self.isstart and self.cut:  
            self.label00.config(image=self.outline)  
  
    def judge_true(self):  
        if self.isstart and self.new:  
            self.new = False  
            self.true += 1  
            self.v2.set(f"正確率:{self.true / (self.true + self.fail):.2%}")  
  
    def judge_fail(self):  
        if self.isstart and self.new:  
            self.new = False  
            self.fail += 1  
            self.v2.set(f"正確率:{self.true / (self.true + self.fail):.2%}")  
  
    def image(self):  
        if self.isstart:  
            if not os.path.exists(f'./cache_image'):  
                os.mkdir(f'./cache_image')  
            self.new = True  
            self.cut = True  
            frame1, res = binaryMask(self.frame, (self.frame.shape[1] - self.width) // 2,  
                                     (self.frame.shape[0] - self.height) // 2, self.width,  
                                     self.height)  # crop the gesture box and process it
            io.imsave('./cache_image/img.gif', (transform.resize(frame1[:, :, ::-1], (  
                400, 400 / frame1.shape[0] * frame1.shape[1], frame1.shape[2])) * 255).astype('uint8'))  
            self.img = PhotoImage(file='./cache_image/img.gif')  
            self.label00.config(image=self.img)  
  
            temp = fourierDesciptor(res)  
            out_line = temp[0]  
            io.imsave('./cache_image/out_line.gif', (transform.resize(out_line[:, :], (  
                400, 400 / out_line.shape[0] * out_line.shape[1])) * 255).astype('uint8'))  
            self.outline = PhotoImage(file='./cache_image/out_line.gif')  
  
            descirptor_in_use = abs(temp[1])  
            temp = descirptor_in_use[1]  
            X_test = []  
            for k in range(1, len(descirptor_in_use)):  
                x_record = int(100 * descirptor_in_use[k] / temp)  
                X_test.append(x_record)  
            X_test = np.array(X_test)  
            pred = self.model.predict(X_test.reshape(1, -1))  
            self.v1.set(f"預(yù)測(cè)結(jié)果:{int(pred[0])}")  
  
    def video(self):  
        if not self.isstart:  
            self.isstart = True  
            t1 = threading.Thread(target=self.open_video)  
            t1.setDaemon(True)  
            t1.start()  
  
    def open_video(self):
        if not os.path.exists('./cache_image'):
            os.mkdir('./cache_image')  # make sure the cache folder exists before any frame is written
        self.width, self.height = 400, 400  # size of the capture window
        self.cap = cv2.VideoCapture(0)  # open the camera
        while True:
            ret, frame = self.cap.read()  # read a frame from the camera; ret is a bool telling whether the frame was grabbed
            self.frame = cv2.flip(frame, 1)  # frame is the grabbed frame
            frame1 = cv2.rectangle(self.frame,  
                                   ((frame.shape[1] - self.width) // 2, (frame.shape[0] - self.height) // 2), (  
                                       (frame.shape[1] - self.width) // 2 + self.width,  
                                       (frame.shape[0] - self.height) // 2 + self.height),  
                                   (0, 255, 0))  
            io.imsave('./cache_image/abc.gif', (transform.resize(frame1[:, :, ::-1], (  
                400, 400 / frame1.shape[0] * frame1.shape[1], frame1.shape[2])) * 255).astype('uint8'))  
            self.video_ = PhotoImage(file='./cache_image/abc.gif')  
            self.label01.config(image=self.video_)  
            abc = None  
            abc = self.video_  # keep a reference so the video does not flicker
  4. Run the program:
if __name__ == '__main__':  
    root = Tk()  
    root.geometry("1200x550")  
    root.title('不就是一個(gè)手勢(shì)識(shí)別嗎')  
    app = Application(root)  
    root.mainloop()

Here is the program in action:
[Screenshot: the prediction GUI with a recognised gesture.]
Before predicting you must first select the trained model. As you can see, the frame I captured was predicted correctly, and you can click the 正確 (correct) and 錯(cuò)誤 (wrong) buttons on the right to keep a running accuracy.
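If you want to test a saved model without the GUI, the sketch below is one way to do it (my own addition; the model and image paths are placeholders, and the image is assumed to be a BGR photo at least 400x400 pixels, like the frames captured in step 02). It reuses the binaryMask and fourierDesciptor functions defined above:

import pickle
import numpy as np
import cv2

model_path = './svm_model_1680000000.m'  # placeholder: the model file written in step 03
image_path = './test_hand.png'           # placeholder: an image containing a hand

with open(model_path, 'rb') as fp:
    clf = pickle.load(fp)

frame = cv2.imread(image_path)
h, w = frame.shape[:2]
_, res = binaryMask(frame, (w - 400) // 2, (h - 400) // 2, 400, 400)  # skin mask of the centre box

_, descriptor = fourierDesciptor(res)  # truncated Fourier descriptors of the hand contour
descriptor = np.abs(descriptor)
features = [int(100 * descriptor[k] / descriptor[1]) for k in range(1, len(descriptor))]

print('predicted gesture:', int(clf.predict(np.array(features).reshape(1, -1))[0]))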

05 Closing

That wraps up the whole project. With a project as complete and user-friendly as the one above — well, you know what to do.

Reference: http://t.csdn.cn/4kXbQ
Reference: http://t.csdn.cn/YIgvZ
