
A Python Hand-Gesture Recognition Project Based on MediaPipe (a Gesture-Controlled Game), with Source Code

This article walks through a Python hand-gesture recognition project built on MediaPipe: a small gesture-controlled game, with full source code. Hopefully it is helpful; if anything here is wrong or incomplete, corrections are welcome.

運(yùn)行效果

The game is driven by MediaPipe hand-gesture recognition. Screenshots of it running are shown below.

First, the initial interface:

(Game interface screenshots)

游戲規(guī)則:屏幕上會(huì)出現(xiàn)蚊子和蜜蜂,當(dāng)手蜷曲握起時(shí),表示抓的動(dòng)作。如果抓到右邊移動(dòng)的蚊子,則會(huì)增加分?jǐn)?shù),如果抓到右邊的蜜蜂,則會(huì)出現(xiàn)被蟄到的聲音 :)

Code Walkthrough

調(diào)用Mediapipe,定義與手勢(shì)識(shí)別有關(guān)的類(lèi):

This class mirrors the camera frame, draws the detected hand landmarks onto it, and decides whether the hand is curled into a fist.

import cv2
import mediapipe as mp
from settings import *
import numpy as np
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands



class HandTracking:
    def __init__(self):
        self.hand_tracking = mp_hands.Hands(min_detection_confidence=0.5, min_tracking_confidence=0.5)
        self.hand_x = 0
        self.hand_y = 0
        self.results = None
        self.hand_closed = False


    def scan_hands(self, image):
        rows, cols, _ = image.shape

        # Flip the image horizontally for a later selfie-view display, and convert
        # the BGR image to RGB.
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
        # To improve performance, optionally mark the image as not writeable to
        # pass by reference.
        image.flags.writeable = False
        self.results = self.hand_tracking.process(image)

        # Draw the hand annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        self.hand_closed = False

        if self.results.multi_hand_landmarks:
            for hand_landmarks in self.results.multi_hand_landmarks:
                # Landmark 9 (middle-finger MCP joint) serves as the hand center.
                x, y = hand_landmarks.landmark[9].x, hand_landmarks.landmark[9].y

                self.hand_x = int(x * SCREEN_WIDTH)
                self.hand_y = int(y * SCREEN_HEIGHT)

                x1, y1 = hand_landmarks.landmark[12].x, hand_landmarks.landmark[12].y

                # Image y grows downward, so the hand counts as closed (a fist)
                # when the middle fingertip (12) sits below the middle MCP joint (9).
                if y1 > y:
                    self.hand_closed = True

                mp_drawing.draw_landmarks(
                    image,
                    hand_landmarks,
                    mp_hands.HAND_CONNECTIONS,
                    mp_drawing_styles.get_default_hand_landmarks_style(),
                    mp_drawing_styles.get_default_hand_connections_style())
        self.image = image  # keep the annotated frame so display_hand() can show it
        return image

    def get_hand_center(self):
        return (self.hand_x, self.hand_y)


    def display_hand(self):
        cv2.imshow("image", self.image)
        cv2.waitKey(1)

    def is_hand_closed(self):
        return self.hand_closed
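The closed-hand check above boils down to comparing two normalized landmark y-coordinates. Here is a minimal standalone sketch of that heuristic, using hypothetical landmark values since MediaPipe's real output only exists with a live camera:

```python
# Sketch of the fist heuristic from scan_hands(), on hypothetical values.
# MediaPipe returns landmark coordinates normalized to [0, 1], with y growing
# downward; landmark 9 is the middle-finger MCP joint, 12 is its fingertip.

def hand_is_closed(mcp_y: float, tip_y: float) -> bool:
    # An open hand points the fingertip up (smaller y than the knuckle);
    # a curled fist drops the fingertip below the knuckle (larger y).
    return tip_y > mcp_y

print(hand_is_closed(0.5, 0.3))  # open hand -> False
print(hand_is_closed(0.5, 0.6))  # fist -> True
```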


Next, define the game class:

這里導(dǎo)入了音效和背景,并加載界面上方的分?jǐn)?shù)和剩余時(shí)間等。

import pygame
import time
import random
from settings import *
from background import Background
from hand import Hand
from hand_tracking import HandTracking
from mosquito import Mosquito
from bee import Bee
import cv2
import ui

class Game:
    def __init__(self, surface):
        self.surface = surface
        self.background = Background()

        # Load camera
        self.cap = cv2.VideoCapture(0)

        self.sounds = {}
        self.sounds["slap"] = pygame.mixer.Sound("Assets/Sounds/slap.wav")
        self.sounds["slap"].set_volume(SOUNDS_VOLUME)
        self.sounds["screaming"] = pygame.mixer.Sound("Assets/Sounds/screaming.wav")
        self.sounds["screaming"].set_volume(SOUNDS_VOLUME)


    def reset(self): # reset all the needed variables
        self.hand_tracking = HandTracking()
        self.hand = Hand()
        self.insects = []
        self.insects_spawn_timer = 0
        self.score = 0
        self.game_start_time = time.time()


    def spawn_insects(self):
        t = time.time()
        if t > self.insects_spawn_timer:
            self.insects_spawn_timer = t + MOSQUITOS_SPAWN_TIME

            # Increase the probability that the insect is a bee over time:
            # rises linearly from 0% at the start to 50% at the end of the game.
            nb = (GAME_DURATION - self.time_left) / GAME_DURATION * 100 / 2
            if random.randint(0, 100) < nb:
                self.insects.append(Bee())
            else:
                self.insects.append(Mosquito())

            # spawn another mosquito during the second half of the game
            if self.time_left < GAME_DURATION/2:
                self.insects.append(Mosquito())

    def load_camera(self):
        _, self.frame = self.cap.read()


    def set_hand_position(self):
        self.frame = self.hand_tracking.scan_hands(self.frame)
        (x, y) = self.hand_tracking.get_hand_center()
        self.hand.rect.center = (x, y)

    def draw(self):
        # draw the background
        self.background.draw(self.surface)
        # draw the insects
        for insect in self.insects:
            insect.draw(self.surface)
        # draw the hand
        self.hand.draw(self.surface)
        # draw the score
        ui.draw_text(self.surface, f"Score : {self.score}", (5, 5), COLORS["score"], font=FONTS["medium"],
                    shadow=True, shadow_color=(255,255,255))
        # draw the time left
        timer_text_color = (160, 40, 0) if self.time_left < 5 else COLORS["timer"] # change the text color if less than 5 s left
        ui.draw_text(self.surface, f"Time left : {self.time_left}", (SCREEN_WIDTH//2, 5),  timer_text_color, font=FONTS["medium"],
                    shadow=True, shadow_color=(255,255,255))


    def game_time_update(self):
        self.time_left = max(round(GAME_DURATION - (time.time() - self.game_start_time), 1), 0)



    def update(self):

        self.load_camera()
        self.set_hand_position()
        self.game_time_update()

        self.draw()

        if self.time_left > 0:
            self.spawn_insects()
            (x, y) = self.hand_tracking.get_hand_center()
            self.hand.rect.center = (x, y)
            self.hand.left_click = self.hand_tracking.hand_closed
            if self.hand.left_click:
                self.hand.image = self.hand.image_smaller.copy()
            else:
                self.hand.image = self.hand.orig_image.copy()
            self.score = self.hand.kill_insects(self.insects, self.score, self.sounds)
            for insect in self.insects:
                insect.move()

        else: # when the game is over
            if ui.button(self.surface, 540, "Continue", click_sound=self.sounds["slap"]):
                return "menu"


        cv2.imshow("Frame", self.frame)
        cv2.waitKey(1)
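The bee probability in spawn_insects() rises linearly from 0% to 50% over the match. A quick standalone check of the formula (GAME_DURATION is assumed to be 60 seconds here; the real value lives in settings.py):

```python
# The spawn formula from spawn_insects(), isolated for inspection.
# bee_chance(t) is the percentage chance that the next insect is a bee.
GAME_DURATION = 60  # assumed value; the real one lives in settings.py

def bee_chance(time_left: float) -> float:
    return (GAME_DURATION - time_left) / GAME_DURATION * 100 / 2

print(bee_chance(60))  # game start   -> 0.0  (% bees)
print(bee_chance(30))  # halfway      -> 25.0
print(bee_chance(0))   # final second -> 50.0
```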

定義與手勢(shì)動(dòng)作相關(guān)的動(dòng)作類(lèi):

import pygame
import image
from settings import *
from hand_tracking import HandTracking
import cv2

class Hand:
    def __init__(self):
        self.orig_image = image.load("Assets/hand.png", size=(HAND_SIZE, HAND_SIZE))
        self.image = self.orig_image.copy()
        self.image_smaller = image.load("Assets/hand.png", size=(HAND_SIZE - 50, HAND_SIZE - 50))
        self.rect = pygame.Rect(SCREEN_WIDTH//2, SCREEN_HEIGHT//2, HAND_HITBOX_SIZE[0], HAND_HITBOX_SIZE[1])
        self.left_click = False
        #self.hand_tracking = HandTracking()


    def follow_mouse(self): # change the hand pos center at the mouse pos
        self.rect.center = pygame.mouse.get_pos()
        #self.hand_tracking.display_hand()

    def follow_mediapipe_hand(self, x, y):
        self.rect.center = (x, y)

    def draw_hitbox(self, surface):
        pygame.draw.rect(surface, (200, 60, 0), self.rect)


    def draw(self, surface):
        image.draw(surface, self.image, self.rect.center, pos_mode="center")

        if DRAW_HITBOX:
            self.draw_hitbox(surface)


    def on_insect(self, insects): # return a list with all insects that collide with the hand hitbox
        return [insect for insect in insects if self.rect.colliderect(insect.rect)]


    def kill_insects(self, insects, score, sounds): # kill the insects colliding with the hand while it is closed (a grab)
        if self.left_click: # if left click
            for insect in self.on_insect(insects):
                insect_score = insect.kill(insects)
                score += insect_score
                sounds["slap"].play()
                if insect_score < 0:
                    sounds["screaming"].play()
        else:
            self.left_click = False
        return score

Finally, define the class that handles the mosquito's movement:

The mosquito sprite is loaded from an image file, and each mosquito moves in a random direction: left, right, up, or down.

import pygame
import random
import time
import image
from settings import *

class Mosquito:
    def __init__(self):
        #size
        random_size_value = random.uniform(MOSQUITO_SIZE_RANDOMIZE[0], MOSQUITO_SIZE_RANDOMIZE[1])
        size = (int(MOSQUITOS_SIZES[0] * random_size_value), int(MOSQUITOS_SIZES[1] * random_size_value))
        # moving
        moving_direction, start_pos = self.define_spawn_pos(size)
        # sprite
        self.rect = pygame.Rect(start_pos[0], start_pos[1], int(size[0] / 1.4), int(size[1] / 1.4))
        self.images = [image.load("Assets/mosquito/mosquito.png", size=size, flip=moving_direction=="right")]
        self.current_frame = 0
        self.animation_timer = 0


    def define_spawn_pos(self, size): # define the start pos and moving vel of the mosquito
        vel = random.uniform(MOSQUITOS_MOVE_SPEED["min"], MOSQUITOS_MOVE_SPEED["max"])
        moving_direction = random.choice(("left", "right", "up", "down"))
        if moving_direction == "right":
            start_pos = (-size[0], random.randint(size[1], SCREEN_HEIGHT - size[1]))
            self.vel = [vel, 0]
        elif moving_direction == "left":
            start_pos = (SCREEN_WIDTH + size[0], random.randint(size[1], SCREEN_HEIGHT - size[1]))
            self.vel = [-vel, 0]
        elif moving_direction == "up":
            start_pos = (random.randint(size[0], SCREEN_WIDTH - size[0]), SCREEN_HEIGHT + size[1])
            self.vel = [0, -vel]
        else:  # "down"
            start_pos = (random.randint(size[0], SCREEN_WIDTH - size[0]), -size[1])
            self.vel = [0, vel]
        return moving_direction, start_pos


    def move(self):
        self.rect.move_ip(self.vel)


    def animate(self): # change the frame of the insect when needed
        t = time.time()
        if t > self.animation_timer:
            self.animation_timer = t + ANIMATION_SPEED
            self.current_frame += 1
            if self.current_frame > len(self.images)-1:
                self.current_frame = 0


    def draw_hitbox(self, surface):
        pygame.draw.rect(surface, (200, 60, 0), self.rect)



    def draw(self, surface):
        self.animate()
        image.draw(surface, self.images[self.current_frame], self.rect.center, pos_mode="center")
        if DRAW_HITBOX:
            self.draw_hitbox(surface)


    def kill(self, mosquitos): # remove the mosquito from the list
        mosquitos.remove(self)
        return 1
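define_spawn_pos() always places a new insect just off-screen and gives it a velocity that carries it across the play area. A simplified, dependency-free version of that logic (the screen size and speed below are hypothetical stand-ins for the values in settings.py):

```python
import random

SCREEN_W, SCREEN_H = 1100, 600  # stand-ins for the settings.py values

def spawn(direction: str, size: tuple, vel: float = 3.0):
    """Return (start_pos, velocity) for an insect entering from off-screen."""
    w, h = size
    if direction == "right":   # enters from the left edge, moving right
        return (-w, random.randint(h, SCREEN_H - h)), [vel, 0]
    if direction == "left":    # enters from the right edge, moving left
        return (SCREEN_W + w, random.randint(h, SCREEN_H - h)), [-vel, 0]
    if direction == "up":      # enters from the bottom edge, moving up
        return (random.randint(w, SCREEN_W - w), SCREEN_H + h), [0, -vel]
    # "down": enters from the top edge, moving down
    return (random.randint(w, SCREEN_W - w), -h), [0, vel]

pos, vel = spawn("right", (50, 40))
print(pos[0], vel)  # -50 [3.0, 0] : just off the left edge, moving right
```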

另外,手勢(shì)識(shí)別相關(guān)的基礎(chǔ)內(nèi)容,可以參考評(píng)論區(qū)的這個(gè)博客

If you need the source code (10 RMB), email me at yangsober@163.com.


領(lǐng)支付寶紅包贊助服務(wù)器費(fèi)用

相關(guān)文章

  • 使用MediaPipe和OpenCV的Python實(shí)現(xiàn)手勢(shì)識(shí)別

    手勢(shì)識(shí)別技術(shù)是一種非常有用的技術(shù),它可以將人類(lèi)的手勢(shì)轉(zhuǎn)化為計(jì)算機(jī)可以理解的形式,從而實(shí)現(xiàn)更加自然、快速和直觀(guān)的交互方式。本文將介紹一種基于MediaPipe和OpenCV的手勢(shì)識(shí)別技術(shù),可以實(shí)現(xiàn)對(duì)手勢(shì)的實(shí)時(shí)識(shí)別和分析。 MediaPipe是一種開(kāi)源的機(jī)器學(xué)習(xí)框架,可以用于構(gòu)建

    2024年02月14日
    瀏覽(16)
  • Mediapipe手勢(shì)識(shí)別

    Mediapipe手勢(shì)識(shí)別

    代碼: 它訓(xùn)練的時(shí)候是使用了兩個(gè)模型,第一個(gè)是手掌檢測(cè),第二個(gè)是在手掌范圍內(nèi)進(jìn)行關(guān)節(jié)點(diǎn)的檢測(cè)。這里面的三維坐標(biāo)中的Z軸并不是絕對(duì)意義上的Z軸,而是相對(duì)于手腕的位置,正值說(shuō)明在手腕的前方,負(fù)值在手腕的后方。x和y都是0~1之間的數(shù)字(經(jīng)過(guò)歸一化后的數(shù)字,

    2024年02月05日
    瀏覽(19)
  • Opencv + MediaPipe -> 手勢(shì)識(shí)別

    Opencv + MediaPipe -> 手勢(shì)識(shí)別

    一、概述 ????????OpenCV(Open Source Computer Vision Library)是一個(gè)跨平臺(tái)的計(jì)算機(jī)視覺(jué)庫(kù),它提供了許多用于圖像和視頻處理的功能,包括圖像和視頻的讀取、預(yù)處理、特征提取、特征匹配、目標(biāo)檢測(cè)等。OpenCV是C++編寫(xiě)的,也提供了Python、Java等語(yǔ)言的接口,可以方便地在不同

    2024年02月05日
    瀏覽(21)
  • Mediapipe手勢(shì)識(shí)別,并與unity通信

    Mediapipe手勢(shì)識(shí)別,并與unity通信

    Mediapipe是goole的一個(gè)開(kāi)源項(xiàng)目,支持跨平臺(tái)的常用ML方案,詳情請(qǐng)戳下面鏈接 MediaPipe Mediapipe底層封裝了手勢(shì)識(shí)別的具體實(shí)現(xiàn)內(nèi)容,而在Python中搭建完環(huán)境后經(jīng)過(guò)很簡(jiǎn)單的調(diào)用就能夠?qū)崿F(xiàn)手勢(shì)識(shí)別 環(huán)境如下: pip install mediapipe pip install opencv-python 簡(jiǎn)單的實(shí)現(xiàn),代碼很少,代碼如

    2024年02月11日
    瀏覽(24)
  • mediapipe 手勢(shì)節(jié)點(diǎn)識(shí)別自動(dòng)控制音量

    mediapipe 手勢(shì)節(jié)點(diǎn)識(shí)別自動(dòng)控制音量

    參考:https://www.computervision.zone/topic/volumehandcontrol-py/ 主函數(shù): VolumeHandControl.py

    2024年02月11日
    瀏覽(22)
  • 畢設(shè)項(xiàng)目分享 基于機(jī)器視覺(jué)opencv的手勢(shì)檢測(cè) 手勢(shì)識(shí)別 算法 - 深度學(xué)習(xí) 卷積神經(jīng)網(wǎng)絡(luò) opencv python

    畢設(shè)項(xiàng)目分享 基于機(jī)器視覺(jué)opencv的手勢(shì)檢測(cè) 手勢(shì)識(shí)別 算法 - 深度學(xué)習(xí) 卷積神經(jīng)網(wǎng)絡(luò) opencv python

    今天學(xué)長(zhǎng)向大家介紹一個(gè)機(jī)器視覺(jué)項(xiàng)目 基于機(jī)器視覺(jué)opencv的手勢(shì)檢測(cè) 手勢(shì)識(shí)別 算法 普通機(jī)器視覺(jué)手勢(shì)檢測(cè)的基本流程如下: 其中輪廓的提取,多邊形擬合曲線(xiàn)的求法,凸包集和凹陷集的求法都是采用opencv中自帶的函數(shù)。手勢(shì)數(shù)字的識(shí)別是利用凸包點(diǎn)以及凹陷點(diǎn)和手部中心

    2024年02月03日
    瀏覽(116)
  • 基于OpenCV的手勢(shì)1~5識(shí)別系統(tǒng)(源碼&環(huán)境部署)

    基于OpenCV的手勢(shì)1~5識(shí)別系統(tǒng)(源碼&環(huán)境部署)

    項(xiàng)目參考AAAI Association for the Advancement of Artificial Intelligence 研究背景與意義: 隨著計(jì)算機(jī)視覺(jué)技術(shù)的快速發(fā)展,手勢(shì)識(shí)別系統(tǒng)在人機(jī)交互、虛擬現(xiàn)實(shí)、智能監(jiān)控等領(lǐng)域得到了廣泛應(yīng)用。手勢(shì)識(shí)別系統(tǒng)可以通過(guò)分析人體的手勢(shì)動(dòng)作,實(shí)現(xiàn)與計(jì)算機(jī)的自然交互,提高用戶(hù)體驗(yàn)和操

    2024年02月04日
    瀏覽(23)
  • 基于mediapipe和opencv的手勢(shì)控制電腦鼠標(biāo)

    基于mediapipe和opencv的手勢(shì)控制電腦鼠標(biāo)

    通過(guò)我的上一篇文章,可以了解到mediapipe關(guān)于手部檢測(cè)的使用方法。這時(shí)我們就可以進(jìn)行一些更加炫酷的操作。這篇文章我就來(lái)講解一下如何用手勢(shì)來(lái)控制電腦鼠標(biāo)。 在開(kāi)始之前我們要介紹一個(gè)能夠操作電腦鼠標(biāo)的庫(kù)pyautogui,這里我簡(jiǎn)單介紹一下該庫(kù)的一些函數(shù),方便大家觀(guān)

    2024年02月07日
    瀏覽(20)
  • 課程設(shè)計(jì)——基于opencv的手勢(shì)識(shí)別【真】完整項(xiàng)目

    課程設(shè)計(jì)——基于opencv的手勢(shì)識(shí)別【真】完整項(xiàng)目

    一個(gè)簡(jiǎn)單的手勢(shì)識(shí)別,過(guò)程很簡(jiǎn)單,主要用到了 opencv 和 sklearn 和 tkinter 三個(gè)庫(kù),下面我將會(huì)展示整個(gè)項(xiàng)目的代碼和簡(jiǎn)要說(shuō)明,并且 下面將會(huì)是完整的已經(jīng)全部集成在三個(gè) .py 文件的代碼,你只需要將三個(gè)文件分別執(zhí)行就可以訓(xùn)練出自己的手勢(shì)識(shí)別模型 項(xiàng)目思想: 通過(guò)顏色尋

    2024年02月04日
    瀏覽(21)
  • Python實(shí)戰(zhàn)項(xiàng)目:手勢(shì)識(shí)別控制電腦音量

    Python實(shí)戰(zhàn)項(xiàng)目:手勢(shì)識(shí)別控制電腦音量

    今天給大家?guī)?lái)一個(gè)OpenCV的實(shí)戰(zhàn)小項(xiàng)目——手勢(shì)識(shí)別控制電腦音量 先上個(gè)效果圖: 通過(guò)大拇指和食指間的開(kāi)合距離來(lái)調(diào)節(jié)電腦音量,即通過(guò)識(shí)別大拇指與食指這兩個(gè)關(guān)鍵點(diǎn)之間的距離來(lái)控制電腦音量大小 技術(shù)要學(xué)會(huì)分享、交流,不建議閉門(mén)造車(chē)。一個(gè)人走的很快、一堆人可

    2024年02月09日
    瀏覽(17)

覺(jué)得文章有用就打賞一下文章作者

支付寶掃一掃打賞

博客贊助

微信掃一掃打賞

請(qǐng)作者喝杯咖啡吧~博客贊助

支付寶掃一掃領(lǐng)取紅包,優(yōu)惠每天領(lǐng)

二維碼1

領(lǐng)取紅包

二維碼2

領(lǐng)紅包