
Head Pose Estimation with OpenCV and Dlib


Preface


  • Ran into this at work; a quick write-up
  • This post walks through a simple head pose estimation demo built on OpenCV and Dlib
  • Corrections from readers are welcome

The misty rain of Mount Lu, the tidal bore of the Zhejiang River: before you have seen them, a thousand regrets never fade. Once you have gone and come back, nothing was special after all; just the misty rain of Mount Lu and the tidal bore of Zhejiang. ---- Su Shi, "Misty Rain of Mount Lu"


https://github.com/LIRUILONGS/Head-posture-detection-dlib-opencv-.git

實(shí)驗(yàn)項(xiàng)目以上傳,只需 git 克隆,安裝需要的 pytohn 包,就可以開始使用了,但是需要說明的是 Dlib 的基于 HOG特征和SVM分類器的人臉檢測器很一般,很多臉都檢測不到,實(shí)際情況中可以考慮使用深度學(xué)習(xí)模型來做關(guān)鍵點(diǎn)檢測,然后評估姿態(tài)。可以查看文章末尾大佬的開源項(xiàng)目

實(shí)現(xiàn)效果

Demo

Original image: [image]

After landmark annotation: [image]

Pose annotation: [image]

Yaw, Pitch, Roll values for each pose: [image]

Steps

三個(gè)主要步驟

Face detection

Face detection: obtain a detector with dlib.get_frontal_face_detector() and run it on the input image; when several faces are detected, the one with the largest area is kept.

dlib.get_frontal_face_detector() is a function in the dlib library that returns a face detector based on HOG features and an SVM classifier. The returned object can then be used to detect faces in an image.

Specifically, HOG (Histogram of Oriented Gradients) is a feature descriptor widely used in image recognition, and SVM (Support Vector Machine) is a common classifier. Combining HOG features with an SVM classifier yields an effective face detector.

To use it, call dlib.get_frontal_face_detector() once to obtain the detector object, then pass the image to that object to get the detected face rectangles. A quick demo:

import dlib
import cv2

# Read the image (OpenCV loads it in BGR order)
img = cv2.imread('image.jpg')

# Get the HOG+SVM face detector
detector = dlib.get_frontal_face_detector()

# Detect faces in the image
faces = detector(img)

# Print the number of faces detected
print("Number of faces detected:", len(faces))

面部特征點(diǎn)檢測

面部特征點(diǎn)檢測,利用預(yù)訓(xùn)練模型 shape_predictor_68_face_landmarks.dat 以人臉圖像為輸入,輸出68個(gè)人臉特征點(diǎn)。

shape_predictor_68_face_landmarks.dat is dlib's facial landmark detection model. It relies on the HOG+SVM face detector to locate faces in the image and uses a regression algorithm to predict the positions of 68 keypoints covering the eyes, nose, mouth, and jawline. These points can be used for face recognition, expression recognition, pose estimation, and other applications.

這個(gè)模型文件可以在dlib的官方網(wǎng)站上下載。在使用它之前,需要安裝dlib庫并將模型文件加載到程序中。

predictor = dlib.shape_predictor(r".\shape_predictor_68_face_landmarks.dat")
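
A minimal sketch of running the landmark predictor end to end (it assumes the same image.jpg and model path as the demos in this post):

import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(r".\shape_predictor_68_face_landmarks.dat")

img = cv2.imread('image.jpg')
faces = detector(img, 0)

if len(faces) > 0:
    # Run the predictor on the first detected face rectangle
    shape = predictor(img, faces[0])
    # shape.part(i) gives the i-th landmark; index 30 is the nose tip
    for i in range(shape.num_parts):
        p = shape.part(i)
        print("landmark {}: ({}, {})".format(i, p.x, p.y))
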
Pose estimation

姿勢估計(jì)。在獲得 68 個(gè)面部特征點(diǎn)后,選擇部分特征點(diǎn),通過 PnP算法計(jì)算姿勢 Yaw、Pitch、Roll 度數(shù)

    (success, rotation_vector, translation_vector) = cv2.solvePnP(model_points, image_points, camera_matrix,
                                                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

Yaw, Pitch, and Roll describe the rotation of an object or camera in 3D space and are standard terms in pose estimation and attitude control.

  • Yaw (turning left/right): rotation about the vertical axis, also called the heading angle. In the convention used by this demo it is the rotation about the y axis; its sign tells whether the head is turned left or right.
  • Pitch (looking up/down): rotation about the transverse axis, also called the elevation angle. Here it is the rotation about the x axis; its sign tells whether the head is tilted up or down.
  • Roll (tilting sideways): rotation about the longitudinal axis, also called the bank angle. Here it is the rotation about the z axis; its sign tells toward which shoulder the head leans.

這三個(gè)角度通常以歐拉角的形式表示,可以用于描述物體或相機(jī)的姿態(tài)信息。在計(jì)算機(jī)視覺中,常用于人臉識別、動(dòng)作捕捉、機(jī)器人控制等應(yīng)用場景。

Full Demo Code

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
@File    :   face_ypr_demo.py
@Time    :   2023/06/05 21:32:45
@Author  :   Li Ruilong
@Version :   1.0
@Contact :   liruilonger@gmail.com
@Desc    :   Head pose estimation from the 68 facial landmarks
"""

# here put the import lib

import cv2
import numpy as np
import dlib
import math
import uuid

# 頭部姿態(tài)檢測(dlib+opencv)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(r".\shape_predictor_68_face_landmarks.dat")
POINTS_NUM_LANDMARK = 68


# shape_predictor_68_face_landmarks.dat is a pretrained facial landmark model that locates 68 keypoints (eyes, nose, mouth, ...). It can be used for face recognition, expression analysis, head pose estimation, and more.
# It is provided by the dlib library and can be downloaded from the official dlib website.

# Select the largest detected face
def _largest_face(dets):
    """
    @Time    :   2023/06/05 21:30:37
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   從一個(gè)由 dlib 庫檢測到的人臉框列表中,找到最大的人臉框,并返回該框在列表中的索
                如果只有一個(gè)人臉,直接返回
                 Args:
                   dets: 一個(gè)由 `dlib.rectangle` 類型的對象組成的列表,每個(gè)對象表示一個(gè)人臉框
                 Returns:
                   人臉?biāo)饕?    """
    # 如果列表長度為1,則直接返回
    if len(dets) == 1:
        return 0
    # 計(jì)算每個(gè)人臉框的面積
    face_areas = [(det.right() - det.left()) * (det.bottom() - det.top()) for det in dets]
    import heapq
    # 找到面積最大的人臉框的索引
    largest_area = face_areas[0]
    largest_index = 0
    for index in range(1, len(dets)):
        if face_areas[index] > largest_area:
            largest_index = index
            largest_area = face_areas[index]
    # 打印最大人臉框的索引和總?cè)四様?shù)
    print("largest_face index is {} in {} faces".format(largest_index, len(dets)))

    return largest_index


def get_image_points_from_landmark_shape(landmark_shape):
    """
    @Time    :   2023/06/05 22:30:02
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   從dlib的檢測結(jié)果抽取姿態(tài)估計(jì)需要的點(diǎn)坐標(biāo)
                 Args:
                   landmark_shape:  所有的位置點(diǎn)
                 Returns:
                   void
    """

    if landmark_shape.num_parts != POINTS_NUM_LANDMARK:
        print("ERROR:landmark_shape.num_parts-{}".format(landmark_shape.num_parts))
        return -1, None

    # 2D image points. If you change the image, update this vector accordingly

    image_points = np.array([
        (landmark_shape.part(17).x, landmark_shape.part(17).y),  # 17 left brow left corner
        (landmark_shape.part(21).x, landmark_shape.part(21).y),  # 21 left brow right corner
        (landmark_shape.part(22).x, landmark_shape.part(22).y),  # 22 right brow left corner
        (landmark_shape.part(26).x, landmark_shape.part(26).y),  # 26 right brow right corner
        (landmark_shape.part(36).x, landmark_shape.part(36).y),  # 36 left eye left corner
        (landmark_shape.part(39).x, landmark_shape.part(39).y),  # 39 left eye right corner
        (landmark_shape.part(42).x, landmark_shape.part(42).y),  # 42 right eye left corner
        (landmark_shape.part(45).x, landmark_shape.part(45).y),  # 45 right eye right corner
        (landmark_shape.part(31).x, landmark_shape.part(31).y),  # 31 nose left corner
        (landmark_shape.part(35).x, landmark_shape.part(35).y),  # 35 nose right corner
        (landmark_shape.part(48).x, landmark_shape.part(48).y),  # 48 mouth left corner
        (landmark_shape.part(54).x, landmark_shape.part(54).y),  # 54 mouth right corner
        (landmark_shape.part(57).x, landmark_shape.part(57).y),  # 57 mouth central bottom corner
        (landmark_shape.part(8).x, landmark_shape.part(8).y),  # 8 chin corner
    ], dtype="double")
    return 0, image_points


def get_image_points(img):
    """
    @Time    :   2023/06/05 22:30:43
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   用dlib檢測關(guān)鍵點(diǎn),返回姿態(tài)估計(jì)需要的幾個(gè)點(diǎn)坐標(biāo)
                 Args:
                   
                 Returns:
                   void
    """

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert the image to grayscale

    dets = detector(gray, 0)  # run the HOG detector on the grayscale image

    if 0 == len(dets):
        print("ERROR: found no face")
        return -1, None
    largest_index = _largest_face(dets)
    face_rectangle = dets[largest_index]

    landmark_shape = predictor(img, face_rectangle)
    draw = img.copy()  # work on a copy of the function argument so the input stays untouched
    # Draw all 68 landmarks in green
    for i in range(landmark_shape.num_parts):
        cv2.circle(draw, (landmark_shape.part(i).x, landmark_shape.part(i).y), 2, (0, 255, 0), -1)

    # 部分關(guān)鍵點(diǎn)特殊標(biāo)記
    cv2.circle(draw, (landmark_shape.part(17).x, landmark_shape.part(17).y), 2, (0, 165, 255),
               -1)  # 17 left brow left corner
    cv2.circle(draw, (landmark_shape.part(21).x, landmark_shape.part(21).y), 2, (0, 165, 255),
               -1)  # 21 left brow right corner
    cv2.circle(draw, (landmark_shape.part(22).x, landmark_shape.part(22).y), 2, (0, 165, 255),
               -1)  # 22 right brow left corner
    cv2.circle(draw, (landmark_shape.part(26).x, landmark_shape.part(26).y), 2, (0, 165, 255),
               -1)  # 26 right brow right corner
    cv2.circle(draw, (landmark_shape.part(36).x, landmark_shape.part(36).y), 2, (0, 165, 255),
               -1)  # 36 left eye left corner
    cv2.circle(draw, (landmark_shape.part(39).x, landmark_shape.part(39).y), 2, (0, 165, 255),
               -1)  # 39 left eye right corner
    cv2.circle(draw, (landmark_shape.part(42).x, landmark_shape.part(42).y), 2, (0, 165, 255),
               -1)  # 42 right eye left corner
    cv2.circle(draw, (landmark_shape.part(45).x, landmark_shape.part(45).y), 2, (0, 165, 255),
               -1)  # 45 right eye right corner
    cv2.circle(draw, (landmark_shape.part(31).x, landmark_shape.part(31).y), 2, (0, 165, 255),
               -1)  # 31 nose left corner
    cv2.circle(draw, (landmark_shape.part(35).x, landmark_shape.part(35).y), 2, (0, 165, 255),
               -1)  # 35 nose right corner
    cv2.circle(draw, (landmark_shape.part(48).x, landmark_shape.part(48).y), 2, (0, 165, 255),
               -1)  # 48 mouth left corner
    cv2.circle(draw, (landmark_shape.part(54).x, landmark_shape.part(54).y), 2, (0, 165, 255),
               -1)  # 54 mouth right corner
    cv2.circle(draw, (landmark_shape.part(57).x, landmark_shape.part(57).y), 2, (0, 165, 255),
               -1)  # 57 mouth central bottom corner
    cv2.circle(draw, (landmark_shape.part(8).x, landmark_shape.part(8).y), 2, (0, 165, 255), -1)

    # 保存關(guān)鍵點(diǎn)標(biāo)記后的圖片
    cv2.imwrite('new_' + "KeyPointDetection.jpg", draw)

    return get_image_points_from_landmark_shape(landmark_shape)


def get_pose_estimation(img_size, image_points):
    """
    @Time    :   2023/06/05 22:31:31
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   獲取旋轉(zhuǎn)向量和平移向量
                 Args:
                   
                 Returns:
                   void
    """

    # 3D model points.
    model_points = np.array([
        (6.825897, 6.760612, 4.402142),  # 33 left brow left corner
        (1.330353, 7.122144, 6.903745),  # 29 left brow right corner
        (-1.330353, 7.122144, 6.903745),  # 34 right brow left corner
        (-6.825897, 6.760612, 4.402142),  # 38 right brow right corner
        (5.311432, 5.485328, 3.987654),  # 13 left eye left corner
        (1.789930, 5.393625, 4.413414),  # 17 left eye right corner
        (-1.789930, 5.393625, 4.413414),  # 25 right eye left corner
        (-5.311432, 5.485328, 3.987654),  # 21 right eye right corner
        (2.005628, 1.409845, 6.165652),  # 55 nose left corner
        (-2.005628, 1.409845, 6.165652),  # 49 nose right corner
        (2.774015, -2.080775, 5.048531),  # 43 mouth left corner
        (-2.774015, -2.080775, 5.048531),  # 39 mouth right corner
        (0.000000, -3.116408, 6.097667),  # 45 mouth central bottom corner
        (0.000000, -7.415691, 4.070434)  # 6 chin corner
    ])
    # Camera intrinsics: approximate the focal length by the image width
    # and place the principal point at the image center

    focal_length = img_size[1]
    center = (img_size[1] / 2, img_size[0] / 2)
    camera_matrix = np.array(
        [[focal_length, 0, center[0]],
         [0, focal_length, center[1]],
         [0, 0, 1]], dtype="double"
    )

    dist_coeffs = np.array([7.0834633684407095e-002, 6.9140193737175351e-002, 0.0, 0.0, -1.3073460323689292e+000],
                           dtype="double")  # sample distortion coefficients; use zeros to assume no lens distortion

    (success, rotation_vector, translation_vector) = cv2.solvePnP(model_points, image_points, camera_matrix,
                                                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

    # print("Rotation Vector:\n {}".format(rotation_vector))
    # print("Translation Vector:\n {}".format(translation_vector))
    return success, rotation_vector, translation_vector, camera_matrix, dist_coeffs


def draw_annotation_box(image, rotation_vector, translation_vector, camera_matrix, dist_coeefs, color=(0, 255, 0),
                        line_width=2):
    """
    @Time    :   2023/06/05 22:09:14
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   標(biāo)記一個(gè)人臉朝向的3D框
                 Args:
                   
                 Returns:
                   void
    """

    """Draw a 3D box as annotation of pose"""
    point_3d = []
    rear_size = 10
    rear_depth = 0
    point_3d.append((-rear_size, -rear_size, rear_depth))
    point_3d.append((-rear_size, rear_size, rear_depth))
    point_3d.append((rear_size, rear_size, rear_depth))
    point_3d.append((rear_size, -rear_size, rear_depth))
    point_3d.append((-rear_size, -rear_size, rear_depth))

    front_size = 10
    # depth of the front face of the box (its distance from the rear face)
    front_depth = 10
    point_3d.append((-front_size, -front_size, front_depth))
    point_3d.append((-front_size, front_size, front_depth))
    point_3d.append((front_size, front_size, front_depth))
    point_3d.append((front_size, -front_size, front_depth))
    point_3d.append((-front_size, -front_size, front_depth))
    point_3d = np.array(point_3d, dtype=np.float32).reshape(-1, 3)

    # Map to 2d image points
    (point_2d, _) = cv2.projectPoints(point_3d,
                                      rotation_vector,
                                      translation_vector,
                                      camera_matrix,
                                      dist_coeefs)
    point_2d = np.int32(point_2d.reshape(-1, 2))

    # Draw all the lines
    cv2.polylines(image, [point_2d], True, color, line_width, cv2.LINE_AA)
    cv2.line(image, tuple(point_2d[1]), tuple(
        point_2d[6]), color, line_width, cv2.LINE_AA)
    cv2.line(image, tuple(point_2d[2]), tuple(
        point_2d[7]), color, line_width, cv2.LINE_AA)
    cv2.line(image, tuple(point_2d[3]), tuple(
        point_2d[8]), color, line_width, cv2.LINE_AA)


# 從旋轉(zhuǎn)向量轉(zhuǎn)換為歐拉角
def get_euler_angle(rotation_vector):
    """
    @Time    :   2023/06/05 22:31:52
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   從旋轉(zhuǎn)向量轉(zhuǎn)換為歐拉角
                 Args:
                   
                 Returns:
                   void
    """

    # The L2 norm of the rotation vector is the rotation angle theta
    theta = cv2.norm(rotation_vector, cv2.NORM_L2)

    # Convert the axis-angle representation to a quaternion (w, x, y, z)
    w = math.cos(theta / 2)
    x = math.sin(theta / 2) * rotation_vector[0][0] / theta
    y = math.sin(theta / 2) * rotation_vector[1][0] / theta
    z = math.sin(theta / 2) * rotation_vector[2][0] / theta

    ysqr = y * y
    # pitch (x-axis rotation)
    t0 = 2.0 * (w * x + y * z)
    t1 = 1.0 - 2.0 * (x * x + ysqr)

    # print('t0:{}, t1:{}'.format(t0, t1))
    pitch = math.atan2(t0, t1)

    # yaw (y-axis rotation); clamp t2 to [-1, 1] so asin stays well-defined
    t2 = 2.0 * (w * y - z * x)
    if t2 > 1.0:
        t2 = 1.0
    if t2 < -1.0:
        t2 = -1.0
    yaw = math.asin(t2)

    # roll (z-axis rotation)
    t3 = 2.0 * (w * z + x * y)
    t4 = 1.0 - 2.0 * (ysqr + z * z)
    roll = math.atan2(t3, t4)

    print('pitch:{}, yaw:{}, roll:{}'.format(pitch, yaw, roll))

    # 單位轉(zhuǎn)換:將弧度轉(zhuǎn)換為度
    pitch_degree = int((pitch / math.pi) * 180)
    yaw_degree = int((yaw / math.pi) * 180)
    roll_degree = int((roll / math.pi) * 180)

    return 0, pitch, yaw, roll, pitch_degree, yaw_degree, roll_degree


def get_pose_estimation_in_euler_angle(landmark_shape, im_size):
    try:
        ret, image_points = get_image_points_from_landmark_shape(landmark_shape)
        if ret != 0:
            print('get_image_points failed')
            return -1, None, None, None

        ret, rotation_vector, translation_vector, camera_matrix, dist_coeffs = get_pose_estimation(im_size,
                                                                                                   image_points)
        if ret != True:
            print('get_pose_estimation failed')
            return -1, None, None, None

        # get_euler_angle returns radians and degrees; only the radians are used here
        ret, pitch, yaw, roll, _, _, _ = get_euler_angle(rotation_vector)
        if ret != 0:
            print('get_euler_angle failed')
            return -1, None, None, None

        euler_angle_str = 'Pitch:{}, Yaw:{}, Roll:{}'.format(pitch, yaw, roll)
        print(euler_angle_str)
        return 0, pitch, yaw, roll

    except Exception as e:
        print('get_pose_estimation_in_euler_angle exception:{}'.format(e))
        return -1, None, None, None


def build_img_text_marge(img_, text, height):
    """
    @Time    :   2023/06/01 05:29:09
    @Author  :   liruilonger@gmail.com
    @Version :   1.0
    @Desc    :   Render the given text into an image and stack it below img_
                 Args:
                   img_: source image; text: text to render; height: text panel height
                 Returns:
                   the combined montage image
    """
    from PIL import Image, ImageDraw, ImageFont
    import imutils

    # Panel size and background color
    width = img_.shape[1]
    background_color = (255, 255, 255)

    # Font file, size, and color (arial.ttf must be available on the system)
    font_path = 'arial.ttf'
    font_size = 26
    font_color = (0, 0, 0)

    # Create a blank panel and a drawing context
    image = Image.new('RGB', (width, height), background_color)
    draw = ImageDraw.Draw(image)

    # Load the font
    font = ImageFont.truetype(font_path, font_size)

    # Measure the text so it can be centered
    # (ImageDraw.textsize was removed in Pillow 10; fall back to multiline_textbbox)
    try:
        text_width, text_height = draw.textsize(text, font)
    except AttributeError:
        left, top, right, bottom = draw.multiline_textbbox((0, 0), text, font=font)
        text_width, text_height = right - left, bottom - top
    text_x = (width - text_width) // 2
    text_y = (height - text_height) // 2
    draw.text((text_x, text_y), text, font=font, fill=font_color)

    # Convert the Pillow image to an OpenCV (BGR) image
    image_cv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)

    # Stack the original image and the text panel vertically
    montage_size = (width, img_.shape[0])
    montages = imutils.build_montages([img_, image_cv], montage_size, (1, 2))

    return montages[0]


if __name__ == '__main__':
    from imutils import paths

    # for imagePath in paths.list_images("W:\\python_code\\deepface\\huge_1.jpg"):
    for imagePath in range(1):  # placeholder loop: process the hard-coded image once
        print(f"Processing image: {imagePath}")
        # Read the image
        im = cv2.imread("image.jpg")
        size = im.shape
        # Downscale large images
        if size[0] > 700:
            h = size[0] / 3
            w = size[1] / 3
            # If the image is taller than 700 px, shrink its height and width to one
            # third using bicubic interpolation, then record the new size
            im = cv2.resize(im, (int(w), int(h)), interpolation=cv2.INTER_CUBIC)
            size = im.shape
        # Get the 2D landmark coordinates
        ret, image_points = get_image_points(im)
        if ret != 0:
            print('get_image_points failed')
            continue

        ret, rotation_vector, translation_vector, camera_matrix, dist_coeffs = get_pose_estimation(size, image_points)

        if ret != True:
            print('get_pose_estimation failed')
            continue
        draw_annotation_box(im, rotation_vector, translation_vector, camera_matrix, dist_coeffs)
        cv2.imwrite('new_' + "draw_annotation_box.jpg", im)

        ret, pitch, yaw, roll, pitch_degree, yaw_degree, roll_degree = get_euler_angle(rotation_vector)

        draw = im.copy()
        # Yaw:
        if yaw_degree < 0:
            output_yaw = "left: " + str(abs(yaw_degree)) + " degrees"
        elif yaw_degree > 0:
            output_yaw = "right: " + str(abs(yaw_degree)) + " degrees"
        else:
            output_yaw = "No left or right"
        print(output_yaw)

        # Pitch:
        if pitch_degree > 0:
            output_pitch = "down: " + str(abs(pitch_degree)) + " degrees"
        elif pitch_degree < 0:
            output_pitch = "up: " + str(abs(pitch_degree)) + " degrees"
        else:
            output_pitch = "No downwards or upwards"
        print(output_pitch)

        # Roll:
        if roll_degree < 0:
            output_roll = "bends to the right: " + str(abs(roll_degree)) + " degrees"
        elif roll_degree > 0:
            output_roll = "bends to the left: " + str(abs(roll_degree)) + " degrees"
        else:
            output_roll = "No bend right or left."
        print(output_roll)

        # Initial status:
        if abs(yaw) < 0.00001 and abs(pitch) < 0.00001 and abs(roll) < 0.00001:
            cv2.putText(draw, "Initial status", (20, 40), cv2.FONT_HERSHEY_SIMPLEX, .5, (0, 255, 0))
            print("Initial status")

        # 姿態(tài)檢測完的數(shù)據(jù)寫在對應(yīng)的照片
        imgss = build_img_text_marge(im, output_yaw + "\n" + output_pitch + "\n" + output_roll, 200)
        cv2.imwrite('new_' + str(uuid.uuid4()).replace('-', '') + ".jpg", imgss)
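
To try the full demo, place image.jpg and shape_predictor_68_face_landmarks.dat next to the script (the file names are whatever you use locally) and run it:

python face_ypr_demo.py

The script writes new_KeyPointDetection.jpg (the landmark annotations), new_draw_annotation_box.jpg (the 3D pose box), and a new_<uuid>.jpg montage with the Yaw/Pitch/Roll readout into the working directory.
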

博文部分內(nèi)容參考

Copyright of the linked content belongs to the original authors; please let me know if anything infringes. This is an open-source project: if you find it useful, don't hold back on the stars.


https://blog.csdn.net/zhang2gongzi/article/details/124520896

https://github.com/JuneoXIE/

https://github.com/yinguobing/head-pose-estimation


© 2018-2023 liruilonger@gmail.com, all rights reserved. Licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike).

到了這里,關(guān)于基于OpenCV 和 Dlib 進(jìn)行頭部姿態(tài)估計(jì)的文章就介紹完了。如果您還想了解更多內(nèi)容,請?jiān)谟疑辖撬阉鱐OY模板網(wǎng)以前的文章或繼續(xù)瀏覽下面的相關(guān)文章,希望大家以后多多支持TOY模板網(wǎng)!

本文來自互聯(lián)網(wǎng)用戶投稿,該文觀點(diǎn)僅代表作者本人,不代表本站立場。本站僅提供信息存儲(chǔ)空間服務(wù),不擁有所有權(quán),不承擔(dān)相關(guān)法律責(zé)任。如若轉(zhuǎn)載,請注明出處: 如若內(nèi)容造成侵權(quán)/違法違規(guī)/事實(shí)不符,請點(diǎn)擊違法舉報(bào)進(jìn)行投訴反饋,一經(jīng)查實(shí),立即刪除!

領(lǐng)支付寶紅包贊助服務(wù)器費(fèi)用

相關(guān)文章

  • OpenCV實(shí)戰(zhàn)(24)——相機(jī)姿態(tài)估計(jì)

    校準(zhǔn)相機(jī)后,就可以將捕獲的圖像與物理世界聯(lián)系起來。如果物體的 3D 結(jié)構(gòu)是已知的,那么就可以預(yù)測物體如何投影到相機(jī)的傳感器上,圖像形成的過程由投影方程描述。當(dāng)方程的大部分項(xiàng)已知時(shí),就可以通過觀察一些圖像來推斷其他元素 ( 2D 或 3D ) 的值。相機(jī)姿態(tài)估計(jì)就是

    2024年02月05日
    瀏覽(30)
  • Opencv之Aruco碼的檢測和姿態(tài)估計(jì)

    Opencv之Aruco碼的檢測和姿態(tài)估計(jì)

    Aruco碼是由寬黑色邊框和確定其標(biāo)識符(id)的內(nèi)部二進(jìn)制矩陣組成的正方形標(biāo)記。它的黑色邊框有助于其在圖像中的快速檢測,內(nèi)部二進(jìn)制編碼用于識別標(biāo)記和提供錯(cuò)誤檢測和糾正。單個(gè)aruco 標(biāo)記就可以提供足夠的對應(yīng)關(guān)系,例如有四個(gè)明顯的角點(diǎn)及內(nèi)部的二進(jìn)制編碼,所以

    2024年02月02日
    瀏覽(25)
  • OpenCV與AI深度學(xué)習(xí) | 使用單相機(jī)對已知物體進(jìn)行3D位置估計(jì)

    OpenCV與AI深度學(xué)習(xí) | 使用單相機(jī)對已知物體進(jìn)行3D位置估計(jì)

    本文來源公眾號“ OpenCV與AI深度學(xué)習(xí) ”,僅用于學(xué)術(shù)分享,侵權(quán)刪,干貨滿滿。 原文鏈接:使用單相機(jī)對已知物體進(jìn)行3D位置估計(jì) ????????本文主要介紹如何使用單個(gè)相機(jī)對已知物體進(jìn)行3D位置估計(jì),并給出實(shí)現(xiàn)步驟。?? ????????在計(jì)算機(jī)視覺中,有很多方法可以找

    2024年03月15日
    瀏覽(26)
  • Python+OpenCV+OpenPose實(shí)現(xiàn)人體姿態(tài)估計(jì)(人體關(guān)鍵點(diǎn)檢測)

    Python+OpenCV+OpenPose實(shí)現(xiàn)人體姿態(tài)估計(jì)(人體關(guān)鍵點(diǎn)檢測)

    1、人體姿態(tài)估計(jì)簡介 2、人體姿態(tài)估計(jì)數(shù)據(jù)集 3、OpenPose庫 4、實(shí)現(xiàn)原理 5、實(shí)現(xiàn)神經(jīng)網(wǎng)絡(luò) 6、實(shí)現(xiàn)代碼 人體姿態(tài)估計(jì)(Human Posture Estimation),是通過將圖片中已檢測到的人體關(guān)鍵點(diǎn)正確的聯(lián)系起來,從而估計(jì)人體姿態(tài)。 人體關(guān)鍵點(diǎn)通常對應(yīng)人體上有一定自由度的關(guān)節(jié),比如頸、

    2024年02月04日
    瀏覽(23)
  • 簡要介紹 | 基于深度學(xué)習(xí)的姿態(tài)估計(jì)技術(shù)

    簡要介紹 | 基于深度學(xué)習(xí)的姿態(tài)估計(jì)技術(shù)

    注1:本文系“簡要介紹”系列之一,僅從概念上對基于深度學(xué)習(xí)的姿態(tài)估計(jì)技術(shù)進(jìn)行非常簡要的介紹,不適合用于深入和詳細(xì)的了解。 注2:\\\"簡要介紹\\\"系列的所有創(chuàng)作均使用了AIGC工具輔助 姿態(tài)估計(jì) 是計(jì)算機(jī)視覺領(lǐng)域的一個(gè)重要研究方向,它主要關(guān)注如何從圖像或視頻中提

    2024年02月09日
    瀏覽(25)
  • 基于 pytorch-openpose 實(shí)現(xiàn) “多目標(biāo)” 人體姿態(tài)估計(jì)

    基于 pytorch-openpose 實(shí)現(xiàn) “多目標(biāo)” 人體姿態(tài)估計(jì)

    還記得上次通過 MediaPipe 估計(jì)人體姿態(tài)關(guān)鍵點(diǎn)驅(qū)動(dòng) 3D 角色模型,雖然節(jié)省了動(dòng)作 K 幀時(shí)間,但是網(wǎng)上還有一種似乎更方便的方法。MagicAnimate 就是其一,說是只要提供一張人物圖片和一段動(dòng)作視頻 (舞蹈武術(shù)等),就可以完成圖片人物轉(zhuǎn)視頻。 于是我就去官網(wǎng)體驗(yàn)了一下,發(fā)現(xiàn)

    2024年01月25日
    瀏覽(17)
  • 基于EKF的四旋翼無人機(jī)姿態(tài)估計(jì)matlab仿真

    基于EKF的四旋翼無人機(jī)姿態(tài)估計(jì)matlab仿真

    目錄 1.算法描述 2.仿真效果預(yù)覽 3.MATLAB核心程序 4.完整MATLAB ? ? ? ?卡爾曼濾波是一種高效率的遞歸濾波器(自回歸濾波器),它能夠從一系列的不完全包含噪聲的測量中,估計(jì)動(dòng)態(tài)系統(tǒng)的狀態(tài)。這種濾波方法以它的發(fā)明者魯?shù)婪颉·卡爾曼(Rudolf E. Kalman)命名??柭畛跆?/p>

    2023年04月23日
    瀏覽(358)
  • PoseFormer:基于視頻的2D-to-3D單人姿態(tài)估計(jì)

    PoseFormer:基于視頻的2D-to-3D單人姿態(tài)估計(jì)

    論文鏈接:3D Human Pose Estimation with Spatial and Temporal Transformers 論文代碼:https://github.com/zczcwh/PoseFormer 論文出處:2021 ICCV 論文單位:University of Central Florida, USA Transformer架構(gòu)已經(jīng)成為自然語言處理中的首選模型,現(xiàn)在正被引入到計(jì)算機(jī)視覺任務(wù)中,例如圖像分類、對象檢測和語義

    2024年02月04日
    瀏覽(71)
  • 基于OpenCV和Dlib的深度學(xué)習(xí)人臉識別技術(shù)實(shí)踐與應(yīng)用

    基于OpenCV和Dlib的深度學(xué)習(xí)人臉識別技術(shù)實(shí)踐與應(yīng)用

    計(jì)算機(jī)視覺技術(shù)在當(dāng)前人工智能發(fā)展進(jìn)程中已然達(dá)到較高成熟度,一系列基礎(chǔ)算法與應(yīng)用場景獲得廣泛實(shí)踐與驗(yàn)證。在算法層面,圖像處理、目標(biāo)檢測、語義分割等多個(gè)領(lǐng)域的技術(shù)不斷突破,準(zhǔn)確率與效率持續(xù)提升。在應(yīng)用上,人臉識別、車牌識別、醫(yī)學(xué)圖像分析等已步入商業(yè)化應(yīng)

    2024年02月03日
    瀏覽(24)
  • 人體姿態(tài)估計(jì)和手部姿態(tài)估計(jì)任務(wù)中神經(jīng)網(wǎng)絡(luò)的選擇

    一、 人體姿態(tài)估計(jì) 任務(wù)適合使用 卷積神經(jīng)網(wǎng)絡(luò)(CNN) 來解決。 ????????人體姿態(tài)估計(jì)任務(wù)的目標(biāo)是從給定的圖像或視頻中推斷出人體的關(guān)節(jié)位置和姿勢。這是一個(gè)具有挑戰(zhàn)性的計(jì)算機(jī)視覺任務(wù),而CNN在處理圖像數(shù)據(jù)方面表現(xiàn)出色。 ????????使用CNN進(jìn)行人體姿態(tài)估計(jì)

    2024年02月05日
    瀏覽(25)

覺得文章有用就打賞一下文章作者

支付寶掃一掃打賞

博客贊助

微信掃一掃打賞

請作者喝杯咖啡吧~博客贊助

支付寶掃一掃領(lǐng)取紅包,優(yōu)惠每天領(lǐng)

二維碼1

領(lǐng)取紅包

二維碼2

領(lǐng)紅包