1 Introduction
An eye-monitoring demo implemented on Ubuntu 18.04, built with Qt 5.14, Dlib 19.24, and OpenCV 3.4.16.
The basic idea: OpenCV grabs frames from the camera; Dlib's 68-point facial landmark algorithm detects the landmarks of any face in each frame; the eye landmarks are then drawn onto the image with OpenCV's drawing functions; and the annotated frame is displayed in a Qt interface.
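Before walking through the Qt project, the whole pipeline can be summarized as a minimal console sketch (no Qt, no fatigue logic). This is not the project's actual code; only the model path ./model.dat matches what is used later in fatiguedetect.cpp, the rest is a plain OpenCV/Dlib loop:

#include <opencv2/opencv.hpp>
#include <dlib/opencv.h>
#include <dlib/image_processing.h>
#include <dlib/image_processing/frontal_face_detector.h>

int main()
{
    cv::VideoCapture cap(0);                                    // default camera
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor predictor;
    dlib::deserialize("./model.dat") >> predictor;              // 68-point landmark model

    cv::Mat frame;
    while (cap.read(frame))
    {
        dlib::cv_image<dlib::bgr_pixel> img(frame);             // wrap the cv::Mat for dlib
        for (const dlib::rectangle &face : detector(img))
        {
            dlib::full_object_detection shape = predictor(img, face);
            for (int i = 36; i <= 47; i++)                      // points 36-47 are the two eyes
                cv::circle(frame, cv::Point(shape.part(i).x(), shape.part(i).y()),
                           2, cv::Scalar(0, 255, 0), -1);
        }
        cv::imshow("eyes", frame);
        if (cv::waitKey(1) == 27)                               // ESC quits
            break;
    }
    return 0;
}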
2 Results
The eye-monitoring demo works as shown in the figure below: the eye regions in the frame are outlined with green lines.
3 Setting up Qt, OpenCV, and Dlib on Ubuntu 18.04
3.0 Installing Ubuntu 18.04 and basic setup
This part covers installing the Ubuntu 18.04 system and a few basic applications:
(1) creating the bootable installation media; (2) installing Ubuntu 18.04; (3) installing the Sogou input method; (4) installing Baidu Netdisk; (5) installing the Chrome browser; (6) installing and activating PyCharm; (7) installing VS Code.
The overall procedure follows this post: https://blog.csdn.net/wang_chao118/article/details/130146392?spm=1001.2014.3001.5502
3.1 Qt
Download the matching installer from the Qt archive at https://download.qt.io/archive/qt/. I used qt-opensource-linux-x64-5.14.2.run from https://download.qt.io/archive/qt/5.14/5.14.2/. After downloading, run the file from the command line and follow the installer's steps; just take note of where Qt gets installed.
3.2 OpenCV
Download the OpenCV source from GitHub and build it. I did not pick the newest release: I had used OpenCV 3.4.16 before, so I downloaded https://github.com/opencv/opencv/tree/3.4.16. There is nothing special about this choice; for small demos that only need basic operations such as reading images, any version works.
When configuring the build options in cmake-gui, tick the option to build opencv_world so that all modules are packed into a single library. That keeps the .pro file simple and saves you from looking up which module a given function lives in; the resulting libopencv_world.so.3.4.16 is only around 20-odd MB. (This is purely a convenience; you can also build the separate module libraries instead.)
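As a quick sanity check that the installed headers and the single libopencv_world.so link correctly, a small test program like the following can be compiled (a sketch; point the compiler's include and library paths at your install prefix and link against opencv_world):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;   // expect 3.4.16
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
    {
        std::cerr << "could not open camera 0" << std::endl;
        return 1;
    }
    cv::Mat frame;
    cap >> frame;
    std::cout << "captured a " << frame.cols << "x" << frame.rows << " frame" << std::endl;
    return 0;
}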
3.3 Dlib
Download the Dlib source from GitHub (https://github.com/davisking/dlib) and build it; the process is similar to building OpenCV.
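A similarly small check confirms that libdlib.a links and that the 68-point landmark model loads (again just a sketch; adjust "./model.dat" to wherever you keep the model file):

#include <dlib/image_processing.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <iostream>

int main()
{
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor predictor;
    try
    {
        dlib::deserialize("./model.dat") >> predictor;
    }
    catch (const dlib::serialization_error &e)
    {
        std::cerr << "failed to load landmark model: " << e.what() << std::endl;
        return 1;
    }
    std::cout << "dlib is linked; the model has " << predictor.num_parts()
              << " landmarks" << std::endl;                       // expect 68
    return 0;
}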
4 Core code
The file hierarchy of the project is shown in the figure below.
4.1 The .pro file
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += c++14
# You can make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0
SOURCES += \
    camera.cpp \
    fatiguedetect.cpp \
    main.cpp \
    widget.cpp

HEADERS += \
    camera.h \
    fatiguedetect.h \
    widget.h

FORMS += \
    widget.ui

# OpenCV 3.4.16 library paths
INCLUDEPATH += /home/ai/software/opencv3416_installed/include
LIBS += /home/ai/software/opencv3416_installed/lib/libopencv_world.so

# Dlib 19.24 library paths
INCLUDEPATH += /home/ai/software/dlib_installed/include
LIBS += /home/ai/software/dlib_installed/lib/libdlib.a
# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
DISTFILES +=
4.2 widget.cpp
#include "widget.h"
#include "ui_widget.h"
#include <iostream>
Widget::Widget(QWidget *parent)
    : QWidget(parent)
    , ui(new Ui::Widget)
{
    ui->setupUi(this);
    camera_thread = new Camera();                 // worker that grabs and processes camera frames
    camera_thread->set_cam_number(0);             // use camera index 0
    connect(camera_thread, SIGNAL(sendPicture(QImage)), this, SLOT(receive(QImage)));
    camera_thread->open_camera();
}

Widget::~Widget()
{
    delete ui;
}

void Widget::receive(QImage img)
{
    // scale each incoming frame to the label and display it
    QImage scaled_img = img.scaled(ui->label->width(), ui->label->height());
    ui->label->setPixmap(QPixmap::fromImage(scaled_img));
}
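The Camera class (camera.h / camera.cpp) is not reproduced in this post. Its role is to grab frames in a separate thread, run FatigueDetect on them, and emit each annotated frame as a QImage. Below is a minimal sketch of what such a class could look like; only sendPicture(QImage), set_cam_number() and open_camera() are taken from widget.cpp above, while the QThread base, the member names and the run() loop are my assumptions, not the author's code:

// camera.h -- a sketch only; the real class in the project may differ.
#ifndef CAMERA_H
#define CAMERA_H

#include <QThread>
#include <QImage>
#include <opencv2/opencv.hpp>
#include "fatiguedetect.h"

class Camera : public QThread
{
    Q_OBJECT
public:
    explicit Camera(QObject *parent = nullptr) : QThread(parent), cam_number(0) {}
    void set_cam_number(int n) { cam_number = n; }     // used in widget.cpp
    void open_camera() { start(); }                    // used in widget.cpp; starts the thread

signals:
    void sendPicture(QImage img);                      // connected to Widget::receive

protected:
    void run() override
    {
        cv::VideoCapture cap(cam_number);
        FatigueDetect detect_worker;                   // loads the landmark model
        cv::Mat frame;
        while (cap.read(frame))
        {
            cv::Mat annotated = detect_worker.detect(frame);      // draws the eye contours
            cv::cvtColor(annotated, annotated, cv::COLOR_BGR2RGB);
            QImage img(annotated.data, annotated.cols, annotated.rows,
                       static_cast<int>(annotated.step), QImage::Format_RGB888);
            emit sendPicture(img.copy());              // copy: the Mat buffer is reused
        }
    }

private:
    int cam_number;
};

#endif // CAMERA_H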
4.3 fatiguedetect.cpp
#include "fatiguedetect.h"
// Constants
const double close_standard = 0.45;
const int eye_l_len = 60;

// Class that judges the fatigue state
class JudgeCondition : public QObject {
    Q_OBJECT
public:
    explicit JudgeCondition(QObject *parent = nullptr) : QObject(parent), condition(0), heavy_flag(0) {}
    void judge(std::deque<int> &left_eye_l, std::deque<int> &right_eye_l)
    {
        int left_sum = std::accumulate(left_eye_l.begin(), left_eye_l.end(), 0);
        int right_sum = std::accumulate(right_eye_l.begin(), right_eye_l.end(), 0);
        if (left_sum == eye_l_len && right_sum == eye_l_len) {
            ++heavy_flag;
            if (heavy_flag > 100) {
                condition = 3;
            } else {
                condition = 2;
            }
        } else {
            heavy_flag = 0;
            condition = 1;
        }
        emit sendCondition(condition);
    }
signals:
    void sendCondition(int);
private:
    int condition;   // state: 0 = no image, 1 = awake, 2 = fatigued, 3 = severely fatigued
    int heavy_flag;  // severe-fatigue flag/counter
};
FatigueDetect::FatigueDetect(QObject *parent) : QObject(parent),
    noImageFlag(0),
    noImage(0)
{
    cout << "[INFO] loading facial landmark predictor..." << endl;
    // If run directly from the IDE, place model.dat in the build folder; for a release build, put it next to the executable.
    dlib::deserialize("./model.dat") >> predictor;
    detector = dlib::get_frontal_face_detector();
}
cv::Mat FatigueDetect::detect(cv::Mat frame)
{
    std::vector<std::vector<cv::Point>> pos;
    cv::Mat resized_frame;
    cv::resize(frame, resized_frame, cv::Size(720, 720 * frame.rows / frame.cols));
    cv::Mat gray;
    // cv::cvtColor(resized_frame, gray, cv::COLOR_BGR2GRAY);
    cv_image<bgr_pixel> dlib_img(resized_frame);
    // The detector must be given a dlib image type, not a cv::Mat; passing a cv::Mat is not flagged here but causes problems at compile time.
    std::vector<dlib::rectangle> rects = detector(dlib_img, 0);
    if (rects.size() == 0)
    {
        noImage = 1;
        emit noSignal(noImageFlag);
    }
    for (dlib::rectangle rect : rects)   // range-based for: for (type name : container)
    {
        noImage = 0;
        dlib::full_object_detection shape = predictor(dlib_img, rect);
        std::vector<cv::Point> pos_;
        for (int i = 36; i <= 41; i++)   // left-eye landmarks
        {
            cv::Point point(shape.part(i).x(), shape.part(i).y());
            pos_.push_back(point);
        }
        for (int i = 42; i <= 47; i++)   // right-eye landmarks
        {
            cv::Point point(shape.part(i).x(), shape.part(i).y());
            pos_.push_back(point);
        }
        pos.push_back(pos_);
        draw_point_and_line(pos_, resized_frame);
    }
    cv::Mat resized_back_frame;
    cv::resize(resized_frame, resized_back_frame, cv::Size(frame.cols, frame.rows));
    return resized_back_frame;
}
void FatigueDetect::draw_point_and_line(std::vector<cv::Point> pos_, cv::Mat &image)
{
    int linewidth = 1;
    // left eye: connect points 0-5 into a closed contour
    for (int i = 0; i < 5; i++)
    {
        cv::line(image, pos_[i], pos_[i + 1], cv::Scalar(0, 255, 0), linewidth);
    }
    cv::line(image, pos_[0], pos_[5], cv::Scalar(0, 255, 0), linewidth);
    // right eye: connect points 6-11 into a closed contour
    for (int i = 6; i < 11; i++)
    {
        cv::line(image, pos_[i], pos_[i + 1], cv::Scalar(0, 255, 0), linewidth);
    }
    cv::line(image, pos_[6], pos_[11], cv::Scalar(0, 255, 0), linewidth);
    // mark each landmark with a small filled dot
    for (cv::Point point : pos_)
    {
        cv::circle(image, point, linewidth, cv::Scalar(255, 0, 0), -1);
    }
}

// JudgeCondition declares Q_OBJECT inside this .cpp, so the moc output for this file must be included:
#include "fatiguedetect.moc"
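The listing above only detects and draws the eye landmarks; the code that fills the two deques consumed by JudgeCondition::judge() is not shown in the post. Here is a possible sketch of that missing piece, under the assumption that close_standard is a threshold on an eye-aspect-ratio-style openness measure and that each deque stores one 0/1 closed-flag per frame over the last eye_l_len (60) frames. The ratio and the helper names are mine, not the author's; the constants are repeated so the sketch stands alone:

#include <cmath>
#include <deque>
#include <vector>
#include <opencv2/opencv.hpp>

// Same values as at the top of fatiguedetect.cpp.
const double close_standard = 0.45;
const int eye_l_len = 60;

static double dist(const cv::Point &a, const cv::Point &b)
{
    return std::hypot(double(a.x - b.x), double(a.y - b.y));
}

// "Openness" of one eye from its six landmarks
// (dlib order: outer corner, two upper-lid points, inner corner, two lower-lid points).
static double eye_openness(const std::vector<cv::Point> &eye)
{
    double v1 = dist(eye[1], eye[5]);   // vertical lid distances
    double v2 = dist(eye[2], eye[4]);
    double h  = dist(eye[0], eye[3]);   // corner-to-corner width
    return (v1 + v2) / (2.0 * h);       // small value => eye closed
}

// Append one 0/1 closed-flag for this frame and keep only the last eye_l_len frames,
// which is what JudgeCondition::judge() appears to expect as input.
static void push_eye_state(std::deque<int> &history, const std::vector<cv::Point> &eye)
{
    history.push_back(eye_openness(eye) < close_standard ? 1 : 0);
    while ((int)history.size() > eye_l_len)
        history.pop_front();
}

In detect(), the first six points of pos_ would feed the left-eye history and the last six the right-eye history, after which judge() can be called on the two deques.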
5 Project download
The project files for this example can be downloaded here: https://download.csdn.net/download/wang_chao118/87700907.