Building a Fatigue Detection System with OpenCV (2)


Table of Contents

  • Preamble
  • Implementation Steps
  • Core Algorithm
  • User Interface
  • UI Code
  • Detection Results
  • Source Code

Preamble

I've recently noticed my eyesight deteriorating badly, probably from overusing my eyes, so I wanted to build something that detects eye fatigue. The inspiration comes from Tesla's driver fatigue detection system.
The result looks like this:
(screenshot)

Implementation Steps

  1. Implement the core algorithm
  2. Build the user interface
  3. Wire up the interaction logic

Core Algorithm

How the fatigue detection works:
dlib's face detection model is used to capture the 68 facial landmarks.
Reference: https://blog.csdn.net/monk96/article/details/127751414?spm=1001.2014.3001.5502

Get the positions of the eye and mouth landmarks:
(landmark diagrams)

Eye fatigue formula
Computed with Euclidean distances between the six eye landmarks P1..P6:

EAR = (||P2 - P6|| + ||P3 - P5||) / (2 * ||P1 - P4||)

This is the ratio of the eye's vertical opening to its horizontal width. We then set a threshold, say 0.3, and a frame count, say 3: if the ratio stays below the threshold for that many consecutive frames, the eye is counted as closed and the closure total is incremented. If the number of closures exceeds a limit (6), the state is judged as fatigued.
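As a quick sanity check of the formula, here is a standalone sketch with made-up landmark coordinates (not the project's code), computing the EAR for a synthetic open eye and a nearly closed one:

```python
import math

def eye_aspect_ratio(eye):
    # eye = [P1, P2, P3, P4, P5, P6]
    # EAR = (|P2-P6| + |P3-P5|) / (2 * |P1-P4|)
    A = math.dist(eye[1], eye[5])
    B = math.dist(eye[2], eye[4])
    C = math.dist(eye[0], eye[3])
    return (A + B) / (2 * C)

# made-up coordinates for illustration
open_eye   = [(0, 0), (3, 3), (7, 3), (10, 0), (7, -3), (3, -3)]
closed_eye = [(0, 0), (3, 1), (7, 1), (10, 0), (7, -1), (3, -1)]

print(eye_aspect_ratio(open_eye))    # 0.6: above the 0.3 threshold, eye open
print(eye_aspect_ratio(closed_eye))  # 0.2: below 0.3, counted as closed
```

The tall eye gives a large ratio, the flattened one a small ratio, which is exactly why a single threshold on the EAR separates open from closed eyes.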
Yawn fatigue formula

Yawning is detected with the same kind of aspect-ratio computation as the eyes, again with a threshold and a frame count; the difference is that the yawn threshold is set to 0.8, and the mouth ratio rises above it when the mouth opens wide.
Tip: you also need to clear the closure counter after some period with no closed-eye frames; otherwise a long enough detection session is bound to exceed the threshold eventually.
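One way to implement that reset, as a minimal standalone sketch (the FatigueCounter class and its parameter names are hypothetical, not part of the project's code; the idle window here is counted in frames rather than seconds):

```python
class FatigueCounter:
    """Counts eye-closure events and clears the tally after a long open-eye stretch."""

    def __init__(self, ear_thresh=0.3, close_frames=3, tired_count=6, reset_after=300):
        self.ear_thresh = ear_thresh      # EAR below this means the eye is closed
        self.close_frames = close_frames  # consecutive closed frames = one closure event
        self.tired_count = tired_count    # closure events that mean "fatigued"
        self.reset_after = reset_after    # open-eye frames after which the total is cleared
        self.consecutive = 0              # current run of closed frames
        self.idle = 0                     # current run of open frames
        self.total = 0                    # closure events so far

    def update(self, ear):
        """Feed one frame's EAR; returns True once the fatigue threshold is crossed."""
        if ear < self.ear_thresh:
            self.consecutive += 1
            self.idle = 0
            if self.consecutive >= self.close_frames:
                self.total += 1
                self.consecutive = 0
        else:
            self.consecutive = 0
            self.idle += 1
            if self.idle >= self.reset_after:  # long stretch with eyes open:
                self.total = 0                 # forget the old closures
                self.idle = 0
        return self.total >= self.tired_count
```

At, say, 30 fps, reset_after=300 clears the tally after ten seconds of open eyes, so ordinary blinks spread over a long session never accumulate into a false fatigue alarm.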

We need the eye landmark positions for this computation, so import SciPy's Euclidean distance helper:

from scipy.spatial import distance as dist 
from collections import OrderedDict

Set the (fixed) landmark index ranges and the thresholds:

self.LANDMARKS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 36)),
    ("jaw", (0, 17)),
])
self.EYE_THRESH = 0.3
self.EYE_FRAMES = 3
self.COUNTER_FRAMES = 0
self.TOTAL = 0
self.MOUSE_UP_FRAMES = 5
self.MOUSE_COUNTER_FRAMES = 0
self.MOUSE_RATE = 0.8
(self.lStart, self.lEnd) = self.LANDMARKS['left_eye']
(self.rStart, self.rEnd) = self.LANDMARKS['right_eye']
(self.mStart, self.mEnd) = self.LANDMARKS['mouth']

Compute the ratio: take the Euclidean distances, then average the two eyes' ratios to reduce error:

def eye_aspect_ratio(self, eye):
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    C = dist.euclidean(eye[0], eye[3])
    return (A + B) / (2 * C)

def cal_height(self, points):
    leftEye = points[self.lStart: self.lEnd]
    rightEye = points[self.rStart: self.rEnd]
    leftEAR = self.eye_aspect_ratio(leftEye)
    rightEAR = self.eye_aspect_ratio(rightEye)
    return (leftEAR + rightEAR) / 2

Define the detection method, which inspects each frame and returns status information to the UI:

def skim_video(self, img, ha, eye, warn):
    img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # detected face rectangles
    rects = self.detector(img_gray, 0)
    close_eye = False
    for i in range(len(rects)):
        faces = self.predictor(img, rects[i]).parts()
        points = np.matrix([[p.x, p.y] for p in faces])
        rate = self.cal_height(points)              # eye closure
        rate_mouse = self.cal_mouse_height(points)  # yawn
        if rate_mouse > self.MOUSE_RATE and ha:
            self.MOUSE_COUNTER_FRAMES += 1
            if self.MOUSE_COUNTER_FRAMES >= self.MOUSE_UP_FRAMES:
                print('Yawn detected')
                cv2.putText(img, "haha", (rects[i].left(), rects[i].top() - 60),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255))
        if rate < self.EYE_THRESH and eye:
            self.COUNTER_FRAMES += 1
            # print('Eye closure detected, count %s' % COUNTER_FRAMES)
            if self.COUNTER_FRAMES >= 5:
                self.TOTAL += 1
                self.COUNTER_FRAMES = 0
                close_eye = True
        else:
            self.COUNTER_FRAMES = 0
        # for idx, point in enumerate(points):
        #     pos = (point[0, 0], point[0, 1])
        #     cv2.circle(img, pos, 2, (0, 0, 255), 1)
        #     cv2.putText(img, str(idx + 1), pos, cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 255, 255))
        if self.TOTAL >= 10 and warn:
            cv2.rectangle(img, (rects[i].left(), rects[i].top()),
                          (rects[i].right(), rects[i].bottom()), color=(255, 0, 255))
            cv2.putText(img, "tired", (rects[i].left(), rects[i].top() - 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255))
            print('Warning: you are fatigued, please rest soon')
    return ("%s eyes closed" % time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) if close_eye else ""), img

The complete data-processing class:

import numpy as np
import dlib 
import cv2 
import sys
import time
sys.path.append("..")
from scipy.spatial import distance as dist 
from collections import OrderedDict

class Recognize():
    def __init__(self):
        self.init_data()
        self.init_model()

    def init_data(self):
        self.LANDMARKS = OrderedDict([
            ("mouth", (48, 68)),
            ("right_eyebrow", (17, 22)),
            ("left_eyebrow", (22, 27)),
            ("right_eye", (36, 42)),
            ("left_eye", (42, 48)),
            ("nose", (27, 36)),
            ("jaw", (0, 17)),
        ])
        self.EYE_THRESH = 0.3
        self.EYE_FRAMES = 3
        self.COUNTER_FRAMES = 0
        self.TOTAL = 0
        self.MOUSE_UP_FRAMES = 5
        self.MOUSE_COUNTER_FRAMES = 0
        self.MOUSE_RATE = 0.8
        (self.lStart, self.lEnd) = self.LANDMARKS['left_eye']
        (self.rStart, self.rEnd) = self.LANDMARKS['right_eye']
        (self.mStart, self.mEnd) = self.LANDMARKS['mouth']

    def cal_height(self, points):
        leftEye = points[self.lStart: self.lEnd]
        rightEye = points[self.rStart: self.rEnd]
        leftEAR = self.eye_aspect_ratio(leftEye)
        rightEAR = self.eye_aspect_ratio(rightEye)
        return (leftEAR + rightEAR) / 2

    def cal_mouse_height(self, points):
        mouse = points[self.mStart: self.mEnd]
        mouse_rate = self.mouse_aspect_ratio(mouse)
        return mouse_rate

    def mouse_aspect_ratio(self, mouse):
        A = dist.euclidean(mouse[2], mouse[9])
        B = dist.euclidean(mouse[4], mouse[7])
        C = dist.euclidean(mouse[0], mouse[6])
        return (A + B) / (2 * C)

    def eye_aspect_ratio(self, eye):
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])
        C = dist.euclidean(eye[0], eye[3])
        return (A + B) / (2 * C)

    def init_model(self):
        self.detector = dlib.get_frontal_face_detector()
        self.predictor = dlib.shape_predictor('F:/python/ML/11-learn/tired/model_data/shape_predictor_68_face_landmarks.dat')

    def init_video_capture(self, method):
        if method == 0:
            self.capture = cv2.VideoCapture(0)
        else:
            self.capture = cv2.VideoCapture(method)

    def skim_video(self, img, ha, eye, warn):
        img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        # detected face rectangles
        rects = self.detector(img_gray, 0)
        close_eye = False
        for i in range(len(rects)):
            faces = self.predictor(img, rects[i]).parts()
            points = np.matrix([[p.x, p.y] for p in faces])
            rate = self.cal_height(points)              # eye closure
            rate_mouse = self.cal_mouse_height(points)  # yawn
            if rate_mouse > self.MOUSE_RATE and ha:
                self.MOUSE_COUNTER_FRAMES += 1
                if self.MOUSE_COUNTER_FRAMES >= self.MOUSE_UP_FRAMES:
                    print('Yawn detected')
                    cv2.putText(img, "haha", (rects[i].left(), rects[i].top() - 60),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255))
            if rate < self.EYE_THRESH and eye:
                self.COUNTER_FRAMES += 1
                # print('Eye closure detected, count %s' % COUNTER_FRAMES)
                if self.COUNTER_FRAMES >= 5:
                    self.TOTAL += 1
                    self.COUNTER_FRAMES = 0
                    close_eye = True
            else:
                self.COUNTER_FRAMES = 0
            # for idx, point in enumerate(points):
            #     pos = (point[0, 0], point[0, 1])
            #     cv2.circle(img, pos, 2, (0, 0, 255), 1)
            #     cv2.putText(img, str(idx + 1), pos, cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 255, 255))
            if self.TOTAL >= 10 and warn:
                cv2.rectangle(img, (rects[i].left(), rects[i].top()),
                              (rects[i].right(), rects[i].bottom()), color=(255, 0, 255))
                cv2.putText(img, "tired", (rects[i].left(), rects[i].top() - 20),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255))
                print('Warning: you are fatigued, please rest soon')
        return ("%s eyes closed" % time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) if close_eye else ""), img
        # print('Detection finished; %s fatigue eye-closure events detected' % TOTAL)

User Interface

The interface was designed visually in Qt Designer, then converted to Python code to hook up the logic.

(screenshot)

Interaction code

# widget signal bindings
def init_slots(self):
    self.ui.tired_time.setValue(3)
    self.ui.tired_count.setValue(6)
    self.ui.eye.setChecked(True)
    self.ui.video.setChecked(True)
    self.ui.select_video.clicked.connect(self.button_video_open)
    self.ui.start_skim.clicked.connect(self.toggleState)
    self.ui.camera.clicked.connect(partial(self.change_method, METHOD.CAMERA))
    self.ui.video.clicked.connect(partial(self.change_method, METHOD.VIDEO))

Pause and resume: this is built on QtCore.QTimer, whose start and stop APIs drive the frame loop; the code pauses detection by blocking the timer's signals.

import argparse
import random
import sys
import time
sys.path.append("..")
from ui import detect
from logic.recognize import Recognize
import torch
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import QApplication,QMainWindow
from functools import partial
import torch.backends.cudnn as cudnn
import cv2 as cv
import numpy as np
class METHOD():
    CAMERA = 0
    VIDEO = 1
# pyuic5 -o name.py test.ui
class UI_Logic_Window(QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super(UI_Logic_Window, self).__init__(parent)
        self.timer_video = QtCore.QTimer()  # create the frame timer
        # create a window
        self.w = QMainWindow()
        self.ui = detect.Ui_DREAM_EYE()
        self.ui.setupUi(self)
        self.init_slots()
        self.output_folder = 'output/'
        self.cap = cv.VideoCapture()
        # log buffer
        self.logging = ''
        self.recognize = Recognize()

    # widget signal bindings
    def init_slots(self):
        self.ui.tired_time.setValue(3)
        self.ui.tired_count.setValue(6)
        self.ui.eye.setChecked(True)
        self.ui.video.setChecked(True)
        self.ui.select_video.clicked.connect(self.button_video_open)
        self.ui.start_skim.clicked.connect(self.toggleState)
        self.ui.camera.clicked.connect(partial(self.change_method, METHOD.CAMERA))
        self.ui.video.clicked.connect(partial(self.change_method, METHOD.VIDEO))
        # self.ui.capScan.clicked.connect(self.button_camera_open)
        # self.ui.loadWeight.clicked.connect(self.open_model)
        # self.ui.initModel.clicked.connect(self.model_init)
        # self.ui.end.clicked.connect(self.endVideo)
        # self.ui.pushButton_stop.clicked.connect(self.button_video_stop)
        # self.ui.pushButton_finish.clicked.connect(self.finish_detect)
        self.timer_video.timeout.connect(self.show_video_frame)  # on timer timeout, show the next frame

    def change_method(self, type):
        if type == METHOD.CAMERA:
            self.ui.select_video.setDisabled(True)
        else:
            self.ui.select_video.setDisabled(False)

    def button_image_open(self):
        print('button_image_open')
        name_list = []
        try:
            img_name, _ = QtWidgets.QFileDialog.getOpenFileName(self, "Select file")
        except OSError as reason:
            print('File error')
            QtWidgets.QMessageBox.warning(self, 'Warning', 'File error', buttons=QtWidgets.QMessageBox.Ok)
        else:
            if not img_name:
                QtWidgets.QMessageBox.warning(self, "Warning", 'File error', buttons=QtWidgets.QMessageBox.Ok)
                self.log('File error')
            else:
                img = cv.imread(img_name)
                info_show = self.recognize.skim_video(img)
                date = time.strftime('%Y-%m-%d-%H-%M-%S', time.localtime(time.time()))  # current time
                file_extaction = img_name.split('.')[-1]
                new_fileName = date + '.' + file_extaction
                file_path = self.output_folder + 'img_output/' + new_fileName
                cv.imwrite(file_path, img)
                self.show_img(info_show, img)
        # self.log(info_show)  # detection info
        # self.result = cv.cvtColor(img, cv.COLOR_BGR2BGRA)
        # self.result = letterbox(self.result, new_shape=self.opt.img_size)[0]
        # self.QtImg = QtGui.QImage(self.result.data, self.result.shape[1], self.result.shape[0], QtGui.QImage.Format_RGB32)
        # self.ui.show.setPixmap(QtGui.QPixmap.fromImage(self.QtImg))
        # self.ui.show.setScaledContents(True)  # scale image to fit the widget

    def show_img(self, info_show, img):
        if info_show:
            self.log(info_show)
        show = cv.resize(img, (640, 480))  # display the detection result drawn on the original img
        self.result = cv.cvtColor(show, cv.COLOR_BGR2RGB)
        showImage = QtGui.QImage(self.result.data, self.result.shape[1], self.result.shape[0], QtGui.QImage.Format_RGB888)
        self.ui.capture.setPixmap(QtGui.QPixmap.fromImage(showImage))
        self.ui.capture.setScaledContents(True)  # scale image to fit the widget

    def toggleState(self):
        print('toggle')
        state = self.timer_video.signalsBlocked()
        self.timer_video.blockSignals(not state)
        text = 'Resume' if not state else 'Pause'
        self.ui.start_skim.setText(text)

    def endVideo(self):
        print('end')
        self.timer_video.blockSignals(True)
        self.releaseRes()

    def button_video_open(self):
        video_path, _ = QtWidgets.QFileDialog.getOpenFileName(self, 'Select video to detect', './', filter="*.mp4;;*.avi;;All Files(*)")
        self.ui.video_path.setText(video_path)
        flag = self.cap.open(video_path)
        if not flag:
            QtWidgets.QMessageBox.warning(self, "Warning", 'Failed to open video', buttons=QtWidgets.QMessageBox.Ok)
        else:
            # start (or restart) the timer at the video's frame interval; QTimer.start expects an int msec
            self.timer_video.start(int(1000 / self.cap.get(cv.CAP_PROP_FPS)))
            # if self.opt.save:
            #     fps, w, h, path = self.set_video_name_and_path()
            #     self.vid_writer = cv.VideoWriter(path, cv.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

    def set_video_name_and_path(self):
        # use the current system time as the img/video file name
        now = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime(time.time()))
        fps = self.cap.get(cv.CAP_PROP_FPS)
        w = int(self.cap.get(cv.CAP_PROP_FRAME_WIDTH))
        h = int(self.cap.get(cv.CAP_PROP_FRAME_HEIGHT))
        # where the detected video is saved
        save_path = self.output_folder + 'video/' + now + '.mp4'
        return fps, w, h, save_path

    def button_camera_open(self):
        camera_num = 0
        self.cap = cv.VideoCapture(camera_num)
        if not self.cap.isOpened():
            QtWidgets.QMessageBox.warning(self, u"Warning", u'Failed to open camera', buttons=QtWidgets.QMessageBox.Ok)
        else:
            self.timer_video.start(int(1000 / 60))
            if self.opt.save:
                fps, w, h, path = self.set_video_name_and_path()
                self.vid_writer = cv.VideoWriter(path, cv.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

    def open_model(self):
        self.openfile_name_model, _ = QFileDialog.getOpenFileName(self, 'Select weight file', directory='./yolov5\yolo\YoloV5_PyQt5-main\weights')
        print(self.openfile_name_model)
        if not self.openfile_name_model:
            self.log("Warning: no weight file selected, please retry")
        else:
            print(self.openfile_name_model)
            self.log("Weight file path: %s" % self.openfile_name_model)

    def show_video_frame(self):
        name_list = []
        flag, img = self.cap.read()
        if img is None:
            self.releaseRes()
        else:
            close_eye, img = self.recognize.skim_video(img, self.ui.ha.checkState(), self.ui.eye.checkState(), self.ui.tired.checkState())
            # if self.opt.save:
            #     self.vid_writer.write(img)  # write detection result to the video
            self.show_img(close_eye, img)

    def releaseRes(self):
        print('Finished reading')
        self.log('Detection finished')
        self.timer_video.stop()
        self.cap.release()  # release the VideoCapture resource
        self.ui.show.clear()
        if self.opt.save:
            self.vid_writer.release()

    def log(self, msg):
        self.logging += '%s\n' % msg
        self.ui.log.setText(self.logging)
        self.ui.log.moveCursor(QtGui.QTextCursor.End)

if __name__ == '__main__':
    # create the QApplication instance
    app = QApplication(sys.argv)
    current_ui = UI_Logic_Window()
    current_ui.show()
    sys.exit(app.exec_())

UI Code

# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'detect_ui.ui'
#
# Created by: PyQt5 UI code generator 5.9.2
#
# WARNING! All changes made in this file will be lost!

from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_DREAM_EYE(object):
    def setupUi(self, DREAM_EYE):
        DREAM_EYE.setObjectName("DREAM_EYE")
        DREAM_EYE.resize(936, 636)
        icon = QtGui.QIcon()
        icon.addPixmap(QtGui.QPixmap("../../ui_img/icon.jpg"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
        DREAM_EYE.setWindowIcon(icon)
        self.centralwidget = QtWidgets.QWidget(DREAM_EYE)
        self.centralwidget.setEnabled(True)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Fixed)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.centralwidget.sizePolicy().hasHeightForWidth())
        self.centralwidget.setSizePolicy(sizePolicy)
        self.centralwidget.setObjectName("centralwidget")
        self.groupBox = QtWidgets.QGroupBox(self.centralwidget)
        self.groupBox.setGeometry(QtCore.QRect(640, 10, 311, 621))
        font = QtGui.QFont()
        font.setFamily("Microsoft YaHei")
        font.setPointSize(11)
        self.groupBox.setFont(font)
        self.groupBox.setObjectName("groupBox")
        self.groupBox_2 = QtWidgets.QGroupBox(self.groupBox)
        self.groupBox_2.setGeometry(QtCore.QRect(10, 270, 271, 151))
        self.groupBox_2.setObjectName("groupBox_2")
        self.camera = QtWidgets.QRadioButton(self.groupBox_2)
        self.camera.setGeometry(QtCore.QRect(20, 30, 89, 16))
        self.camera.setObjectName("camera")
        self.video = QtWidgets.QRadioButton(self.groupBox_2)
        self.video.setGeometry(QtCore.QRect(140, 30, 89, 16))
        self.video.setObjectName("video")
        self.label = QtWidgets.QLabel(self.groupBox_2)
        self.label.setGeometry(QtCore.QRect(20, 60, 81, 31))
        self.label.setObjectName("label")
        self.select_video = QtWidgets.QPushButton(self.groupBox_2)
        self.select_video.setGeometry(QtCore.QRect(30, 110, 81, 31))
        self.select_video.setObjectName("select_video")
        self.start_skim = QtWidgets.QPushButton(self.groupBox_2)
        self.start_skim.setGeometry(QtCore.QRect(150, 110, 81, 31))
        self.start_skim.setObjectName("start_skim")
        self.video_path = QtWidgets.QTextEdit(self.groupBox_2)
        self.video_path.setGeometry(QtCore.QRect(110, 60, 151, 31))
        self.video_path.setObjectName("video_path")
        self.groupBox_3 = QtWidgets.QGroupBox(self.groupBox)
        self.groupBox_3.setGeometry(QtCore.QRect(10, 30, 271, 111))
        self.groupBox_3.setObjectName("groupBox_3")
        self.eye = QtWidgets.QCheckBox(self.groupBox_3)
        self.eye.setGeometry(QtCore.QRect(20, 30, 91, 21))
        self.eye.setObjectName("eye")
        self.ha = QtWidgets.QCheckBox(self.groupBox_3)
        self.ha.setGeometry(QtCore.QRect(150, 30, 91, 21))
        self.ha.setObjectName("ha")
        self.head = QtWidgets.QCheckBox(self.groupBox_3)
        self.head.setGeometry(QtCore.QRect(20, 70, 91, 21))
        self.head.setObjectName("head")
        self.tired = QtWidgets.QCheckBox(self.groupBox_3)
        self.tired.setGeometry(QtCore.QRect(150, 70, 91, 21))
        self.tired.setObjectName("tired")
        self.groupBox_5 = QtWidgets.QGroupBox(self.groupBox)
        self.groupBox_5.setGeometry(QtCore.QRect(10, 430, 271, 181))
        self.groupBox_5.setObjectName("groupBox_5")
        self.log = QtWidgets.QTextBrowser(self.groupBox_5)
        self.log.setGeometry(QtCore.QRect(10, 30, 251, 141))
        self.log.setObjectName("log")
        self.groupBox_4 = QtWidgets.QGroupBox(self.groupBox)
        self.groupBox_4.setGeometry(QtCore.QRect(10, 150, 271, 111))
        self.groupBox_4.setObjectName("groupBox_4")
        self.label_3 = QtWidgets.QLabel(self.groupBox_4)
        self.label_3.setGeometry(QtCore.QRect(20, 30, 81, 21))
        self.label_3.setObjectName("label_3")
        self.tired_time = QtWidgets.QSpinBox(self.groupBox_4)
        self.tired_time.setGeometry(QtCore.QRect(100, 30, 42, 22))
        self.tired_time.setObjectName("tired_time")
        self.label_4 = QtWidgets.QLabel(self.groupBox_4)
        self.label_4.setGeometry(QtCore.QRect(20, 70, 81, 21))
        self.label_4.setObjectName("label_4")
        self.tired_count = QtWidgets.QSpinBox(self.groupBox_4)
        self.tired_count.setGeometry(QtCore.QRect(100, 70, 42, 22))
        self.tired_count.setObjectName("tired_count")
        self.capture = QtWidgets.QLabel(self.centralwidget)
        self.capture.setGeometry(QtCore.QRect(10, 10, 611, 611))
        self.capture.setText("")
        self.capture.setObjectName("capture")
        DREAM_EYE.setCentralWidget(self.centralwidget)
        self.retranslateUi(DREAM_EYE)
        QtCore.QMetaObject.connectSlotsByName(DREAM_EYE)

    def retranslateUi(self, DREAM_EYE):
        _translate = QtCore.QCoreApplication.translate
        DREAM_EYE.setWindowTitle(_translate("DREAM_EYE", "Fatigue Detection System"))
        self.groupBox.setTitle(_translate("DREAM_EYE", "Settings"))
        self.groupBox_2.setTitle(_translate("DREAM_EYE", "Video Source"))
        self.camera.setText(_translate("DREAM_EYE", "Camera"))
        self.video.setText(_translate("DREAM_EYE", "Video file"))
        self.label.setText(_translate("DREAM_EYE", "Video path:"))
        self.select_video.setText(_translate("DREAM_EYE", "Select file"))
        self.start_skim.setText(_translate("DREAM_EYE", "OK"))
        self.groupBox_3.setTitle(_translate("DREAM_EYE", "Fatigue Detection"))
        self.eye.setText(_translate("DREAM_EYE", "Eye closure"))
        self.ha.setText(_translate("DREAM_EYE", "Yawn"))
        self.head.setText(_translate("DREAM_EYE", "Drowsiness"))
        self.tired.setText(_translate("DREAM_EYE", "Fatigue warning"))
        self.groupBox_5.setTitle(_translate("DREAM_EYE", "Output"))
        self.groupBox_4.setTitle(_translate("DREAM_EYE", "Detection Settings"))
        self.label_3.setText(_translate("DREAM_EYE", "Fatigue time:"))
        self.label_4.setText(_translate("DREAM_EYE", "Fatigue count:"))

Detection Results

The results are decent, although it struggles in less favorable conditions, such as poor lighting or a partially visible face.
Next, you can run it on your laptop and have it remind you to rest at regular intervals.

Source Code

https://github.com/cdmstrong/tried

