Face Recognition with OpenFace


1. Introduction to OpenFace

Project homepage: http://cmusatyalab.github.io/openface/
Installation can simply follow the official tutorial.

After installing the various dependencies, run the following commands:

git clone --recursive https://github.com/cmusatyalab/openface.git
cd openface
sudo python setup.py install
sh models/get-models.sh
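To verify the install, a quick sanity check is to import the package and load the dlib aligner. This is a minimal sketch, assuming get-models.sh downloaded the landmark model to its standard location and that you run it from the repo root:

import openface

# Load the dlib landmark model fetched by get-models.sh.
align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
print("OpenFace loaded, aligner ready:", align is not None)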

2. Preparing the Data

Prepare photos of two people taken from different angles. I collected thirty photos each of two people, Li and Wang, as the training set and six each as the test set. The training data goes under openface/ws_train_data; the layout I used is sketched below.
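The pipeline expects one subdirectory per person, and the subdirectory name becomes the class label during training (the classifier code below derives labels from the directory names). The folder names li and wang here are illustrative stand-ins for my actual directories:

openface/
├── ws_train_data/
│   ├── li/        # ~30 training photos of Li
│   └── wang/      # ~30 training photos of Wang
└── ws_test_data/
    ├── test6/     # 6 test photos
    └── test7/     # 6 test photos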

Training-set images:

[screenshot omitted]

Test-set images:

[screenshot omitted]
Note that .bmp files are not supported by the face-detection step, so convert any bitmaps first; a small conversion sketch follows.
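If some of your photos are bitmaps, a minimal OpenCV sketch like this converts them to .jpg in place (my own workaround, not part of OpenFace; the path matches the training directory above):

import cv2
import glob
import os

# Re-encode every .bmp under the training directory as .jpg,
# since the detection step does not accept .bmp input.
for bmp in glob.glob("ws_train_data/*/*.bmp"):
    img = cv2.imread(bmp)
    if img is None:
        continue  # skip unreadable files
    cv2.imwrite(os.path.splitext(bmp)[0] + ".jpg", img)
    os.remove(bmp)  # drop the original bitmap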

3. Preprocessing (Face Detection and Alignment)

From the openface root directory, run:

$ ./util/align-dlib.py ws_train_data/ align outerEyesAndNose ws_pre_process

Here ws_train_data/ is the directory holding our training images, and ws_pre_process is where the preprocessed results are written.

If everything goes well, you will see output similar to the following:

[screenshot omitted]

In the output directory you will find the preprocessed results for both people: each face has been detected and cropped out individually.

[screenshots omitted]
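For reference, this is roughly what align-dlib.py does to each image — a minimal sketch using the same OpenFace calls that appear in classifier.py later in this post (the input filename is a hypothetical example; 96 is the default output size):

import cv2
import openface

# Load the dlib landmark model (run from the openface repo root).
align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")

bgr = cv2.imread("ws_train_data/li/001.jpg")  # hypothetical example image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# Detect the largest face, then warp it to a 96x96 crop keyed on
# the outer eyes and nose landmarks.
bb = align.getLargestFaceBoundingBox(rgb)
aligned = align.align(96, rgb, bb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)

cv2.imwrite("aligned.jpg", cv2.cvtColor(aligned, cv2.COLOR_RGB2BGR))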

4. Generating the Face Embeddings

./batch-represent/main.lua -outDir ws_represent_data -data ws_pre_process/

Here ws_represent_data is the output directory, and ws_pre_process/ is the preprocessing output from the previous step.


Two CSV files, labels.csv and reps.csv, will appear in ws_represent_data.

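A quick way to sanity-check these files — labels.csv pairs an index with each aligned image's path, and reps.csv holds one 128-dimensional embedding per image — is to load them the same way classifier.py does:

import pandas as pd

labels = pd.read_csv("ws_represent_data/labels.csv", header=None)
reps = pd.read_csv("ws_represent_data/reps.csv", header=None)

print(labels.head())  # (index, image path) rows
print(reps.shape)     # expect (number of images, 128)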

5. Training the Classifier

./demos/classifier.py train ws_represent_data/

You will get output like the following:

[screenshot omitted]


I ran into a bug here: in demos/classifier.py under the OpenFace root, the LDA() class it uses has been removed from sklearn and replaced by LinearDiscriminantAnalysis().

The original code was:

 clf = Pipeline([('lda', LDA(n_components=args.ldaDim)),
                 ('clf', clf_final)])

Import the replacement class and change it as follows:

 from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

 clf = Pipeline([('lda', LinearDiscriminantAnalysis(n_components=args.ldaDim)),
                 ('clf', clf_final)])
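For the record, the LDA pipeline step itself only runs when training with --ldaDim set to a positive value (the default is -1), e.g.:

./demos/classifier.py train --ldaDim 1 ws_represent_data/

but a stale from sklearn.lda import LDA at the top of the file fails at import time regardless of flags, which is why it has to be removed or commented out as well.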

Here is the complete classifier.py after the fix:

#!/usr/bin/env python2
#
# Example to classify faces.
# Brandon Amos
# 2015/10/11
#
# Copyright 2015-2016 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time

start = time.time()

import argparse
import cv2
import os
import pickle
import sys

from operator import itemgetter

import numpy as np
np.set_printoptions(precision=2)
import pandas as pd

import openface

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
# from sklearn.lda import LDA
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.mixture import GMM
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

fileDir = os.path.dirname(os.path.realpath(__file__))
modelDir = os.path.join(fileDir, '..', 'models')
dlibModelDir = os.path.join(modelDir, 'dlib')
openfaceModelDir = os.path.join(modelDir, 'openface')


def getRep(imgPath, multiple=False):
    start = time.time()
    bgrImg = cv2.imread(imgPath)
    if bgrImg is None:
        raise Exception("Unable to load image: {}".format(imgPath))

    rgbImg = cv2.cvtColor(bgrImg, cv2.COLOR_BGR2RGB)

    if args.verbose:
        print("  + Original size: {}".format(rgbImg.shape))
    if args.verbose:
        print("Loading the image took {} seconds.".format(time.time() - start))

    start = time.time()

    if multiple:
        bbs = align.getAllFaceBoundingBoxes(rgbImg)
    else:
        bb1 = align.getLargestFaceBoundingBox(rgbImg)
        bbs = [bb1]
    if len(bbs) == 0 or (not multiple and bb1 is None):
        raise Exception("Unable to find a face: {}".format(imgPath))
    if args.verbose:
        print("Face detection took {} seconds.".format(time.time() - start))

    reps = []
    for bb in bbs:
        start = time.time()
        alignedFace = align.align(
            args.imgDim,
            rgbImg,
            bb,
            landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        if alignedFace is None:
            raise Exception("Unable to align image: {}".format(imgPath))
        if args.verbose:
            print("Alignment took {} seconds.".format(time.time() - start))
            print("This bbox is centered at {}, {}".format(bb.center().x, bb.center().y))

        start = time.time()
        rep = net.forward(alignedFace)
        if args.verbose:
            print("Neural network forward pass took {} seconds.".format(time.time() - start))
        reps.append((bb.center().x, rep))
    sreps = sorted(reps, key=lambda x: x[0])
    return sreps


def train(args):
    print("Loading embeddings.")
    fname = "{}/labels.csv".format(args.workDir)
    labels = pd.read_csv(fname, header=None).as_matrix()[:, 1]
    labels = map(itemgetter(1),
                 map(os.path.split,
                     map(os.path.dirname, labels)))  # Get the directory.
    fname = "{}/reps.csv".format(args.workDir)
    embeddings = pd.read_csv(fname, header=None).as_matrix()
    le = LabelEncoder().fit(labels)
    labelsNum = le.transform(labels)
    nClasses = len(le.classes_)
    print("Training for {} classes.".format(nClasses))

    if args.classifier == 'LinearSvm':
        clf = SVC(C=1, kernel='linear', probability=True)
    elif args.classifier == 'GridSearchSvm':
        print("""
        Warning: In our experiences, using a grid search over SVM hyper-parameters only
        gives marginally better performance than a linear SVM with C=1 and
        is not worth the extra computations of performing a grid search.
        """)
        param_grid = [
            {'C': [1, 10, 100, 1000],
             'kernel': ['linear']},
            {'C': [1, 10, 100, 1000],
             'gamma': [0.001, 0.0001],
             'kernel': ['rbf']}
        ]
        clf = GridSearchCV(SVC(C=1, probability=True), param_grid, cv=5)
    elif args.classifier == 'GMM':  # Doesn't work best
        clf = GMM(n_components=nClasses)

    # ref:
    # http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#example-classification-plot-classifier-comparison-py
    elif args.classifier == 'RadialSvm':  # Radial Basis Function kernel
        # works better with C = 1 and gamma = 2
        clf = SVC(C=1, kernel='rbf', probability=True, gamma=2)
    elif args.classifier == 'DecisionTree':  # Doesn't work best
        clf = DecisionTreeClassifier(max_depth=20)
    elif args.classifier == 'GaussianNB':
        clf = GaussianNB()

    # ref: https://jessesw.com/Deep-Learning/
    elif args.classifier == 'DBN':
        from nolearn.dbn import DBN
        clf = DBN([embeddings.shape[1], 500, labelsNum[-1:][0] + 1],  # i/p nodes, hidden nodes, o/p nodes
                  learn_rates=0.3,
                  # Smaller steps mean a possibly more accurate result, but the
                  # training will take longer
                  learn_rate_decays=0.9,
                  # a factor the initial learning rate will be multiplied by
                  # after each iteration of the training
                  epochs=300,  # no of iternation
                  # dropouts = 0.25, # Express the percentage of nodes that
                  # will be randomly dropped as a decimal.
                  verbose=1)

    if args.ldaDim > 0:
        clf_final = clf
        clf = Pipeline([('lda', LinearDiscriminantAnalysis(n_components=args.ldaDim)),
                        ('clf', clf_final)])
        # clf = Pipeline([('lda', LDA(n_components=args.ldaDim)),
        #                 ('clf', clf_final)])
    clf.fit(embeddings, labelsNum)

    fName = "{}/classifier.pkl".format(args.workDir)
    print("Saving classifier to '{}'".format(fName))
    with open(fName, 'w') as f:
        pickle.dump((le, clf), f)


def infer(args, multiple=False):
    with open(args.classifierModel, 'rb') as f:
        if sys.version_info[0] < 3:
            (le, clf) = pickle.load(f)
        else:
            (le, clf) = pickle.load(f, encoding='latin1')

    for img in args.imgs:
        print("\n=== {} ===".format(img))
        reps = getRep(img, multiple)
        if len(reps) > 1:
            print("List of faces in image from left to right")
        for r in reps:
            rep = r[1].reshape(1, -1)
            bbx = r[0]
            start = time.time()
            predictions = clf.predict_proba(rep).ravel()
            maxI = np.argmax(predictions)
            person = le.inverse_transform(maxI)
            confidence = predictions[maxI]
            if args.verbose:
                print("Prediction took {} seconds.".format(time.time() - start))
            if multiple:
                print("Predict {} @ x={} with {:.2f} confidence.".format(person.decode('utf-8'), bbx, confidence))
            else:
                print("Predict {} with {:.2f} confidence.".format(person.decode('utf-8'), confidence))
            if isinstance(clf, GMM):
                dist = np.linalg.norm(rep - clf.means_[maxI])
                print("  + Distance from the mean: {}".format(dist))


if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    parser.add_argument(
        '--dlibFacePredictor',
        type=str,
        help="Path to dlib's face predictor.",
        default=os.path.join(
            dlibModelDir,
            "shape_predictor_68_face_landmarks.dat"))
    parser.add_argument(
        '--networkModel',
        type=str,
        help="Path to Torch network model.",
        default=os.path.join(
            openfaceModelDir,
            'nn4.small2.v1.t7'))
    parser.add_argument('--imgDim', type=int,
                        help="Default image dimension.", default=96)
    parser.add_argument('--cuda', action='store_true')
    parser.add_argument('--verbose', action='store_true')

    subparsers = parser.add_subparsers(dest='mode', help="Mode")
    trainParser = subparsers.add_parser('train',
                                        help="Train a new classifier.")
    trainParser.add_argument('--ldaDim', type=int, default=-1)
    trainParser.add_argument(
        '--classifier',
        type=str,
        choices=[
            'LinearSvm',
            'GridSearchSvm',
            'GMM',
            'RadialSvm',
            'DecisionTree',
            'GaussianNB',
            'DBN'],
        help='The type of classifier to use.',
        default='LinearSvm')
    trainParser.add_argument(
        'workDir',
        type=str,
        help="The input work directory containing 'reps.csv' and 'labels.csv'. Obtained from aligning a directory with 'align-dlib' and getting the representations with 'batch-represent'.")

    inferParser = subparsers.add_parser(
        'infer', help='Predict who an image contains from a trained classifier.')
    inferParser.add_argument(
        'classifierModel',
        type=str,
        help='The Python pickle representing the classifier. This is NOT the Torch network model, which can be set with --networkModel.')
    inferParser.add_argument('imgs', type=str, nargs='+',
                             help="Input image.")
    inferParser.add_argument('--multi', help="Infer multiple faces in image",
                             action="store_true")

    args = parser.parse_args()
    if args.verbose:
        print("Argument parsing and import libraries took {} seconds.".format(time.time() - start))

    if args.mode == 'infer' and args.classifierModel.endswith(".t7"):
        raise Exception("""
Torch network model passed as the classification model,
which should be a Python pickle (.pkl)

See the documentation for the distinction between the Torch
network and classification models:

        http://cmusatyalab.github.io/openface/demo-3-classifier/
        http://cmusatyalab.github.io/openface/training-new-models/

Use `--networkModel` to set a non-standard Torch network model.""")
    start = time.time()

    align = openface.AlignDlib(args.dlibFacePredictor)
    net = openface.TorchNeuralNet(args.networkModel, imgDim=args.imgDim,
                                  cuda=args.cuda)

    if args.verbose:
        print("Loading the dlib and OpenFace models took {} seconds.".format(time.time() - start))
        start = time.time()

    if args.mode == 'train':
        train(args)
    elif args.mode == 'infer':
        infer(args, args.multi)

6. Recognizing Faces

./demos/classifier.py infer ./ws_represent_data/classifier.pkl ws_test_data/{test6,test7}/*.jpg

[screenshot omitted]

On this test set, the recognition accuracy is 100%.

One problem remains, though: when no face is detected in a test image, getRep raises an exception and the run aborts, so the remaining images in the batch never get predicted.


A small modification to classifier.py should fix this; a sketch follows.
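One way to do it — a minimal sketch, not a committed fix — is to catch the per-image exception inside infer() so that a failed detection just skips that image:

    # Inside infer(): tolerate images where no face is found, so one
    # failure does not abort the remaining predictions.
    for img in args.imgs:
        print("\n=== {} ===".format(img))
        try:
            reps = getRep(img, multiple)
        except Exception as e:
            print("Skipping {}: {}".format(img, e))
            continue
        # ... the rest of the original loop body is unchanged ...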

