
A previous post covered the basics of face recognition and an OpenCV-based implementation. Today we use dlib to extract 128-dimensional face embeddings and apply a k-nearest-neighbor (k-NN) style vote to recognize faces.

The overall pipeline of the face recognition system is the same as before; here we rely on the dlib and face_recognition libraries. face_recognition is a wrapper around dlib that makes dlib easier to use, so the first step is to install both libraries:

pip3 install dlib
pip3 install face_recognition

Then install the imutils library as well:

pip3 install imutils

The project directory structure looks like this:

.
├── dataset
│   ├── alan_grant [22 entries exceeds filelimit, not opening dir]
│   ├── claire_dearing [53 entries exceeds filelimit, not opening dir]
│   ├── ellie_sattler [31 entries exceeds filelimit, not opening dir]
│   ├── ian_malcolm [41 entries exceeds filelimit, not opening dir]
│   ├── john_hammond [36 entries exceeds filelimit, not opening dir]
│   └── owen_grady [35 entries exceeds filelimit, not opening dir]
├── examples
│   ├── example_01.png
│   ├── example_02.png
│   └── example_03.png
├── output
│   ├── lunch_scene_output.avi
│   └── webcam_face_recognition_output.avi
├── videos
│   └── lunch_scene.mp4
├── encode_faces.py
├── encodings.pickle
├── recognize_faces_image.py
├── recognize_faces_video_file.py
├── recognize_faces_video.py
└── search_bing_api.py
 
10 directories, 12 files

First, extract the 128-dimensional face embeddings.

The command is:

python3 encode_faces.py --dataset dataset --encodings encodings.pickle -d hog

Note: if your machine is short on memory, use the hog model for face detection; if you have plenty of memory (ideally with a GPU), you can use the cnn detector, which is more accurate but slower.
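The cnn detector runs much faster with a CUDA-enabled GPU. As a quick sanity check (a minimal sketch, assuming dlib's standard Python bindings), you can verify whether your dlib build has CUDA support:

# Minimal sketch: check whether the installed dlib build can use a GPU.
# dlib.DLIB_USE_CUDA is exposed by dlib's Python bindings; if it is False,
# the cnn detector will fall back to the CPU and be considerably slower.
import dlib

print("dlib version:", dlib.__version__)
print("CUDA enabled:", dlib.DLIB_USE_CUDA)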

Here is the code:

# USAGE
# python encode_faces.py --dataset dataset --encodings encodings.pickle
 
# import the necessary packages
from imutils import paths
import face_recognition
import argparse
import pickle
import cv2
import os
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--dataset", required=True,
	help="path to input directory of faces + images")
ap.add_argument("-e", "--encodings", required=True,
	help="path to serialized db of facial encodings")
ap.add_argument("-d", "--detection-method", type=str, default="hog",
	help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())
 
# grab the paths to the input images in our dataset
print("[INFO] quantifying faces...")
imagePaths = list(paths.list_images(args["dataset"]))
 
# initialize the list of known encodings and known names
knownEncodings = []
knownNames = []
 
# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
	# extract the person name from the image path
	print("[INFO] processing image {}/{}".format(i + 1,
		len(imagePaths)))
	name = imagePath.split(os.path.sep)[-2]
 
	# load the input image and convert it from BGR (OpenCV ordering)
	# to RGB (dlib ordering)
	image = cv2.imread(imagePath)
	rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
 
	# detect the (x, y)-coordinates of the bounding boxes
	# corresponding to each face in the input image
	boxes = face_recognition.face_locations(rgb,
		model=args["detection_method"])
 
	# compute the facial embedding for the face
	encodings = face_recognition.face_encodings(rgb, boxes)
 
	# loop over the encodings
	for encoding in encodings:
		# add each encoding + name to our set of known names and
		# encodings
		knownEncodings.append(encoding)
		knownNames.append(name)
 
# dump the facial encodings + names to disk
print("[INFO] serializing encodings...")
data = {"encodings": knownEncodings, "names": knownNames}
f = open(args["encodings"], "wb")
f.write(pickle.dumps(data))
f.close()

The output is one 128-dimensional vector, together with the corresponding name, for every face detected in each image; the vectors and names are then serialized to disk for use in the later recognition step.
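For a quick sanity check, the serialized file can be loaded back and inspected. The following is a minimal sketch, assuming encode_faces.py has already written encodings.pickle to the current directory:

# Minimal sketch: load the serialized encodings and report what was stored.
import pickle

with open("encodings.pickle", "rb") as f:
	data = pickle.load(f)

print("number of encodings:", len(data["encodings"]))
print("embedding length:", len(data["encodings"][0]))  # should be 128
print("people:", sorted(set(data["names"])))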

Recognizing faces in an image:

Here the final recognition is done with a k-nearest-neighbor style vote over the stored encodings, rather than by training an SVM classifier.

The command is:

python3 recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png

Here is the code:

# USAGE
# python recognize_faces_image.py --encodings encodings.pickle --image examples/example_01.png 
 
# import the necessary packages
import face_recognition
import argparse
import pickle
import cv2
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-e", "--encodings", required=True,
	help="path to serialized db of facial encodings")
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
ap.add_argument("-d", "--detection-method", type=str, default="cnn",
	help="face detection model to use: either `hog` or `cnn`")
args = vars(ap.parse_args())
 
# load the known faces and embeddings
print("[INFO] loading encodings...")
data = pickle.loads(open(args["encodings"], "rb").read())
 
# load the input image and convert it from BGR to RGB
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
 
# detect the (x, y)-coordinates of the bounding boxes corresponding
# to each face in the input image, then compute the facial embeddings
# for each face
print("[INFO] recognizing faces...")
boxes = face_recognition.face_locations(rgb,
	model=args["detection_method"])
encodings = face_recognition.face_encodings(rgb, boxes)
 
# initialize the list of names for each face detected
names = []
 
# loop over the facial embeddings
for encoding in encodings:
	# attempt to match each face in the input image to our known
	# encodings
	matches = face_recognition.compare_faces(data["encodings"],
		encoding)
	name = "Unknown"
 
	# check to see if we have found a match
	if True in matches:
		# find the indexes of all matched faces then initialize a
		# dictionary to count the total number of times each face
		# was matched
		matchedIdxs = [i for (i, b) in enumerate(matches) if b]
		counts = {}
 
		# loop over the matched indexes and maintain a count for
		# each recognized face
		for i in matchedIdxs:
			name = data["names"][i]
			counts[name] = counts.get(name, 0) + 1
 
		# determine the recognized face with the largest number of
		# votes (note: in the event of an unlikely tie Python will
		# select first entry in the dictionary)
		name = max(counts, key=counts.get)
	
	# update the list of names
	names.append(name)
 
# loop over the recognized faces
for ((top, right, bottom, left), name) in zip(boxes, names):
	# draw the predicted face name on the image
	cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)
	y = top - 15 if top - 15 > 15 else top + 15
	cv2.putText(image, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
		0.75, (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)
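The matching loop above is effectively a nearest-neighbor vote over all of the stored encodings. As an alternative, here is a minimal sketch of an explicit k-NN classifier trained on the same encodings.pickle; it assumes scikit-learn is installed, which the scripts in this post do not require:

# Minimal sketch: explicit k-NN classification over the saved 128-d encodings.
# Assumes encodings.pickle was produced by encode_faces.py; scikit-learn is an
# extra dependency not used by the scripts above.
import pickle

import cv2
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

with open("encodings.pickle", "rb") as f:
	data = pickle.load(f)

# fit a k-NN classifier on the known embeddings and their labels
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(data["encodings"], data["names"])

# embed the faces in a query image and predict a name for each one
image = cv2.imread("examples/example_01.png")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
boxes = face_recognition.face_locations(rgb, model="hog")
encodings = face_recognition.face_encodings(rgb, boxes)

for box, encoding in zip(boxes, encodings):
	print(box, knn.predict([encoding])[0])

Note that a plain k-NN classifier always returns one of the known names; to label strangers as "Unknown" you would still need a distance threshold, which compare_faces handles through its tolerance parameter (0.6 by default).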

The result looks like this:

[Figure: sample recognition result from the Dlib-based face recognition system]

For more implementation details, see: https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/

《强袭风暴》不是普通的战场,作为一个独立于主游戏之外的活动,玩家可以用大逃杀的风格来体验《魔兽世界》,不分职业、不分装备(除了你在赛局中捡到的),光是技巧和战略的强弱之分就能决定出谁才是能坚持到最后的赢家。本次活动将会开放单人和双人模式,玩家在加入海盗主题的预赛大厅区域前,可以从强袭风暴角色画面新增好友。游玩游戏将可以累计名望轨迹,《巨龙崛起》和《魔兽世界:巫妖王之怒 经典版》的玩家都可以获得奖励。