1. Acquiring and Displaying Point Clouds
For this project I use a RealSense-series camera to capture depth information, obtaining aligned RGB and depth images.
The capture code is as follows:
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2017 Intel Corporation. All Rights Reserved.

#####################################################
##         Align Depth to Color and get data       ##
#####################################################

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2
import time
import os

# Declare pointcloud object, for calculating pointclouds and texture mappings
pc = rs.pointcloud()
# We want the points object to be persistent so we can display the last cloud when a frame drops
points = rs.points()

# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
# different resolutions of color and depth streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
profile = pipeline.start(config)

# Getting the depth sensor's depth scale (see rs-align example for explanation)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

# We will be removing the background of objects more than
# clipping_distance_in_meters meters away
clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale

# Create an align object
# rs.align allows us to perform alignment of depth frames to other frames
# The "align_to" is the stream type to which we plan to align depth frames.
align_to = rs.stream.color
align = rs.align(align_to)

save_dir = "RGBD_dataset"

def check_dir(save_dir):
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    return save_dir

n = 0

# Streaming loop
try:
    while True:
        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()
        # frames.get_depth_frame() is a 640x360 depth image

        # Align the depth frame to color frame
        aligned_frames = align.process(frames)

        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
        color_frame = aligned_frames.get_color_frame()

        # Validate that both frames are valid
        if not aligned_depth_frame or not color_frame:
            continue

        color_image = np.asanyarray(color_frame.get_data())
        save_dir = check_dir(save_dir)
        name = str(n)
        depth_image = np.asanyarray(aligned_depth_frame.get_data())

        # Save the 16-bit depth image as PNG and the color image as JPG
        # print("Saving to *.png ...")
        depth_name = name + ".png"
        cv2.imwrite(os.path.join(save_dir, depth_name), depth_image)
        image_name = name + ".jpg"
        print("Saving to *.jpg ...")
        cv2.imwrite(os.path.join(save_dir, image_name), color_image)
        n += 1
        print("Frame num is %s" % n)
        if n == 1000:
            print("You got 1000 RGBD images!")
            break

        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', color_image)
        key = cv2.waitKey(1)
        # Press esc or 'q' to close the image window
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()
For displaying the point cloud, see my blog post: Open3D — Converting RGBD Images to Point Clouds (pcd) and Displaying Them.
2. Key Steps in Face Point Cloud Processing
(1) Point cloud segmentation
Take the tip of the nose as the center of a sphere with an 8 cm radius, keep only the points inside that sphere, and discard invalid points. This segments out the face point cloud and removes background interference.
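The sphere crop above can be sketched in a few lines of NumPy (a minimal sketch, not the blogger's actual code; `segment_face` and its arguments are illustrative names). It first drops zero rows, since RealSense reports depth 0 where it has no data, then keeps only points within 8 cm of the nose tip:

```python
import numpy as np

def segment_face(points, nose_tip, radius=0.08):
    """Keep only points within `radius` meters of the nose tip.

    `points` is an (N, 3) array in meters; `nose_tip` is a length-3 array.
    Zero rows (invalid depth readings) are discarded first.
    """
    points = np.asarray(points, dtype=np.float64)
    # Remove invalid points (all-zero coordinates from missing depth)
    valid = np.any(points != 0, axis=1)
    points = points[valid]
    # Keep points inside the sphere centered at the nose tip
    dist = np.linalg.norm(points - np.asarray(nose_tip, dtype=np.float64), axis=1)
    return points[dist <= radius]
```

With the nose tip taken from a detected landmark, everything farther than 8 cm (ears of bystanders, walls, shoulders) falls outside the sphere and is removed in one vectorized pass.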
(2) Computing the face's rotation in space and aligning the face
From the facial landmarks, estimate the head pose, i.e. the three rotation angles Pitch, Yaw, and Roll.
Then apply a homogeneous coordinate transform to obtain the aligned face point cloud.
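A minimal NumPy sketch of this alignment step (assumptions: angles are in radians, the rotation order is Rz·Ry·Rx, and the cloud is rotated about the nose tip; the function names are illustrative, not the blogger's actual code). The inverse head rotation is packed into a single 4x4 homogeneous transform:

```python
import numpy as np

def euler_to_matrix(pitch, yaw, roll):
    """Rotation matrix from pitch (about x), yaw (about y), roll (about z),
    composed as R = Rz @ Ry @ Rx. Angles are in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def align_face(points, pitch, yaw, roll, nose_tip):
    """Undo the estimated head rotation about the nose tip
    using a single 4x4 homogeneous transform."""
    nose_tip = np.asarray(nose_tip, dtype=np.float64)
    R = euler_to_matrix(pitch, yaw, roll)
    T = np.eye(4)
    T[:3, :3] = R.T                          # inverse rotation (R is orthogonal)
    T[:3, 3] = nose_tip - R.T @ nose_tip     # rotate about the nose tip, not the origin
    homo = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    return (homo @ T.T)[:, :3]
```

Applying `align_face` with the estimated pose angles rotates the cloud back to a frontal view, so every face in the dataset ends up in the same canonical orientation.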
My result after processing is shown below (displayed in MeshLab):
(3) Normal estimation: compute the normal vector of each point using Open3D.
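Open3D exposes this as `estimate_normals` on `open3d.geometry.PointCloud`, which fits a plane to each point's neighborhood. The underlying PCA step for a single neighborhood can be sketched in plain NumPy (a sketch of the idea, not Open3D's actual implementation; `plane_normal` is an illustrative name):

```python
import numpy as np

def plane_normal(neighbors):
    """Normal of the best-fit plane through a point's neighborhood (PCA).

    The normal is the eigenvector of the neighborhood's covariance matrix
    with the smallest eigenvalue -- the direction of least variance.
    """
    pts = np.asarray(neighbors, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-eigenvalue direction
```

Open3D repeats this per point over a k-nearest-neighbor or radius search and can additionally orient the normals consistently toward the camera, which matters for the projection step below.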
(4) Projecting point cloud information onto a plane: according to the camera model, project the normals of the depth data onto the image plane and normalize them, producing a 3-channel image visible to the human eye.
My projection result is shown below:
Note: for processing open-source 3D face point cloud data, see my blog post: 3D Face Recognition — Converting Point Clouds into Trainable Images.
All of the point cloud processing code above is written in Python. For technical discussion, please contact the blogger (QR code below).