Design of Spatial Pose Recognition System Based on Structured Light 3D Vision

Tan Xiaobing, Li Boming, Tao Wenhua

Integrated Circuits and Embedded Systems ›› 2023, Vol. 23 ›› Issue (11): 33-36.



Abstract

In order to improve the precision and convergence speed of spatial pose recognition of product parts, a method of spatial pose recognition based on structured light 3D vision is proposed. First, image information of product parts is captured by a projector and a camera, depth information is obtained by the phase-shift method, and 3D point cloud data are obtained by point cloud reconstruction from the depth map. Then, a Point Network (PointNet) model is established for feature processing and classification of the 3D point cloud data. Finally, an improved Iterative Closest Point (ICP) algorithm is used to register the 3D point cloud data, thereby realizing pose recognition of product parts. Experimental results show that, in classifying point cloud features of product parts, the method achieves an accuracy of about 96% and a recall rate stable at about 92%; in registration accuracy and convergence speed, it outperforms the other two compared methods, further verifying its effectiveness and feasibility.
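The phase-shift depth-acquisition step mentioned in the abstract can be illustrated with standard 4-step phase shifting, in which four fringe images offset by π/2 yield the wrapped phase per pixel. This is a minimal sketch of the textbook technique, not the authors' implementation; the function name `wrapped_phase` and the simulated fringe parameters are assumptions for illustration:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four fringe images shifted by pi/2.

    For a 4-step sequence I_k = A + B*cos(phi + k*pi/2), k = 0..3:
        I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Simulate fringe intensities for a known phase map and verify recovery.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)   # ground-truth phase
a, b = 0.5, 0.4                                      # background / modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = wrapped_phase(*frames)
print(np.max(np.abs(phi_rec - phi)))                 # near machine precision
```

The wrapped phase must still be unwrapped and triangulated against the projector-camera calibration to produce the depth map that feeds the point cloud reconstruction.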


Key words

3D vision / spatial pose recognition / PointNet model / ICP algorithm / point cloud data

Cite this article

Tan Xiaobing, Li Boming, Tao Wenhua. Design of Spatial Pose Recognition System Based on Structured Light 3D Vision[J]. Integrated Circuits and Embedded Systems. 2023, 23(11): 33-36
CLC number: TP31

