Soft Gripper Grasping Based on Complete Grasp Configuration and Multi-Stage Network

  • School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

LIU Wenhai (b. 1991), male, from Huixian, Henan Province, is a Ph.D. candidate; his research interests are deep learning and robotic grasping.

Online published: 2020-06-02

Funding

National Natural Science Foundation of China (51675329, 51675342); State Key Laboratory of Mechanical System and Vibration (GZ2016KF001, GKZD020018); Shanghai Jiao Tong University Medical-Engineering Cross Research Fund (YG2014MS12); Open Project of the State Key Laboratory of Intelligent Manufacturing of Special Vehicles and Transmission System (GZ2016KF001)

Citation format

LIU Wenhai, HU Jie, WANG Weiming. Soft Gripper Grasping Based on Complete Grasp Configuration and Multi-Stage Network[J]. Journal of Shanghai Jiao Tong University, 2020, 54(5): 507-514. DOI: 10.16183/j.cnki.jsjtu.2020.05.008

Abstract

Visually guided robotic grasping with a soft gripper depends on the vision system outputting the correct grasp position, grasp angle, and grasp depth; therefore, a complete grasp configuration model and a multi-task loss function for a multi-finger soft gripper are proposed. A two-stage deep learning network based on anchors and rotated boxes is designed to realize a direct mapping from images to soft-gripper grasp commands. The performance of the network is analyzed on the public Cornell grasping dataset and a self-built dataset. The results show that the two-stage network based on the multi-task loss and anchors with rotated boxes improves the accuracy of multi-output grasp detection and increases the success rate of robotic grasping. Finally, a soft robotic grasping system is constructed, and grasping experiments show that the proposed method is robust to visual localization errors, achieves a 96% grasp success rate on different fruits, and generalizes well to grasping fruit peel.
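The complete grasp configuration described above pairs an oriented grasp rectangle in the image with a grasp depth along the approach axis. A minimal sketch of such a representation follows; the field names and units are illustrative assumptions, not the paper's notation:

```python
import math
from dataclasses import dataclass

@dataclass
class GraspConfig:
    """Hypothetical complete grasp configuration for a soft gripper:
    an oriented grasp rectangle in the image plus a grasp depth."""
    x: float      # grasp center, image column (pixels)
    y: float      # grasp center, image row (pixels)
    theta: float  # grasp angle in radians, measured from the image x-axis
    w: float      # rectangle width, along the gripper opening direction
    h: float      # rectangle height, along the finger thickness direction
    depth: float  # grasp depth along the approach axis (e.g. mm)

    def corners(self):
        """Four corners of the rotated grasp rectangle."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        offsets = ((-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                   (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2))
        return [(self.x + dx * c - dy * s, self.y + dx * s + dy * c)
                for dx, dy in offsets]
```

Adding the depth field is what distinguishes this representation from the classic five-dimensional grasp rectangle: the detector must regress it alongside position and angle for the soft fingers to close at the right height.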

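The multi-task loss can be read as a weighted sum of a graspability classification term and regression terms for position, angle, and depth. A minimal sketch under that assumption; the weights and the smooth-L1 form are common detection-network choices, not necessarily the paper's exact formulation:

```python
import math

def smooth_l1(x):
    """Smooth L1 (Huber-like) penalty, standard in box-regression heads."""
    x = abs(x)
    return 0.5 * x * x if x < 1.0 else x - 0.5

def multi_task_loss(pred, target, w_cls=1.0, w_pos=1.0, w_ang=2.0, w_dep=1.0):
    """pred and target are dicts with keys:
    'cls' (graspability score in (0, 1) / label in {0, 1}),
    'pos' ((x, y) tuple), 'angle' (radians), 'depth' (scalar)."""
    eps = 1e-7
    p = min(max(pred["cls"], eps), 1.0 - eps)
    t = target["cls"]
    # binary cross-entropy on the graspability score
    l_cls = -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    # smooth-L1 regression on position, angle, and depth
    l_pos = sum(smooth_l1(a - b) for a, b in zip(pred["pos"], target["pos"]))
    l_ang = smooth_l1(pred["angle"] - target["angle"])
    l_dep = smooth_l1(pred["depth"] - target["depth"])
    return w_cls * l_cls + w_pos * l_pos + w_ang * l_ang + w_dep * l_dep
```

Training minimizes this sum over anchor-matched proposals; weighting the angle term more heavily is one way a rotated-box detector can prioritize orientation accuracy.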