Automation System & Theory

Adaptive Human-Robot Collaboration Control Based on Optimal Admittance Parameters

YU Xinyi (禹鑫燚), WU Jiaxin (吴加鑫), XU Chengjun (许成军), LUO Huizhen (罗惠珍), OU Linlin∗ (欧林林)
  • (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)

Received date: 2020-12-25

Online published: 2022-09-03

Abstract

To assist the operator in performing human-robot collaboration tasks and to optimize task performance, an adaptive control method based on optimal admittance parameters is proposed. An overall control structure with an inner loop and an outer loop is first established: the inner loop is responsible for robot control, and the outer loop for task optimization. An inner-loop robot controller integrating a barrier Lyapunov function and radial basis function neural networks is then proposed, which enables a robot with unknown dynamics to safely behave, as sensed by the operator, like a prescribed robot admittance model. Subsequently, the optimal parameters of the robot admittance model are obtained in the outer loop to minimize the task tracking error and the interaction force. The optimization of the admittance model is transformed into a linear quadratic regulator problem by constructing a human-robot collaboration system model that includes the unknown dynamics of the operator and the details of the task performance. To relax the requirement for an exact system model, integral reinforcement learning is employed to solve the linear quadratic regulator problem. In addition, an auxiliary force is designed to help the operator complete the specific task better. Compared with the traditional control scheme, the safety and interaction performance of the human-robot collaboration system are improved. The effectiveness of the proposed method is verified through two numerical simulations, and a practical human-robot collaboration experiment is further carried out to demonstrate its performance.
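
The inner loop makes the robot, as felt by the operator, render a prescribed admittance model. The abstract does not give the paper's model structure or parameter values, so the following is a minimal Python sketch of a generic second-order admittance law with placeholder virtual mass, damping, and stiffness, integrated by forward Euler; the optimized parameters produced by the outer loop would replace the placeholder values used here.

```python
import numpy as np

# Minimal sketch of a prescribed admittance model (illustrative parameters only):
#   M_d * x_ddot + D_d * x_dot + K_d * (x - x_d) = f_h
# The inner-loop controller is assumed to make the robot end-effector track the
# trajectory generated by this model in response to the human force f_h.

M_d = 1.0   # virtual mass      (placeholder value)
D_d = 8.0   # virtual damping   (placeholder value)
K_d = 20.0  # virtual stiffness (placeholder value)

def admittance_step(x, x_dot, x_d, f_h, dt=0.001):
    """One Euler step of the admittance dynamics driven by the human force f_h."""
    x_ddot = (f_h - D_d * x_dot - K_d * (x - x_d)) / M_d
    x_dot_new = x_dot + x_ddot * dt
    x_new = x + x_dot_new * dt
    return x_new, x_dot_new

# Example: response to a constant 5 N push toward a fixed reference x_d = 0.
x, x_dot = 0.0, 0.0
for _ in range(2000):                      # 2 s of simulated interaction
    x, x_dot = admittance_step(x, x_dot, x_d=0.0, f_h=5.0)
print(f"steady-state displacement ≈ {x:.3f} m (expected f_h / K_d = 0.25 m)")
```

With these placeholder values, the response to a constant 5 N push settles at f_h / K_d = 0.25 m, which is what the script prints.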
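
In the outer loop, selecting the admittance parameters is cast as a linear quadratic regulator (LQR) problem and solved by integral reinforcement learning (IRL) so that full knowledge of the system model is not required. The paper's system matrices, weights, and exact algorithm are not given in this abstract; the sketch below therefore uses a toy second-order model with placeholder A, B, Q, and R and implements the standard IRL policy-iteration idea (policy evaluation from integrated cost along measured trajectories, policy improvement using only the input matrix), purely as an illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy collaboration model (placeholder matrices, NOT from the paper):
# state x = [tracking error, error rate], input u acts through B.
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalizes task tracking error (placeholder weights)
R = np.array([[1.0]])      # penalizes interaction effort  (placeholder weight)

def phi(x):
    """Quadratic basis so that x' P x = theta . phi(x) for symmetric P."""
    return np.array([x[0]**2, 2.0 * x[0] * x[1], x[1]**2])

def rollout(x0, K, T=0.5, dt=1e-3):
    """Integrate the closed loop u = -K x over [0, T]; return the final state and
    the integrated cost  int (x'Qx + u'Ru) dt.  Here the data are produced by
    simulating A, but the IRL update itself uses only the measured trajectory."""
    x, cost = x0.copy(), 0.0
    for _ in range(int(T / dt)):
        u = -K @ x
        cost += (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt
    return x, cost

# Integral reinforcement learning (policy iteration) for the LQR problem.
K = np.array([[1.0, 1.0]])                      # any initial stabilizing gain
rng = np.random.default_rng(0)
for it in range(8):
    Phi, c = [], []
    for _ in range(12):                         # collect data intervals
        x0 = rng.uniform(-1.0, 1.0, size=2)
        xT, cost = rollout(x0, K)
        Phi.append(phi(x0) - phi(xT))           # x0'P x0 - xT'P xT = integrated cost
        c.append(cost)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    P = np.array([[theta[0], theta[1]],
                  [theta[1], theta[2]]])        # policy evaluation result
    K = np.linalg.solve(R, B.T @ P)             # policy improvement (needs only B)

P_star = solve_continuous_are(A, B, Q, R)       # model-based solution, for comparison
print("IRL gain :", K.round(3))
print("ARE gain :", np.linalg.solve(R, B.T @ P_star).round(3))
```

For comparison, the script also prints the gain obtained from the model-based algebraic Riccati equation; the IRL iteration should converge to approximately the same values without ever using the drift matrix A in its updates.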

Cite this article

YU Xinyi (禹鑫燚), WU Jiaxin (吴加鑫), XU Chengjun (许成军), LUO Huizhen (罗惠珍), OU Linlin∗ (欧林林). Adaptive Human-Robot Collaboration Control Based on Optimal Admittance Parameters [J]. Journal of Shanghai Jiaotong University (Science), 2022, 27(5): 589-601. DOI: 10.1007/s12204-022-2460-3
