Guidance, Navigation and Control

Dynamic Density-Guided Method for Multi-Robot Formation Transformation

  • CAO Kai,
  • CHEN Yangquan,
  • LI Kang,
  • CHEN Chaobo,
  • YAN Kun,
  • LIU Weichao
  • 1. School of Electronic Information Engineering, Xi’an Technological University, Xi’an 710021, China
  • 2. MESA Lab, University of California, Merced, CA 95343, USA

Received date: 2024-06-06

Revised date: 2024-06-22

Accepted date: 2024-06-24

Online published: 2024-07-04

Abstract

This paper addresses the formation control problem for ground mobile robot formations and proposes a formation transformation method based on dynamic density guidance. A centroidal Voronoi tessellation (CVT) formation control algorithm is used to achieve different formation transformations while avoiding collisions during the transition. Leveraging the properties of the CVT algorithm, a dynamic density is generated by constructing a transition density function that interpolates between the density function of the initial formation and the desired density function; this dynamic density then guides the robots to move and complete the transformation and reconstruction of the formation. Simulation results demonstrate that, compared with using the desired density function directly, the proposed method resolves transition failures that otherwise occur and reduces the average positional error of the formation during the transition.
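As a minimal, hypothetical illustration of the idea described in the abstract (not the paper's actual implementation), the Python sketch below blends an initial and a desired density with a simple linear interpolation and runs discrete, grid-based Lloyd (CVT) iterations so that the robots track the moving density and reshape from a line into a circle. The Gaussian-mixture densities, the linear blend, the grid resolution, and all function and variable names are illustrative assumptions.

```python
# Minimal sketch of density-guided CVT formation transformation.
# Assumptions (not from the paper): the transition density is a linear blend
# rho_t = (1 - s) * rho_init + s * rho_goal, densities are Gaussian mixtures
# centred on the two formations, and the continuous Voronoi-cell integrals
# are approximated on a uniform grid (a discrete weighted Lloyd step).
import numpy as np

def gaussian_mixture(points, centres, sigma=0.08):
    """Density built as a sum of isotropic Gaussians around formation targets."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

def lloyd_step(robots, grid, rho):
    """One weighted Lloyd iteration: move each robot toward the density
    centroid of its (discretised) Voronoi cell."""
    # Assign every grid sample to its nearest robot (Voronoi partition).
    d2 = ((grid[:, None, :] - robots[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    new_pos = robots.copy()
    for i in range(len(robots)):
        mask = owner == i
        w = rho[mask]
        if w.sum() > 1e-12:
            new_pos[i] = (grid[mask] * w[:, None]).sum(axis=0) / w.sum()
    return new_pos

# Workspace grid used to approximate the Voronoi-cell integrals.
xs = np.linspace(0.0, 1.0, 60)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Hypothetical initial (line) and desired (circle) formations of 8 robots.
n = 8
init_targets = np.stack([np.linspace(0.2, 0.8, n), np.full(n, 0.2)], axis=1)
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
goal_targets = 0.5 + 0.25 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

robots = init_targets + 0.01 * np.random.default_rng(0).standard_normal((n, 2))
steps = 50
for k in range(steps):
    s = (k + 1) / steps                      # transition parameter in [0, 1]
    rho = ((1.0 - s) * gaussian_mixture(grid, init_targets)
           + s * gaussian_mixture(grid, goal_targets))
    robots = lloyd_step(robots, grid, rho)   # robots track the moving density peaks

print(np.round(robots, 3))                   # final positions lie near the circle
```

In this sketch the transition parameter s advances by a fixed amount per iteration; the paper's transition density function may be constructed differently, but the guiding role of the density-weighted CVT centroids is the same.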

Cite this article

CAO Kai, CHEN Yangquan, LI Kang, CHEN Chaobo, YAN Kun, LIU Weichao. Dynamic Density-Guided Method for Multi-Robot Formation Transformation[J]. Journal of Shanghai Jiaotong University, 2024, 58(11): 1783-1797. DOI: 10.16183/j.cnki.jsjtu.2024.209
