REASAN: Learning Reactive Safe Navigation for Legged Robots

University of Groningen, Linköping University

Abstract

We present a modular end-to-end framework for reactive navigation of legged robots in complex dynamic environments using a single light detection and ranging (LiDAR) sensor. The system comprises four simulation-trained modules: three reinforcement-learning (RL) policies for locomotion, safety shielding, and navigation, and a transformer-based exteroceptive estimator that processes raw point-cloud inputs. Decomposing the complex legged motor-control task in this way allows lightweight neural networks with simple architectures to be trained using standard RL practices with targeted reward shaping and curriculum design, without reliance on heuristics or sophisticated policy-switching mechanisms. We conduct comprehensive ablations to validate our design choices and demonstrate improved robustness compared to existing approaches on challenging navigation tasks. The resulting reactive safe navigation (REASAN) system achieves fully onboard, real-time reactive navigation in both single- and multi-robot settings in complex environments.
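The abstract only states the high-level decomposition, so the sketch below illustrates one way such a modular pipeline could be wired together at inference time. Every class and method name here (ExteroceptiveEstimator, NavigationPolicy, SafetyShield, LocomotionPolicy, control_step) is a hypothetical placeholder with toy dimensions and random weights, not the authors' implementation; the sketch only assumes the stated structure: a transformer-based estimator encodes raw LiDAR points, a navigation policy proposes a velocity command, a safety shield filters it, and a locomotion policy turns it into joint targets.

"""Minimal sketch of a REASAN-style modular inference loop (hypothetical).

The module boundaries follow the paper's description (exteroceptive
estimator -> navigation policy -> safety shield -> locomotion policy),
but every network here is a stand-in with random weights and toy sizes.
"""
import numpy as np

rng = np.random.default_rng(0)


class ExteroceptiveEstimator:
    """Stand-in for the transformer-based estimator: maps a raw LiDAR
    point cloud (N x 3) to a fixed-size exteroceptive feature vector."""

    def __init__(self, feat_dim: int = 64):
        self.w = rng.standard_normal((3, feat_dim)) * 0.1

    def __call__(self, points: np.ndarray) -> np.ndarray:
        # Per-point embedding followed by max-pooling as a cheap
        # permutation-invariant aggregation (placeholder for attention).
        return np.tanh(points @ self.w).max(axis=0)


class NavigationPolicy:
    """Stand-in RL policy: goal direction + exteroceptive features
    -> desired planar velocity command (vx, vy, yaw_rate)."""

    def __init__(self, feat_dim: int = 64):
        self.w = rng.standard_normal((feat_dim + 2, 3)) * 0.1

    def __call__(self, goal_xy: np.ndarray, feat: np.ndarray) -> np.ndarray:
        return np.tanh(np.concatenate([goal_xy, feat]) @ self.w)


class SafetyShield:
    """Stand-in safety-shielding policy: adjusts the commanded velocity
    based on exteroceptive features (e.g., slowing down near obstacles)."""

    def __init__(self, feat_dim: int = 64):
        self.w = rng.standard_normal((feat_dim + 3, 3)) * 0.1

    def __call__(self, cmd: np.ndarray, feat: np.ndarray) -> np.ndarray:
        correction = np.tanh(np.concatenate([cmd, feat]) @ self.w)
        return np.clip(cmd + 0.5 * correction, -1.0, 1.0)


class LocomotionPolicy:
    """Stand-in locomotion policy: velocity command + proprioception
    -> 12 joint position targets for a quadruped."""

    def __init__(self, proprio_dim: int = 48):
        self.w = rng.standard_normal((proprio_dim + 3, 12)) * 0.1

    def __call__(self, cmd: np.ndarray, proprio: np.ndarray) -> np.ndarray:
        return np.tanh(np.concatenate([cmd, proprio]) @ self.w)


def control_step(points, goal_xy, proprio, estimator, nav, shield, loco):
    """One inference step of the modular pipeline."""
    feat = estimator(points)          # LiDAR scan -> exteroceptive features
    cmd = nav(goal_xy, feat)          # features + goal -> velocity command
    safe_cmd = shield(cmd, feat)      # safety-filtered velocity command
    return loco(safe_cmd, proprio)    # command -> joint position targets


if __name__ == "__main__":
    estimator, nav = ExteroceptiveEstimator(), NavigationPolicy()
    shield, loco = SafetyShield(), LocomotionPolicy()
    points = rng.uniform(-5.0, 5.0, size=(1024, 3))   # fake LiDAR scan
    goal_xy = np.array([3.0, 1.0])                    # goal in robot frame
    proprio = rng.standard_normal(48)                 # fake proprioception
    print(control_step(points, goal_xy, proprio, estimator, nav, shield, loco))

Keeping the four modules behind such narrow interfaces (point cloud in, features out; features in, command out; and so on) is what lets each network stay small and be trained separately, which is the design argument the abstract makes.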

System Pipeline

Multi-Robot Tests

Multi-robot reactive navigation

Multi-robot reactive navigation in dynamic environments

More Experiments

Static Obstacles

Dynamic Obstacles

Ball Avoidance

Low-Light Environment

Dead-End Navigation

Citation

@article{yuan2025reasan,
  title={REASAN: Learning Reactive Safe Navigation for Legged Robots}, 
  author={Yuan, Qihao and Cao, Ziyu and Cao, Ming and Li, Kailai},
  journal={arXiv preprint arXiv:2512.09537},
  year={2025}
}