PROBE: Proprioceptive Obstacle Detection and Estimation while Navigating in Clutter

Rutgers, The State University of New Jersey

Abstract

In critical applications such as search-and-rescue in degraded environments, blockages can be prevalent and prevent the effective deployment of certain sensing modalities, particularly vision, due to occlusion and the constrained field of view of onboard cameras. To enable robots to tackle these challenges, we propose a new approach, Proprioceptive Obstacle Detection and Estimation while navigating in clutter (PROBE), which instead utilizes the robot’s proprioception to infer the presence or absence of occluded planar obstacles while predicting their dimensions and poses in SE(2). As a novel vision-free technique, PROBE simultaneously navigates cluttered environments and detects on the fly the presence and dimensions of unseen static and movable obstacles entirely through physical contact interactions. PROBE is a Transformer neural network that receives as input a history of applied torques and sensed whole-body movements of the robot and returns a parameterized representation of the obstacles in the environment. The effectiveness of PROBE is thoroughly evaluated in simulated environments in Isaac Gym and on a real Unitree Go1 quadruped.
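To make the input/output structure concrete, below is a minimal sketch (not the authors' released implementation) of a proprioception-to-obstacle Transformer in PyTorch. The history length, number of joints, body-state dimension, layer sizes, maximum obstacle count, and the per-obstacle output parameterization (presence logit, movable/static logit, SE(2) pose, box dimensions) are all illustrative assumptions, chosen only to mirror the description in the abstract.

```python
# Hypothetical sketch of a PROBE-style estimator: a Transformer encoder over a
# history of applied joint torques and sensed whole-body motion, pooled into a
# parameterized set of planar obstacles. All sizes below are assumptions.
import torch
import torch.nn as nn


class ProprioceptiveObstacleEstimator(nn.Module):
    def __init__(self, history_len=50, num_joints=12, body_state_dim=6,
                 d_model=128, nhead=4, num_layers=4, max_obstacles=3):
        super().__init__()
        # Per-timestep input: applied joint torques + sensed whole-body movement.
        step_dim = num_joints + body_state_dim
        self.embed = nn.Linear(step_dim, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, history_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Each obstacle slot: [presence logit, movable logit, x, y, theta, length, width]
        self.head = nn.Linear(d_model, max_obstacles * 7)
        self.max_obstacles = max_obstacles

    def forward(self, torques, body_motion):
        # torques:     (batch, history_len, num_joints)
        # body_motion: (batch, history_len, body_state_dim)
        x = torch.cat([torques, body_motion], dim=-1)
        x = self.embed(x) + self.pos_embed
        x = self.encoder(x)          # (batch, history_len, d_model)
        summary = x.mean(dim=1)      # pool over the contact-interaction history
        out = self.head(summary)
        return out.view(-1, self.max_obstacles, 7)


if __name__ == "__main__":
    model = ProprioceptiveObstacleEstimator()
    torques = torch.randn(2, 50, 12)      # assumed 12 actuated joints (Go1-like)
    body_motion = torch.randn(2, 50, 6)   # assumed 6-D body velocity/pose signal
    obstacles = model(torques, body_motion)
    print(obstacles.shape)                # torch.Size([2, 3, 7])
```

The mean-pooled summary and fixed obstacle-slot head are one simple design choice for mapping a variable contact history to a fixed-size obstacle set; the actual system may use a different pooling or decoding scheme.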

System Pipeline

PROBE Inference Pipeline
Real Robot Trials
For all the demonstrations below, ground-truth obstacles are colored yellow if movable and red if static. Predicted obstacles are colored orange if movable and blue if static. The robot pose is shown in green.
Category 1: Easy (One movable or one static obstacle)
Category 2: Medium (One static behind one movable)
Category 3: Hard (Two static obstacles behind one movable)