In critical applications, such as search-and-rescue in degraded environments, blockages can be prevalent and prevent the effective deployment of certain sensing modalities, particularly vision, due to occlusion and the constrained field of view of onboard cameras. To enable robots to tackle these challenges, we propose a new approach, Proprioceptive Obstacle Detection and Estimation while navigating in clutter (PROBE), which instead uses the robot’s proprioception to infer the presence or absence of occluded planar obstacles while predicting their dimensions and poses in SE(2). As a novel vision-free technique, PROBE simultaneously navigates cluttered environments and detects, on the fly and entirely through physical contact interactions, the presence and dimensions of unseen static or movable obstacles. PROBE is a Transformer neural network that takes as input a history of applied torques and sensed whole-body movements of the robot and returns a parameterized representation of the obstacles in the environment. The effectiveness of PROBE is thoroughly evaluated in simulated environments in Isaac Gym and on a real Unitree Go1 quadruped.
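To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of the input/output interface the abstract describes: a Transformer encoder consuming a history of proprioceptive readings (applied torques and sensed whole-body movement) and emitting per-obstacle parameters (presence, planar dimensions, and an SE(2) pose). All layer sizes, the slot count, and the class and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ProbeSketch(nn.Module):
    """Hypothetical sketch of a PROBE-style model (names and sizes assumed):
    a Transformer encoder maps a proprioceptive history to a fixed number of
    obstacle "slots", each parameterized by presence, dimensions, and pose."""

    def __init__(self, proprio_dim=36, d_model=128, n_heads=4,
                 n_layers=3, max_obstacles=5):
        super().__init__()
        self.embed = nn.Linear(proprio_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Per obstacle slot: presence logit (1), planar dimensions w, l (2),
        # SE(2) pose x, y, theta (3) -> 6 parameters per slot.
        self.head = nn.Linear(d_model, max_obstacles * 6)
        self.max_obstacles = max_obstacles

    def forward(self, history):
        # history: (batch, T, proprio_dim), concatenated applied torques
        # and sensed whole-body movement over T timesteps.
        z = self.encoder(self.embed(history))
        pooled = z.mean(dim=1)  # summarize the contact-interaction history
        return self.head(pooled).view(-1, self.max_obstacles, 6)

model = ProbeSketch()
est = model(torch.randn(2, 50, 36))  # 2 trajectories, 50 timesteps each
print(est.shape)  # torch.Size([2, 5, 6])
```

The fixed-slot output head is one simple way to realize "a parameterized representation of the obstacles"; the actual PROBE parameterization may differ.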