We introduce a novel framework for reconstructing dynamic human–object interactions from monocular video that overcomes the challenges of occlusion and temporal inconsistency. Traditional 3D reconstruction methods typically assume static objects or full visibility of dynamic subjects, and their performance degrades when these assumptions are violated, particularly under mutual occlusion. To address this, our framework leverages amodal completion to infer the complete structure of partially obscured regions. Unlike conventional approaches that operate on individual frames, our method integrates temporal context, enforcing coherence across the video sequence to incrementally refine and stabilize reconstructions. This template-free strategy adapts to varying conditions without relying on predefined object models, significantly improving the recovery of intricate details in dynamic scenes. We validate our approach using 3D Gaussian Splatting on challenging monocular videos, demonstrating superior precision in handling occlusions and maintaining temporal stability compared to existing techniques.
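To make the idea of temporally consistent amodal completion concrete, the toy sketch below aggregates per-frame visible masks across time so that briefly occluded regions are filled in from neighboring frames. Everything in it is an illustrative assumption rather than the paper's actual method: the function name `temporally_consistent_amodal_masks`, the EMA-based forward/backward fusion, and the simplification that masks are already aligned across frames (no camera or subject motion). The paper's pipeline instead uses learned amodal completion together with 3D Gaussian Splatting.

```python
import numpy as np

def temporally_consistent_amodal_masks(visible_masks, momentum=0.8, threshold=0.5):
    """Toy temporal aggregation of per-frame visible masks into amodal estimates.

    visible_masks: (T, H, W) binary array; 1 where the subject is visible
                   in that frame (occluded pixels are 0 even if the subject
                   is actually present there).
    Returns a (T, H, W) binary array of "completed" masks, where evidence
    from neighboring frames fills in regions hidden in the current frame.
    Assumes masks are spatially aligned across frames, which a real system
    would have to enforce via tracking or optical flow.
    """
    T = visible_masks.shape[0]

    # Forward pass: carry an exponential moving average of past visibility,
    # so pixels seen recently stay "on" through brief occlusions.
    state = np.zeros(visible_masks.shape[1:], dtype=float)
    forward = np.zeros(visible_masks.shape, dtype=float)
    for t in range(T):
        state = momentum * state + (1.0 - momentum) * visible_masks[t]
        forward[t] = np.maximum(state, visible_masks[t])

    # Backward pass: the same idea running from the future, so occlusions
    # that end later in the clip are also filled in.
    state = np.zeros(visible_masks.shape[1:], dtype=float)
    backward = np.zeros(visible_masks.shape, dtype=float)
    for t in range(T - 1, -1, -1):
        state = momentum * state + (1.0 - momentum) * visible_masks[t]
        backward[t] = np.maximum(state, visible_masks[t])

    # Fuse both directions and binarize.
    completed = np.maximum(forward, backward)
    return (completed > threshold).astype(visible_masks.dtype)
```

With momentum 0.8, a pixel that was fully visible stays above the 0.5 threshold for roughly three occluded frames before decaying away, which is the sense in which this sketch trades responsiveness for temporal stability.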
Our Occlusion-Aware, Temporally Consistent Amodal Completion framework enables photorealistic and animatable 3D human–object interaction reconstruction from monocular video using 3D Gaussian Splatting.
Conditioned on motion trajectories from the input video, our method enables realistic animation of novel human–object pairs while preserving geometry, appearance, and temporal coherence.
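As a rough illustration of trajectory-conditioned animation (not the authors' implementation), the sketch below applies a sequence of rigid transforms, one per frame, to a set of 3D Gaussian centers. The function name `animate_centers` and the rigid-motion assumption are ours; a full 3DGS animation would also rotate each Gaussian's covariance, which we omit here.

```python
import numpy as np

def animate_centers(centers, trajectory):
    """Apply per-frame rigid transforms (R, t) to Gaussian centers.

    centers:    (N, 3) array of 3D Gaussian means.
    trajectory: iterable of (R, t) pairs, R a (3, 3) rotation matrix
                and t a (3,) translation, one pair per output frame.
    Returns a list of (N, 3) arrays, one per frame.
    """
    return [centers @ R.T + t for R, t in trajectory]

# Example: translate a single Gaussian along x over three frames.
identity = np.eye(3)
traj = [(identity, np.array([0.1 * k, 0.0, 0.0])) for k in range(3)]
frames = animate_centers(np.zeros((1, 3)), traj)
```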
@article{doh2025occlusion,
  title={Occlusion-Aware Temporally Consistent Amodal Completion for 3D Human-Object Interaction Reconstruction},
  author={Doh, Hyungjun and Lee, Dong In and Chi, Seunggeun and Huang, Pin-Hao and Lee, Kwonjoon and Kim, Sangpil and Ramani, Karthik},
  journal={arXiv preprint arXiv:2507.08137},
  year={2025}
}