Publications
2025
- ACM MM 2025: Temporally Consistent Amodal Completion for 3D Human-Object Interaction Reconstruction. Hyungjun Doh, Dong In Lee, Seunggeun Chi, Pin-Hao Huang, Kwonjoon Lee, Sangpil Kim, and Karthik Ramani. Proceedings of the 33rd ACM International Conference on Multimedia, 2025.
We introduce a novel framework for reconstructing dynamic human-object interactions from monocular video that overcomes challenges associated with occlusions and temporal inconsistencies. Traditional 3D reconstruction methods typically assume static objects or full visibility of dynamic subjects, and their performance degrades when these assumptions are violated, particularly in scenarios with mutual occlusion. To address this, our framework leverages amodal completion to infer the complete structure of partially obscured regions. Unlike conventional approaches that operate on individual frames, our method integrates temporal context, enforcing coherence across video sequences to incrementally refine and stabilize reconstructions. This template-free strategy adapts to varying conditions without relying on predefined object models, significantly enhancing the recovery of intricate details in dynamic scenes. We validate our approach using 3D Gaussian Splatting on challenging monocular videos, demonstrating superior precision in handling occlusions and maintaining temporal stability compared to existing techniques.
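The paper's full pipeline is not reproduced here, but the core idea of enforcing temporal coherence on per-frame amodal predictions can be illustrated with a minimal sketch: each frame's (noisy) amodal mask is blended with a running estimate accumulated over previous frames. The exponential-moving-average update and the function name below are illustrative assumptions, not the paper's actual method, and inter-frame motion compensation is deliberately omitted:

```python
import numpy as np

def stabilize_amodal_masks(per_frame_masks: np.ndarray, momentum: float = 0.8) -> np.ndarray:
    """Blend each frame's amodal mask with a running temporal estimate.

    per_frame_masks: (T, H, W) float array in [0, 1], one possibly noisy
    amodal occupancy mask per frame. A real system would also compensate
    for camera/object motion between frames; this sketch assumes a
    roughly static viewpoint (an illustrative simplification).
    """
    smoothed = np.empty_like(per_frame_masks)
    running = per_frame_masks[0]
    for t, mask in enumerate(per_frame_masks):
        # Exponential moving average: keep most of the accumulated
        # structure, let the current observation correct it gradually.
        running = momentum * running + (1.0 - momentum) * mask
        smoothed[t] = running
    return smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.clip(rng.normal(0.5, 0.2, size=(30, 64, 64)), 0.0, 1.0)
    stable = stabilize_amodal_masks(noisy)
    # Temporal smoothing should reduce frame-to-frame variance.
    print(stable.shape, float(stable.std()) < float(noisy.std()))
```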
- CHI 2025: CARING-AI: Towards Authoring Context-aware Augmented Reality INstruction through Generative Artificial Intelligence. Jingyu Shi, Rahul Jain, Seunggeun Chi, Hyungjun Doh, Hyung-gun Chi, Alexander J. Quinn, and Karthik Ramani. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 2025.
Context-aware AR instruction enables adaptive and in-situ learning experiences. However, hardware limitations and expertise requirements constrain the creation of such instructions. With recent developments in generative artificial intelligence (GenAI), current research tries to tackle these constraints by deploying AI-generated content (AIGC) in AR applications. However, our preliminary study with six AR practitioners revealed that current AIGC lacks the contextual information needed to adapt to varying application scenarios and is therefore of limited use for authoring. To harness the generative power of GenAI for authoring AR instructions while capturing context, we developed CARING-AI, an AR system for authoring context-aware, humanoid-avatar-based instructions with GenAI. By navigating the environment, users naturally provide contextual information used to generate humanoid-avatar animations as AR instructions that blend into the context both spatially and temporally. We showcase three application scenarios of CARING-AI (Asynchronous, Remote, and Ad Hoc Instructions) based on a design space of AIGC in AR instructions. In two user studies (N=12), we assessed the usability of CARING-AI and demonstrated the ease and effectiveness of authoring with GenAI.
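The abstract does not specify CARING-AI's internals; as a hedged illustration of the general pattern it describes (packaging spatial context captured while the user navigates into a generation request), the sketch below uses hypothetical class and field names throughout:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialAnchor:
    """A point of interest the user walked past while scanning the scene."""
    label: str                             # e.g. "table saw", "workbench"
    position: tuple[float, float, float]   # world coordinates in meters

@dataclass
class AuthoringContext:
    """Hypothetical container for context gathered during navigation."""
    task: str
    anchors: list[SpatialAnchor] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Flatten the captured context into a text prompt for a GenAI backend."""
        scene = "; ".join(
            f"{a.label} at ({a.position[0]:.1f}, {a.position[1]:.1f}, {a.position[2]:.1f})"
            for a in self.anchors
        )
        return (
            f"Generate a step-by-step humanoid avatar animation for: {self.task}. "
            f"Scene objects: {scene}. Keep each step anchored to the named object."
        )

if __name__ == "__main__":
    ctx = AuthoringContext(task="replace the saw blade")
    ctx.anchors.append(SpatialAnchor("table saw", (1.2, 0.0, 3.4)))
    ctx.anchors.append(SpatialAnchor("tool cabinet", (-0.5, 0.0, 2.0)))
    print(ctx.to_prompt())
```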
- TVCG: Visualizing Causality in Mixed Reality for Manual Task Learning: A Study. Rahul Jain, Jingyu Shi, Andrew Benton, Moiz Rasheed, Hyungjun Doh, Subramanian Chidambaram, and Karthik Ramani. IEEE Transactions on Visualization and Computer Graphics, 2025.
Mixed Reality (MR) is gaining prominence in manual task skill learning due to its in-situ, embodied, and immersive experience. To teach manual tasks, current methodologies break each task into a hierarchy of subtasks and visualize not only the current subtask but also the causally related future ones. We investigate the impact of visualizing causality within an MR framework on manual task skill learning. In a user study with 48 participants, we examined how presenting tasks at different hierarchical causality levels (no causality, event-level, interaction-level, and gesture-level) affects user comprehension and performance in a complex assembly task. We find that displaying all causality levels enhances user understanding and task execution, at the cost of longer learning time. Based on these results, we provide design recommendations and in-depth discussion for future manual task learning systems.
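As a rough illustration of the four study conditions, the causality hierarchy can be modeled as an enum plus causal links between subtasks. All names and the rendering logic below are hypothetical sketches under that assumption, not the paper's implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class CausalityLevel(Enum):
    NONE = 0          # show only the current subtask
    EVENT = 1         # also show which future subtasks this one enables
    INTERACTION = 2   # additionally annotate the object interactions involved
    GESTURE = 3       # additionally annotate the hand gestures required

@dataclass
class Subtask:
    name: str
    enables: list["Subtask"] = field(default_factory=list)  # causal links

def visible_steps(current: Subtask, level: CausalityLevel) -> list[str]:
    """Return the labels an MR renderer would display for one condition."""
    if level is CausalityLevel.NONE:
        return [current.name]
    # At every causal level the directly enabled future subtasks are shown;
    # richer levels would attach interaction/gesture annotations to each.
    return [current.name] + [s.name for s in current.enables]

if __name__ == "__main__":
    fasten = Subtask("fasten the bracket")
    insert = Subtask("insert the bolt", enables=[fasten])
    print(visible_steps(insert, CausalityLevel.NONE))   # current step only
    print(visible_steps(insert, CausalityLevel.EVENT))  # plus enabled steps
```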
2023
- Under Review: An HCI-centric Survey and Taxonomy of Human-Generative-AI Interactions. Jingyu Shi, Rahul Jain, Hyungjun Doh, Ryo Suzuki, and Karthik Ramani. arXiv preprint arXiv:2310.07127, 2023.
Generative AI (GenAI) has shown remarkable capabilities in generating diverse and realistic content across formats such as images, videos, and text. Because human involvement is essential to GenAI, the HCI literature has investigated how to create effective collaborations between humans and GenAI systems. However, the current literature lacks a comprehensive framework for understanding Human-GenAI interactions, as the holistic aspects of human-centered GenAI systems are rarely analyzed systematically. In this paper, we present a survey of 291 papers, providing a novel taxonomy and analysis of Human-GenAI interactions from both the human and the GenAI perspectives. The dimensions of the design space are: 1) Purposes of Using Generative AI, 2) Feedback from Models to Users, 3) Control from Users to Models, 4) Levels of Engagement, 5) Application Domains, and 6) Evaluation Strategies. Our work is timely at the current stage of GenAI development, where Human-GenAI interaction design is of paramount importance. We also highlight challenges and opportunities to guide the design of GenAI systems and interactions toward future human-centered GenAI applications.
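The six design-space dimensions lend themselves to a simple coding record for tagging each surveyed paper. A minimal sketch follows; the field names mirror the abstract, while the example values are hypothetical placeholders rather than the survey's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanGenAICoding:
    """One surveyed paper coded along the taxonomy's six dimensions.

    Dimension names come from the abstract; the value vocabulary is an
    assumption made for illustration, not the survey's coding scheme.
    """
    purpose: str      # 1) purpose of using generative AI
    feedback: str     # 2) feedback from models to users
    control: str      # 3) control from users to models
    engagement: str   # 4) level of engagement
    domain: str       # 5) application domain
    evaluation: str   # 6) evaluation strategy

if __name__ == "__main__":
    example = HumanGenAICoding(
        purpose="content creation",
        feedback="inline preview",
        control="prompt editing",
        engagement="turn-taking",
        domain="AR authoring",
        evaluation="user study",
    )
    print(example)
```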