Showing 1–10 of 10 results for author: Seon, J

  1. arXiv:2604.05525  [pdf, ps, other]

    cs.GR

    CrowdVLA: Embodied Vision-Language-Action Agents for Context-Aware Crowd Simulation

    Authors: Juyeong Hwang, Seong-Eun Hong, Jinhyun Kim, JaeYoung Seon, Giljoo Nam, Hanyoung Jang, HyeongYeop Kang

    Abstract: Crowds do not merely move; they decide. Human navigation is inherently contextual: people interpret the meaning of space, social norms, and potential consequences before acting. Sidewalks invite walking, crosswalks invite crossing, and deviations are weighed against urgency and safety. Yet most crowd simulation methods reduce navigation to geometry and collision avoidance, producing motion that is…

    Submitted 7 April, 2026; originally announced April 2026.

  2. arXiv:2602.17738  [pdf, ps, other]

    cs.MA cs.IT

    Reasoning-Native Agentic Communication for 6G

    Authors: Hyowoon Seo, Joonho Seon, Jin Young Kim, Mehdi Bennis, Wan Choi, Dong In Kim

    Abstract: Future 6G networks will interconnect not only devices, but autonomous machines that continuously sense, reason, and act. In such environments, communication can no longer be understood solely as delivering bits or even preserving semantic meaning. Even when two agents interpret the same information correctly, they may still behave inconsistently if their internal reasoning processes evolve differe…

    Submitted 18 February, 2026; originally announced February 2026.

    Comments: 8 pages, 4 figures

  3. arXiv:2602.04292  [pdf, ps, other]

    cs.GR

    Event-T2M: Event-level Conditioning for Complex Text-to-Motion Synthesis

    Authors: Seong-Eun Hong, JaeYoung Seon, JuYeong Hwang, JongHwan Shin, HyeongYeop Kang

    Abstract: Text-to-motion generation has advanced with diffusion models, yet existing systems often collapse complex multi-action prompts into a single embedding, leading to omissions, reordering, or unnatural transitions. In this work, we shift perspective by introducing a principled definition of an event as the smallest semantically self-contained action or state change in a text prompt that can be tempor…

    Submitted 4 February, 2026; originally announced February 2026.

    Comments: 28 pages, 7 figures. Accepted to ICLR 2026

  4. How Does a Virtual Agent Decide Where to Look? Symbolic Cognitive Reasoning for Embodied Head Rotation

    Authors: Juyeong Hwang, Seong-Eun Hong, JaeYoung Seon, Hyeongyeop Kang

    Abstract: Natural head rotation is critical for believable embodied virtual agents, yet this micro-level behavior remains largely underexplored. While head-rotation prediction algorithms could, in principle, reproduce this behavior, they typically focus on visually salient stimuli and overlook the cognitive motives that guide head rotation. This yields agents that look at conspicuous objects while overlooki…

    Submitted 6 January, 2026; v1 submitted 12 August, 2025; originally announced August 2025.

    Comments: 13 pages, 8 figures. Accepted to SIGGRAPH Asia Conference Papers '25

    Journal ref: SIGGRAPH Asia Conference Papers '25, December 15-18, 2025, Hong Kong

  5. arXiv:2407.14059  [pdf, other]

    cs.CV

    Regularizing Dynamic Radiance Fields with Kinematic Fields

    Authors: Woobin Im, Geonho Cha, Sebin Lee, Jumin Lee, Juhyeong Seon, Dongyoon Wee, Sung-Eui Yoon

    Abstract: This paper presents a novel approach for reconstructing dynamic radiance fields from monocular videos. We integrate kinematics with dynamic radiance fields, bridging the gap between the sparse nature of monocular videos and the real-world physics. Our method introduces the kinematic field, capturing motion through kinematic quantities: velocity, acceleration, and jerk. The kinematic field is joint…

    Submitted 19 July, 2024; originally announced July 2024.

    Comments: ECCV 2024
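The kinematic quantities named in entry 5's abstract (velocity, acceleration, jerk) are successive time derivatives of position. A minimal NumPy sketch of estimating them from a sampled trajectory via finite differences — an illustration only, not the paper's implementation; the trajectory and step size below are made up:

```python
import numpy as np

def kinematic_quantities(positions, dt):
    """Estimate velocity, acceleration, and jerk from sampled 3D positions
    (shape: [T, 3]) using central finite differences along the time axis."""
    velocity = np.gradient(positions, dt, axis=0)        # first derivative
    acceleration = np.gradient(velocity, dt, axis=0)     # second derivative
    jerk = np.gradient(acceleration, dt, axis=0)         # third derivative
    return velocity, acceleration, jerk

# Illustrative trajectory: a point moving at constant velocity (1, 0, 0)
t = np.arange(0.0, 1.0, 0.1)
pos = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
v, a, j = kinematic_quantities(pos, 0.1)
# For uniform motion, velocity is constant and acceleration/jerk vanish.
```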

  6. arXiv:2406.06163  [pdf, other]

    cs.CV

    Extending Segment Anything Model into Auditory and Temporal Dimensions for Audio-Visual Segmentation

    Authors: Juhyeong Seon, Woobin Im, Sebin Lee, Jumin Lee, Sung-Eui Yoon

    Abstract: Audio-visual segmentation (AVS) aims to segment sound sources in the video sequence, requiring a pixel-level understanding of audio-visual correspondence. As the Segment Anything Model (SAM) has strongly impacted extensive fields of dense prediction problems, prior works have investigated the introduction of SAM into AVS with audio as a new modality of the prompt. Nevertheless, constrained by SAM'…

    Submitted 10 June, 2024; originally announced June 2024.

    Comments: Accepted to ICIP 2024

  7. arXiv:2403.07773  [pdf, other]

    cs.CV

    SemCity: Semantic Scene Generation with Triplane Diffusion

    Authors: Jumin Lee, Sebin Lee, Changho Jo, Woobin Im, Juhyeong Seon, Sung-Eui Yoon

    Abstract: We present "SemCity," a 3D diffusion model for semantic scene generation in real-world outdoor environments. Most 3D diffusion models focus on generating a single object, synthetic indoor scenes, or synthetic outdoor scenes, while the generation of real-world outdoor scenes is rarely addressed. In this paper, we concentrate on generating a real-outdoor scene through learning a diffusion model on a…

    Submitted 17 March, 2024; v1 submitted 12 March, 2024; originally announced March 2024.

    Comments: Accepted to CVPR 2024

  8. Predicting challenge moments from students' discourse: A comparison of GPT-4 to two traditional natural language processing approaches

    Authors: Wannapon Suraworachet, Jennifer Seon, Mutlu Cukurova

    Abstract: Effective collaboration requires groups to strategically regulate themselves to overcome challenges. Research has shown that groups may fail to regulate due to differences in members' perceptions of challenges which may benefit from external support. In this study, we investigated the potential of leveraging three distinct natural language processing models: an expert knowledge rule-based model, a…

    Submitted 3 January, 2024; originally announced January 2024.

    Comments: 13 pages, 1 figure

  9. arXiv:2310.06486  [pdf, other]

    cs.AI cs.CV cs.IR

    Topological RANSAC for instance verification and retrieval without fine-tuning

    Authors: Guoyuan An, Juhyung Seon, Inkyu An, Yuchi Huo, Sung-Eui Yoon

    Abstract: This paper presents an innovative approach to enhancing explainable image retrieval, particularly in situations where a fine-tuning set is unavailable. The widely-used SPatial verification (SP) method, despite its efficacy, relies on a spatial model and the hypothesis-testing strategy for instance recognition, leading to inherent limitations, including the assumption of planar structures and negle…

    Submitted 10 October, 2023; originally announced October 2023.

  10. A Numerical Method to Analyze Geometric Factors of a Space Particle Detector Relative to Omnidirectional Proton and Electron Fluxes

    Authors: Sungmin Pak, Yuchul Shin, Ju Woo, Jongho Seon

    Abstract: A numerical method is proposed to calculate the response of detectors measuring particle energies from incident isotropic fluxes of electrons and positive ions. The isotropic flux is generated by injecting particles moving radially inward on a hypothetical, spherical surface encompassing the detectors. A geometric projection of the field-of-view from the detectors onto the spherical surface allows…

    Submitted 1 September, 2018; originally announced September 2018.

    Comments: 7 pages, 8 figures and 1 table

    Journal ref: JKAS 51 (2018) 111-117
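Entry 10's abstract describes injecting particles from a bounding sphere to mimic an isotropic flux. A common Monte Carlo recipe for this samples launch points uniformly on the sphere and inward directions by the cosine law about the inward normal; the sketch below illustrates that standard technique, not necessarily the paper's exact injection scheme, and all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_isotropic_injections(n, radius):
    """Sample n launch points on a sphere of the given radius and inward
    directions drawn from the cosine law about each inward normal — the
    standard recipe for an isotropic flux inside the sphere."""
    # Uniform points on the sphere surface (normalized Gaussian samples)
    u = rng.normal(size=(n, 3))
    points = radius * u / np.linalg.norm(u, axis=1, keepdims=True)
    inward = -points / radius  # unit inward normals

    # Cosine-law polar angle about the inward normal: cos(theta) = sqrt(xi)
    cos_t = np.sqrt(rng.uniform(size=n))
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)

    # Build an orthonormal tangent frame (e1, e2) at each launch point
    ref = np.where(np.abs(inward[:, :1]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    e1 = np.cross(inward, ref)
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 = np.cross(inward, e1)

    directions = (sin_t * np.cos(phi))[:, None] * e1 \
               + (sin_t * np.sin(phi))[:, None] * e2 \
               + cos_t[:, None] * inward
    return points, directions

pts, dirs = sample_isotropic_injections(1000, 2.0)
```

Each sampled direction is a unit vector with a non-negative component along the inward normal, so every particle travels into the sphere.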