Welcome! We invite you to join us.
Workshop Summary
Date and Location: TBD, co-located with IEEE VL/HCC (October 6-10, 2025), Raleigh, North Carolina, United States
User interfaces for human-LLM interaction in information-intensive tasks such as sensemaking have been limited predominantly to the chatbot style of interaction. The linear nature of chat poses significant limitations on sensemaking activity, which often requires nonlinear thinking, incremental formalization, and the use of diverse information sources.
Human cognition in sensemaking tasks exploits a psychological phenomenon called embodied cognition, which externalizes thought processes onto interactive visual workspaces and canvases that enable complex iterative structuring of ideas. These nonlinear structures often represent various forms of visual languages for sensemaking: space to think.
While LLMs also exploit a notion of external memory in the form of prompt context or chain-of-thought reasoning, these are typically limited to sequential text inputs. Can the analogous concepts of human embodied cognition and LLM context be brought together? Can the visual languages of embodied cognition serve as a common ground between human cognition and LLM processing? Can LLMs have a form of embodied AI cognition that exploits visual languages? As AI agents become increasingly autonomous, how can visual languages support incremental progression and reduce uncertainty across sensemaking scenarios, from simple coordination to active collaboration to full teaming, while enabling trust between humans and AI?
This workshop seeks contributions that push these frontiers and bring together a multi-disciplinary group of researchers in HCI, AI, cognitive science, visual languages, and computer vision. The workshop will foster a community that will discuss and prepare a forward-looking research agenda for embodied human-LLM interaction with visual languages.
Topics of Interest
Topics of interest include but are not limited to:
- Design or evaluation of visual languages for AI
- User interface or visualization design for human-AI interaction
- Theories for human-AI common ground in sensemaking
- Methods for human steering or explainability of AI sensemaking
- NLP methods for augmenting LLMs with embodied cognition capabilities
- Models for LLM processing of visual languages
- Evaluations of human-AI teaming for sensemaking
Call For Participation
Two types of contributions are sought, each with separate deadlines and review criteria.
Research Papers: Manuscripts containing results suitable for publication, to be presented at the workshop. Research paper submissions will be peer-reviewed by at least one organizer and one knowledgeable external reviewer recruited by the organizers to ensure relevance, quality, and soundness of results. A length of 4-8 pages is required. The VL/HCC conference plans to publish accepted workshop manuscripts in an accompanying volume published by IEEE.
Submission deadline: Friday, July 11
Acceptance notification: Friday, August 1
Camera-ready deadline following minor revisions: Friday, August 8
Submissions should be uploaded to EasyChair: TBA
Position Statements: Statements relating to a topic of interest that can be discussed at the workshop. Position statements will be reviewed by the organizers to ensure relevance to the workshop. A length of 1-4 pages is required.
Submission deadline: Friday, August 22
Acceptance notification: Friday, August 29
Submission method: TBA
Workshop Schedule
Session: 9:00am–12:30pm EDT
- (9:00–9:10) Welcome and Charge
- (9:10–10:10) Paper presentations or demos
- (10:10–10:40) Lightning talks/position statements
- (10:40–11:00) Break
- (11:00–11:40) Group brainstorming session
- (11:40–12:20) Collaborative affinity diagramming
- (12:20–12:30) Closing and action items
Organizers
Chris North, Virginia Tech (Contact: human-ai-sensemaking@googlegroups.com)
Joel Chan, University of Maryland
Rebecca Faust, Tulane University
Xuxin Tang, Virginia Tech
Xuan Wang, Virginia Tech
John Wenskovitch, Pacific Northwest National Laboratory
Siyi Zhu, University of Maryland