Overview

Personalization is key to understanding user behavior and has long been a central focus of knowledge discovery and information retrieval. Building personalized recommender systems is especially important now, given the vast amount of user-generated textual content, which offers deep insight into user preferences. Recent advances in Large Language Models (LLMs) have significantly impacted research areas such as Natural Language Processing and Knowledge Discovery, giving these models the ability to handle complex tasks and learn from context.

However, the use of generative models and user-generated text for personalized systems and recommendation is relatively new, and early results are promising. This workshop is designed to bridge the research gap between these fields and to explore personalized applications and recommender systems. We aim to fully leverage generative models to develop AI systems that are not only accurate but also tailored to individual user needs. Building on the momentum of previous successful forums, this workshop seeks to engage a diverse audience from academia and industry, fostering a dialogue that incorporates fresh insights from key stakeholders in the field.

Call for papers

We welcome papers that leverage generative models for recommendation and personalization, on topics including but not limited to those listed in the CFP. Papers can be submitted via OpenReview. Submissions may be 2-8 pages (excluding references and supplementary materials).

Information for the day of the workshop

Workshop at WSDM 2026

  • Submission deadline: November 28, 2025 (extended from November 21, 2025)
  • Author notifications: December 18, 2025
  • Meeting: February 26, 2026

Schedule

Time (MST) Agenda
8:55 - 9:00am Opening remarks
9:00 - 9:45am Keynote by Dr. Meng Jiang (45 min)
9:45 - 10:30am Keynote by Dr. Yinglong Xia (45 min)
10:30 - 11:00am Coffee Break (30 min)
11:00 - 11:45am Oral Session 1 (45 min) - Paper Authors
- AGP: Auto-Guided Prompt Refinement for Personalized Reranking in Recommender Systems
- InsertRank: LLMs can Reason over BM25 Scores to Improve Listwise Reranking
- Selective LLM-Guided Regularization for Enhancing Recommendation Models
11:45am - 12:30pm Keynote by Dr. Chirag Shah (45 min)
12:30 - 1:45pm Lunch (75 min)
1:45 - 2:30pm Keynote by Dr. Nathan Kallus (45 min)
2:30 - 3:30pm Oral Session 2 (60 min) - Paper Authors
- Joint Evaluation: A Human+LLM+Multi-Agents Collaborative Framework for Comprehensive AI Safety
- Multi-Agent Video Recommenders: Evolution, Patterns and Open Challenges
- Large-Scale Retrieval for the LinkedIn Feed using Causal Language Models
- Agentic Orchestration for Adaptive Educational Recommendations: A Multi-Agent LLM Framework for Personalized Learning Pathways
3:30 - 4:00pm Coffee Break (30 min)
4:00 - 4:20pm Invited Talk by Liam Collins (20 min)
Sequential Data Augmentation for Generative Recommendation
4:20 - 4:40pm Invited Talk by Chengyi Liu (20 min)
Continuous Time Discrete-space Diffusion Model for Recommendation
4:40 - 4:50pm Closing remarks

Keynote Speakers

Chirag Shah

University of Washington
Beyond the Personalization-Privacy Pareto: Can AI Agents Break the Tradeoff?

Abstract
The tension between personalization and privacy has long been treated as an immutable tradeoff: more of one necessarily means less of the other. But as we move from reactive AI systems to proactive AI agents that act on our behalf, this calculus may be shifting. This keynote examines whether agentic AI architectures, with their capacity for local reasoning, on-device memory, and user-controlled delegation, offer a path beyond the Pareto frontier. I explore emerging paradigms where agents can deliver deeply personalized experiences while minimizing data exposure, and discuss the technical, ethical, and governance challenges that remain. The talk concludes with a research agenda for building AI agents that treat privacy not as a constraint on personalization, but as a design principle that enables it.
Bio
Chirag Shah is a Professor of Information and Computer Science at the University of Washington (UW) in Seattle. He is the Founding Director of the InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems & Experiences (RAISE). His research focuses on building, auditing, and correcting agentic information access systems. Shah is a Distinguished Member of the ACM as well as ASIS&T, and a Senior Member of the IEEE. He has published nearly 200 peer-reviewed articles and authored ten books, including textbooks on data science and machine learning. His new book, ‘Agent Nation: How Autonomous AI Is Rewriting the Rules of Society—and What We Can Do About It’, is coming out this spring.

Yinglong Xia

Meta
The Graph Trilogy: Unlocking Personalization at Scale

Abstract
This talk presents the graph trilogy techniques for overcoming the persistent personalization gap in large-scale recommendation systems, arguing that large AI models alone cannot solve the challenge of effectively modeling billions of unique user preferences. The core solution involves establishing a Context Graph to capture richer, multi-hop, structural relationships, thereby transforming traditional engagement data into actionable intelligence for discovery and diversity. To effectively leverage this structural information, the talk presents co-modeling the user-centric heterogeneous graph with a ranking model. Furthermore, to scale personalization by injecting this complex user knowledge into LLMs, the talk introduces a structural mixture of residual experts, a structural MoE that exponentially boosts model capacity and expressive power while maintaining the parameter efficiency required for industrial-scale deployment.
Bio
Yinglong Xia is an Applied Research Scientist at Meta Recommendation System (MRS), Meta Platforms, where he focuses on advancing research and development in large-scale recommendation systems and knowledge-driven AI, driving innovation and delivering impactful solutions that enhance Meta’s products and user experiences. Prior to that, he was a chief architect for Enterprise AI at Huawei US and a Research Staff Member at IBM Watson Research Center. He has published 100+ technical papers and filed 40+ patents, and has served as an Associate Editor for IEEE TBD, an ADS area chair at KDD 2025, and an industry co-chair of CIKM 2024 and WSDM 2026.

Meng Jiang

University of Notre Dame
What Does It Mean to Personalize a Language Model? Insights from RecSys to LLMs

Abstract
Personalization has been a foundational concept in recommender systems (RecSys) since the emergence of collaborative filtering, where models adapt predictions to individual users based on historical interactions. This paradigm has rapidly extended to large language models (LLMs), giving rise to personalized LLMs that tailor textual responses, reasoning styles, and interaction behaviors to individual users. Beyond improved user experience, personalization in LLMs opens new opportunities related to data privacy, user control, and model ownership. This shift raises fundamental questions about how personalization should be defined, implemented, and evaluated in the context of general-purpose language models. In this keynote, I trace the evolution of personalization from classical RecSys to personalized LLMs, highlighting both conceptual continuities and critical departures between these paradigms. I then introduce emerging technical approaches (particularly parameter-efficient adaptation methods) that enable LLMs to incorporate user-specific signals at scale. Finally, I present a recent study that links the two technologies to answer whether personalized LLMs are already strong recommender systems.
Bio
Meng Jiang is a Frank M. Freimann Collegiate Associate Professor of Computer Science and Engineering at the University of Notre Dame. He serves as the Director of Foundation Models at the Lucy Family Institute for Data and Society as well as the Program Chair of the ND-IBM Tech Ethics Lab. His research interests are data mining, machine learning, and natural language processing for AI applications in science, engineering, social media, education, and healthcare. His recent projects focus on knowledge augmentation, instruction tuning, multi-objective alignment, reasoning and verification, personalization, and machine unlearning. He has delivered 15 conference tutorials and organized ten workshops on these topics. He has received five best paper or outstanding paper awards and the NSF CAREER award.

Nathan Kallus

Cornell Tech, Netflix
LLM Post-Training and Reasoning via Efficient Value-Based RL

Abstract
Reinforcement learning (RL) has a newfound killer application in post-training LLMs, pre-trained to predict the next token, to adapt them to tasks like instruction following, math-problem solving, and generating content or recommendations that maximize user outcomes. But are the same RL algorithms that animated robots and conquered Atari the right ones to post-train LLMs? In this talk I will present new value-based algorithms for post-training and for scaling test-time compute that leverage both the unique structure of autoregressive LLMs and recent advances on increasing efficiency by changing the Q-learning loss function. I will show how (and argue why) these new algorithms achieve state-of-the-art performance on frontier math reasoning tasks with smaller models and at a fraction of test-time FLOPs.
Bio
Nathan Kallus is an Associate Professor at the Cornell Tech campus of Cornell University in NYC and Director of Machine Learning and Inference Research at Netflix. Nathan’s research interests include causal machine learning, sequential and dynamic decision making, optimization under uncertainty, and algorithmic fairness.

Accepted Papers

  • InsertRank: LLMs can Reason over BM25 Scores to Improve Listwise Reranking
    Rahul Seetharaman
    Abstract
    Large Language Models (LLMs) have demonstrated significant strides across various information retrieval tasks, particularly as rerankers, owing to their strong generalization and knowledge-transfer capabilities acquired from extensive pretraining. In parallel, the rise of LLM-based chat interfaces has raised user expectations, encouraging users to pose more complex queries that necessitate retrieval by reasoning over documents rather than through simple keyword matching or semantic similarity. While some recent efforts have exploited the reasoning abilities of LLMs for reranking such queries, considerable potential for improvement remains. To that end, we introduce InsertRank, an LLM-based reranker that leverages lexical signals like BM25 scores during reranking to further improve retrieval performance. InsertRank demonstrates improved retrieval effectiveness on BRIGHT, a reasoning benchmark spanning 12 diverse domains, and R2MED, a specialized medical reasoning retrieval benchmark spanning 8 different tasks. We conduct an exhaustive evaluation and several ablation studies, and demonstrate that InsertRank consistently improves retrieval effectiveness across multiple families of LLMs, including GPT, Gemini, and Deepseek models.
    PDF Code
  • Selective LLM-Guided Regularization for Enhancing Recommendation Models
    Zhan Shi, Shanglin Yang
    Abstract
    Large language models (LLMs) provide rich semantic priors and strong reasoning capabilities, making them promising auxiliary signals for recommendation. However, prevailing approaches either deploy LLMs as standalone recommenders or apply global knowledge distillation, both of which suffer from inherent drawbacks. Standalone LLM recommenders are costly, biased, and unreliable across large regions of the user-item space, while global distillation forces the downstream model to imitate LLM predictions even when such guidance is inaccurate. Meanwhile, recent studies show that LLMs excel particularly in re-ranking and challenging scenarios rather than uniformly across all contexts. We introduce Selective LLM-Guided Regularization (S-LLMR), a model-agnostic and computation-efficient framework that activates LLM-based pairwise ranking supervision only when a trainable gating mechanism, informed by user history length, item popularity, and model uncertainty, predicts the LLM to be reliable. All LLM scoring is done offline, transferring knowledge without increasing inference cost. Experiments across multiple datasets show that this selective strategy consistently improves overall accuracy and yields substantial gains in cold-start and long-tail regimes, outperforming global distillation baselines.
    PDF Code
  • AGP: Auto-Guided Prompt Refinement for Personalized Reranking in Recommender Systems
    Chen Wang, Mingdai Yang, Zhiwei Liu, Pan Li, Linsey Pang, Qingsong Wen, Philip S Yu
    Abstract
    Reranking plays a critical role in recommendation systems by refining initial predictions to better reflect user preferences. While large language models (LLMs) have shown promise in enhancing reranking through contextual reasoning, they still rely heavily on manually crafted prompts, an approach that is both labor-intensive and difficult to scale. Although prompt optimization has been studied in domains like question answering and news recommendation, its adaptation to general item recommendation remains limited due to the unstructured and inconsistent nature of item metadata. To address these challenges, we propose Auto-Guided Prompt Refinement (AGP), a novel framework that automatically refines user-profile generation prompts instead of reranking prompts directly. AGP leverages position-based feedback, which encodes item-level ranking misalignments, and introduces batched training with aggregated feedback to ensure robust and generalizable prompt updates. Experimental results on Amazon Movies and TV, Yelp, and Goodreads demonstrate AGP's effectiveness. With only 100 training users, AGP improves NDCG@10 by 5.61%, 2.46%, and 6.18% when reranking SASRec, and by 9.36%, 7.98%, and 20.68% when reranking LightGCN.
    PDF Code
  • Joint Evaluation Framework for Comprehensive AI Safety Assessment
    Himanshu Joshi, Shivani Shukla, Priyanka Kumar
    Abstract
    Evaluating the safety and alignment of AI systems remains a critical challenge as foundation models grow increasingly sophisticated. Traditional evaluation methods rely heavily on human expert review, creating bottlenecks that cannot scale with rapid AI development. We introduce Jo-E (Joint Evaluation), a multi-agent collaborative framework that systematically coordinates large language model evaluators, specialized adversarial agents, and strategic human expert involvement for comprehensive safety assessments. Our framework employs a five-phase evaluation pipeline with explicit mechanisms for conflict resolution, severity scoring, and adaptive escalation. Through extensive experiments on GPT-4o, Claude 3.5 Sonnet, Llama 3.1 70B, and Phi-3-medium, we demonstrate that Jo-E achieves 94.2% detection accuracy, compared to 78.3% for single LLM-as-Judge approaches and 86.1% for Agent-as-Judge baselines, while reducing human expert time by 54% compared to pure human evaluation.
    PDF Code
  • Multi-Agent Video Recommenders: Evolution, Patterns and Open Challenges
    Srivaths Ranganathan, Abhishek Dharmaratnakar, Anushree Sinha, Debanshu Das
    Abstract
    Video recommender systems are among the most popular and impactful applications of AI, shaping content consumption and influencing culture for billions of users. Traditional single-model recommenders, which optimize static engagement metrics, are increasingly limited in addressing the dynamic requirements of modern platforms. In response, multi-agent architectures are redefining how video recommender systems serve, learn, and adapt to both users and datasets. These agent-based systems coordinate specialized agents responsible for video understanding, reasoning, memory, and feedback to provide precise, explainable recommendations. In this survey, we trace the evolution of multi-agent video recommendation systems (MAVRS). We combine ideas from multi-agent recommender systems, foundation models, and conversational AI, culminating in the emerging field of large language model (LLM)-powered MAVRS. We present a taxonomy of collaborative patterns and analyze coordination mechanisms across diverse video domains, ranging from short-form clips to educational platforms.
    PDF Code
  • Large-Scale Retrieval for the LinkedIn Feed using Causal Language Models
    Sudarshan Srinivasa Ramanujam, Antonio Alonso, Saurabh Kataria, Siddharth Dangi, Akhilesh Gupta, Birjodh Singh Tiwana, Manas Somaiya, Luke Simon, David Byrne, Sojeong Ha, Sen Zhou, Andrei Akterskii, Zhanglong Liu, Samira Sriram, Zihan Xiong, Zhoutao Pei, Angela Shao, Alexander Li, Annie Xiao, Caitlin Kolb, Thomas Kistler, Zach Moore, Hamed Firooz
    Abstract
    Abstract:
    PDF Code
  • Agentic Orchestration for Adaptive Educational Recommendations: A Multi-Agent LLM Framework for Personalized Learning Pathways
    Naina Chaturvedi, Ananda Gunawardena
    Abstract
    Abstract: Educational personalization represents a unique challenge for recommender systems: learners require not just content recommendations, but dynamic curriculum adaptation, real-time feedback, and proactive intervention strategies that evolve over extended timescales. We present a novel multi-agent architecture that treats educational personalization as an emergent property of specialized agent collaboration rather than a monolithic recommendation model. Our framework deploys 18+ coordinated agents organized in a four-tier hierarchy spanning perception, domain expertise, coordination, and strategic planning. Through deployment on a learning platform serving 6,000+ active users, we demonstrate that hierarchical agent orchestration enables recommendation capabilities unachievable by single-model approaches: parallel domain-specific analysis, temporal stratification from millisecond feedback to multi-month roadmap generation, and graceful degradation under partial failures. We present the architectural principles, coordination protocols, and preliminary evidence that agentic systems offer a promising paradigm for next-generation personalized learning systems. Our work contributes both a concrete implementation blueprint and theoretical foundations for applying multi-agent LLM orchestration to complex recommendation domains beyond education.
    PDF Code

Organizers

Narges Tabari
AWS AI Labs

Bio
Narges Tabari is an Applied Scientist at AWS AI Labs. She received her PhD in Computer Science in 2018 from the University of North Carolina. She mainly works on applications of NLP, including sentiment analysis, emotion detection, summarization, and text generation, as well as the intersection of NLP with recommender systems and personalization. Before joining Amazon, she was a Research Scientist at the University of Virginia and an NLP Engineer at Genentech. She has served as a Session Chair for the NAACL 2022 Industry Track, and has extensive experience reviewing for conferences such as NAACL, AAAI, and ACL.
Aniket Deshmukh
Databricks

Bio
Aniket is an AI researcher at Databricks. Previously, he was an Applied Scientist at AWS AI Labs, focusing on recommendation systems and large language models, and before that a Senior Applied Scientist at Microsoft AI and Research, where he contributed to Microsoft Advertising by working on multimedia ads, smart campaigns, and auto-bidding projects. Aniket earned his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, focusing on domain generalization and reinforcement learning. He is an active contributor to the academic community, regularly reviewing for venues such as NeurIPS, ICML, CVPR, AISTATS, and JMLR, and has been recognized as a top reviewer at NeurIPS in 2021 and 2023, as well as AISTATS in 2022. Aniket has experience organizing workshops at conferences like ICLR and WWW.
Wang-Cheng Kang
Google DeepMind

Bio
Dr. Wang-Cheng Kang is a Staff Research Engineer at Google DeepMind, working on LLM/GenAI for RecSys and LLM data efficiency. He holds a PhD in Computer Science from UC San Diego, and interned at Adobe Research, Pinterest Labs, and Google Brain, focusing on recommender systems. He received the RecSys ’17 Best Paper Runner-up award, and proposed SASRec, the first Transformer-based recommendation method.
Neil Shah
Snap Research

Bio
Dr. Neil Shah is a Principal Scientist at Snapchat. His research focuses on large-scale user representation learning, recommender systems and efficient ML. His work has resulted in 70+ refereed publications at top data mining and machine learning venues. He has also served as an organizer across multiple venues including KDD, WSDM, SDM, ICWSM, ASONAM and more, and received multiple best paper awards (KDD, CHI), departmental rising star awards (NCSU), and outstanding service and reviewer awards (NeurIPS, WSDM).
Julian McAuley
University of California, San Diego

Bio
Julian McAuley has been a professor in the Computer Science Department at the University of California, San Diego since 2014. Previously he was a postdoctoral scholar at Stanford University after receiving his PhD from the Australian National University in 2011. His research is concerned with developing predictive models of human behavior using large volumes of online activity data. He has organized a large number of workshops, including workshops on recommendation, e-commerce, and natural language processing.
James Caverlee
Texas A&M University

Bio
James Caverlee received his Ph.D. in Computer Science from Georgia Tech in 2007, co-advised by Ling Liu (CS) and Bill Rouse (ISYE). Before that, he earned two M.S. degrees from Stanford University: one in Computer Science in 2001 and one in Engineering-Economic Systems & Operations Research in 2000. His undergraduate degree is a B.A. in Economics (magna cum laude) from Duke University in 1996. James Caverlee joined the faculty at Texas A&M in 2007. He spent most of his sabbatical in 2015 at Google as a Visiting Scientist in Ed Chi’s group. He has been honored to receive an NSF CAREER award, a DARPA Young Faculty award, an AFOSR Young Investigator award, as well as several teaching awards.
George Karypis
University of Minnesota

Bio
Dr. George Karypis is a Senior Principal Scientist at AWS AI and a Distinguished McKnight University Professor and William Norris Chair in Large Scale Computing at the Department of Computer Science & Engineering at the University of Minnesota. His research interests span the areas of data mining, machine learning, high performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. He has coauthored over 350 papers and two books.

Program Committee

  • TBD Member 1 (TBD Organization)
  • TBD Member 2 (TBD Organization)
  • TBD Member 3 (TBD Organization)