Overview

Personalization is key to understanding user behavior and has long been a central focus of knowledge discovery and information retrieval. Building personalized recommender systems is especially important now given the vast amount of user-generated textual content, which offers deep insight into user preferences. Recent advances in Large Language Models (LLMs) have significantly impacted research areas such as Natural Language Processing and Knowledge Discovery, equipping these models to handle complex tasks and learn from context.

However, the use of generative models and user-generated text for personalized systems and recommendation is relatively new, and early results are promising. This workshop is designed to bridge the research gap between these fields and to explore personalized applications and recommender systems. We aim to fully leverage generative models to develop AI systems that are not only accurate but also attuned to individual user needs. Building on the momentum of previous successful forums, this workshop seeks to engage a diverse audience from academia and industry, fostering a dialogue that incorporates fresh insights from key stakeholders in the field.

Call for papers

We welcome papers that leverage generative models for recommendation and personalization, on topics including but not limited to those listed in the CFP. Papers can be submitted at OpenReview.

Information for the day of the workshop

Workshop at KDD 2025

  • Submission deadline: May 18th, 2025 (extended from May 8th, 2025)
  • Author notifications: June 8th, 2025
  • Meeting: August 4th, 2025

Schedule

Time (EDT) Agenda
1:00-1:10pm Opening remarks
1:10-1:50pm Keynote by Dr. Ed Chi (40 min)
1:50-2:30pm Keynote by Dr. Luna Dong (40 min)
2:30-3:00pm Poster Session (30 min)
3:00-3:30pm Coffee Break + Poster Spillover (30 min)
3:30-4:10pm Keynote by Dr. Dong Wang (40 min)
4:10-5:00pm Panel Discussion (50 min)
Panelists: Ed Chi, Dong Wang, Jundong Li, Neil Shah

Keynote Speakers

Ed Chi

Google DeepMind
Title TBD

Abstract: To be updated closer to the workshop date.
Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and neural recommendation agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store, with >660 product improvements since 2013. Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also has a black belt in Taekwondo.
Luna Dong

Meta
From Sight to Insight: Visual Memory for Smarter Assistants

Abstract: Imagine a personal assistant that, with user permission, persistently remembers moments from daily life—answering questions like “When and where did I see this lady?” or offering personalized suggestions like “You might enjoy The Little Prince—it relates to the statue you liked in Lyon.” Realizing this vision requires overcoming major challenges: capturing visual memories under hardware constraints (e.g., memory, battery, thermal limits, bandwidth), extracting meaningful personalization signals from noisy, task-agnostic visual histories, and supporting real-time question answering and recommendations under tight latency requirements. In this talk, we present our early work toward this goal. Pensieve, our memory-based QA system, improves accuracy by 11% over state-of-the-art multimodal RAG baselines. VisualLens infers user interests from casual photos, outperforming leading recommendation systems by 5–10%. We also share initial results on efficient, event-triggered memory capture and compression. Our work points to a broad landscape of research opportunities in building richer, more context-aware personal assistants capable of learning from and reasoning over users’ visual experiences.
Bio: Xin Luna Dong is a Principal Scientist at Meta Reality Labs, leading the ML efforts in building an intelligent personal assistant for wearable devices. Before that, she spent more than a decade building knowledge graphs, including the Amazon Product Graph and the Google Knowledge Graph. She has co-authored the books “Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases” and “Big Data Integration”. She was named an ACM Fellow and an IEEE Fellow for “significant contributions to knowledge graph construction and data integration”, was awarded the VLDB Women in Database Research Award and the VLDB Early Career Research Contribution Award, and was invited as an ACM Distinguished Speaker. She serves on the PVLDB advisory committee, was a member of the VLDB Endowment, and was a PC co-chair for the KDD 2022 ADS track, WSDM 2022, VLDB 2021, and SIGMOD 2018.
Dong Wang

University of Illinois Urbana-Champaign
Harnessing Generative AI for Efficient Multimodal Recommender Systems and Privacy-preserving Personalized Image Generation

Abstract: Generative AI is reshaping research and industry across domains, from recommender systems and computer vision to healthcare and education, by enabling powerful new models such as large language models (LLMs), diffusion networks, and multimodal architectures. These advances also introduce critical challenges in efficiency, modality integration, privacy, and system robustness. This talk shares two strands of our recent work that harness generative AI for recommender systems and personalized image generation. The first strand introduces PRIME, a preference-optimized retrieval and ranking framework for efficient multimodal recommendation. PRIME pioneers the use of LLM-based ranking feedback to iteratively refine the retriever via online preference optimization. The second strand examines the Anti-Tamper Perturbation (ATP) scheme for privacy-aware, personalized image generation: a unified approach that embeds protection and authorization perturbations to safeguard individual images against forgery attacks. The talk concludes with several directions for future research in harnessing generative AI for personalized and trustworthy systems.
Bio: Dong Wang is an associate professor in the School of Information Sciences at the University of Illinois Urbana-Champaign (UIUC). His research interests lie in the areas of social sensing, intelligence, and computing; human-centered AI; and big data analytics. He has published over 200 technical papers in peer-reviewed conferences and journals. His research on social sensing, intelligence, and computing has resulted in software tools that found applications in academia, industry, and government research labs. He has also authored three books: “Social Intelligence” (Springer, 2025), “Social Edge Computing” (Springer, 2023), and “Social Sensing” (Elsevier, 2015). He is the recipient of an NSF CAREER Award, a Google Faculty Research Award, an ARO Young Investigator Program (YIP) award, the Best Paper Award of the 2022 ACM/IEEE International Conference on Advances in Social Networks Analysis and Mining (ASONAM), the Best Paper Award of the 16th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), and Best Paper Honorable Mentions at ACM CHI 2025 and IEEE SmartComp 2022. He serves as an associate editor of IEEE Transactions on Big Data, Frontiers in Big Data, and the Social Network Analysis and Mining journal (SNAM). He is an IEEE Senior Member and a member of ACM and AAAI. Dr. Wang’s website: https://www.wangdong.org.

Panelists

Ed Chi
Google DeepMind

Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (LaMDA/Bard), neural recommendations, and reliable machine learning. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store, with >660 product improvements since 2013. Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also has a black belt in Taekwondo.
Dong Wang
University of Illinois Urbana-Champaign

Bio: Dong Wang is an associate professor in the School of Information Sciences at the University of Illinois Urbana-Champaign (UIUC). His research interests lie in the areas of social sensing, intelligence, and computing; human-centered AI; and big data analytics. He has published over 200 technical papers in peer-reviewed conferences and journals. His research on social sensing, intelligence, and computing has resulted in software tools that found applications in academia, industry, and government research labs. He has also authored three books: “Social Intelligence” (Springer, 2025), “Social Edge Computing” (Springer, 2023), and “Social Sensing” (Elsevier, 2015). He is the recipient of an NSF CAREER Award, a Google Faculty Research Award, an ARO Young Investigator Program (YIP) award, the Best Paper Award of the 2022 ACM/IEEE International Conference on Advances in Social Networks Analysis and Mining (ASONAM), the Best Paper Award of the 16th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), and Best Paper Honorable Mentions at ACM CHI 2025 and IEEE SmartComp 2022. He serves as an associate editor of IEEE Transactions on Big Data, Frontiers in Big Data, and the Social Network Analysis and Mining journal (SNAM). He is an IEEE Senior Member and a member of ACM and AAAI. Dr. Wang’s website: https://www.wangdong.org.
Jundong Li
University of Virginia

Bio: Jundong Li is an Associate Professor at the University of Virginia with appointments in the Department of Electrical and Computer Engineering and the Department of Computer Science. Since the summer of 2022, he has also been a part-time LinkedIn Research Scholar. Prior to joining UVA, he received his Ph.D. in Computer Science from Arizona State University in 2019 under the supervision of Dr. Huan Liu, his M.Sc. in Computer Science from the University of Alberta in 2014, and his B.Eng. in Software Engineering from Zhejiang University in 2012. His research interests are broadly in data mining, machine learning, and AI, with a focus on graph machine learning, trustworthy/safe machine learning, and large language models. He has published over 150 papers in high-impact venues, with over 15,000 citations. He has won several prestigious awards, including the SIGKDD Rising Star Award (2024), PAKDD Best Paper Award (2024), PAKDD Early Career Research Award (2023), NSF CAREER Award (2022), SIGKDD Best Research Paper Award (2022), JP Morgan Chase Faculty Research Award (2021 & 2022), and Cisco Faculty Research Award (2021), among others. His group’s research is generously supported by NSF (CAREER, III, SaTC, SAI, S&CC), DOE, ONR, the Commonwealth Cyber Initiative, Jefferson Lab, JP Morgan, Cisco, Netflix, and Snap.
Neil Shah
Snap Research

Bio: Dr. Neil Shah is a Principal Scientist at Snapchat. His research focuses on large-scale user representation learning, recommender systems, and efficient ML, and has resulted in 70+ refereed publications at top data mining and machine learning venues. He has served as an organizer across multiple venues including KDD, WSDM, SDM, ICWSM, ASONAM, and more, as well as workshops and tutorials at KDD, AAAI, ICDM, and CIKM, and has received multiple best paper awards (KDD, CHI), departmental rising star awards (NCSU), and outstanding service and reviewer awards (NeurIPS, WSDM).

Accepted Papers

  • Best Paper Award: LLM-based Conversational Recommendation Agents with Collaborative Verbalized Experience
    Yaochen Zhu, Harald Steck, Dawen Liang, Yinhan He, Nathan Kallus, Jundong Li
    Abstract: Large language models (LLMs) have demonstrated impressive zero-shot capabilities in conversational recommender systems (CRS). However, effectively utilizing historical conversations remains a significant challenge. Current approaches either retrieve few-shot examples or extract global rules to augment the prompt for LLM-based CRSs, which fail to capture the implicit, preference-oriented knowledge. To address the above challenge, we propose LLM-based Conversational Recommendation Agents with Collaborative Verbalized Experience (CRAVE). CRAVE starts by sampling trajectories of LLM-based CRS agents on historical queries and establishing verbalized experience banks by reflecting the agents’ actions on user feedback. Additionally, we introduce a collaborative retriever network, fine-tuned with a content-parameterized multinomial likelihood on query-item pairs, to retrieve preference-oriented verbal experiences for new queries. Furthermore, we develop a debater-critic agent (DCA) system in which each agent maintains an independent collaborative experience bank, and the agents work together to enhance the CRS recommendations. We demonstrate that the open-ended debate-and-critique nature of DCA benefits significantly from the collaborative experience augmentation of CRAVE.
    PDF Code
  • C-TLSAN: Content-Enhanced Time-Aware Long- and Short-Term Attention Network for Personalized Recommendation
    Siqi Liang, Yudi Zhang, Yubo Wang
    Abstract: Sequential recommender systems aim to model users’ evolving preferences by capturing patterns in their historical interactions. Recent advances in this area have leveraged deep neural networks and attention mechanisms to effectively represent sequential behaviors and time-sensitive interests. In this work, we propose C-TLSAN (Content-Enhanced Time-Aware Long- and Short-Term Attention Network), an extension of the TLSAN architecture that jointly models long- and short-term user preferences while incorporating semantic content associated with items, such as product descriptions. C-TLSAN enriches the recommendation pipeline by embedding textual content linked to users’ historical interactions directly into both long-term and short-term attention layers. This allows the model to learn from both behavioral patterns and rich item content, enhancing user and item representations across temporal dimensions. By fusing sequential signals with textual semantics, our approach improves the expressiveness and personalization capacity of recommendation systems. We conduct extensive experiments on large-scale Amazon datasets, benchmarking C-TLSAN against state-of-the-art baselines, including recent sequential recommenders based on Large Language Models (LLMs), which represent interaction history and predictions in text form. Empirical results demonstrate that C-TLSAN consistently outperforms strong baselines in next-item prediction tasks. Notably, it improves AUC by 1.66%, Recall@10 by 93.99%, and Precision@10 by 94.80% on average over the best-performing baseline (TLSAN) across 10 Amazon product categories. These results highlight the value of integrating content-aware enhancements into temporal modeling frameworks for sequential recommendation.
    PDF Code
  • Not Just What, But When: Integrating Irregular Intervals to LLM for Sequential Recommendation
    Wei-Wei Du, Takuma Udagawa, Kei Tateno
    Abstract: Time intervals between purchased items are a crucial factor in sequential recommendation tasks, whereas existing approaches focus on item sequences and often overlook intervals by assuming they are static. However, dynamic intervals serve as a dimension of user profiling that describes not only the history within a single user but also distinguishes different users with the same item history. In this work, we propose IntervalLLM, a novel framework that integrates interval information into the LLM and incorporates a novel interval-infused attention to jointly consider information from items and intervals. Furthermore, unlike prior studies that address the cold-start scenario only from the perspectives of users and items, we introduce a new viewpoint, the interval perspective, to serve as an additional metric for evaluating recommendation methods in warm and cold scenarios. Extensive experiments on 3 benchmarks with both traditional and LLM-based baselines demonstrate that our IntervalLLM achieves not only a 4.4% improvement on average but also the best performance in warm and cold scenarios across all of the user, item, and proposed interval perspectives. In addition, we observe that the cold scenario from the interval perspective experiences the most significant performance drop among all recommendation methods. This finding underscores the necessity of further research on interval-based cold-start challenges and our integration of interval information in the realm of sequential recommendation tasks.
    PDF Code (see the illustrative interval-attention sketch after this list)
  • LLM-Enhanced Reranking for Complementary Product Recommendation
    Zekun Xu, Yudi Zhang
    Abstract: Complementary product recommendation, which aims to suggest items that are used together to enhance customer value, is a crucial yet challenging task in e-commerce. While existing graph neural network (GNN) approaches have made significant progress in capturing complex product relationships, they often struggle with the accuracy-diversity tradeoff, particularly for long-tail items. This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. Unlike previous works that use LLMs primarily for data preprocessing and graph augmentation, our method applies LLM-based prompting strategies directly to rerank candidate items retrieved from existing recommendation models, eliminating the need for model retraining. Through extensive experiments on public datasets, we demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least a 50% lift in accuracy metrics and a 2% lift in diversity metrics on average for the top recommended items across datasets.
    PDF Code (see the illustrative reranking sketch after this list)
  • Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications
    Xinye Tang, Haijun Zhai, Chaitanya Belwal, Vineeth Thayanithi, Philip Baumann, Yogesh K Roy
    Abstract: LLM-powered applications are highly sensitive to the quality of user prompts, and crafting high-quality prompts can often be challenging, especially for domain-specific applications. This paper presents a novel dynamic context-aware prompt recommendation system for domain-specific AI applications. Our solution combines contextual query analysis, retrieval-augmented knowledge grounding, hierarchical skill organization, and adaptive skill ranking to generate relevant and actionable prompt suggestions. The system leverages behavioral telemetry and a two-stage hierarchical reasoning process to dynamically select and rank relevant skills, and synthesizes prompts using both predefined and adaptive templates enhanced with few-shot learning. Experiments on real-world datasets demonstrate that our approach achieves high usefulness and relevance, as validated by both automated and expert evaluations.
    PDF Code
  • Robustness of LLM-Initialized Bandits for Recommendation Under Noisy Priors
    Adam Bayley, Kevin H. Wilson, Yanshuai Cao, Raquel Aoki, Xiaodan Zhu
    Abstract: Contextual bandits have proven effective for building personalized recommender systems, yet they suffer from the cold-start problem when little user interaction data is available. Recent work has shown that Large Language Models (LLMs) can help address this by simulating user preferences to warm-start bandits, a method known as Contextual Bandits with LLM Initialization (CBLI). While CBLI reduces early regret, it is unclear how robust the approach is to inaccuracies in LLM-generated preferences. In this paper, we extend the CBLI framework to systematically evaluate its sensitivity to noisy LLM priors. We inject both random and label-flipping noise into the synthetic training data and measure how these affect cumulative regret across three tasks generated from conjoint-survey datasets. Our results show that CBLI is robust to random corruption but exhibits clear breakdown thresholds under preference flipping: warm-starting remains effective up to 30% corruption, loses its advantage around 40%, and degrades performance beyond 50%. We further observe diminishing returns with larger synthetic datasets: beyond a point, more data can reinforce bias rather than improve performance under noisy conditions. These findings offer practical insights for deploying LLM-assisted decision systems in real-world recommendation scenarios.
    PDF Code (see the illustrative noise-injection sketch after this list)
  • Towards Large-scale Generative Ranking
    Yanhua Huang, Yuqi Chen, Xiong Cao, Rui Yang, Mingliang Qi, Yinghao Zhu, Qingchang Han, Yaowei Liu, Zhaoyu Liu, Xuefeng Yao, Yuting Jia, Leilei Ma, Yinqi Zhang, Taoyu Zhu, Liujie Zhang, Lei Chen, Weihang Chen, Min Zhu, Ruiwen Xu, Lei Zhang
    Abstract: Generative recommendation has recently emerged as a promising paradigm in information retrieval. However, generative ranking systems are still understudied, particularly with respect to their effectiveness and feasibility in large-scale industrial settings. This paper investigates this topic at the ranking stage of Xiaohongshu’s Explore Feed, a recommender system that serves hundreds of millions of users. Specifically, we first examine how generative ranking outperforms current industrial recommenders. Through theoretical and empirical analyses, we find that the primary improvement in effectiveness stems from the generative architecture, rather than the training paradigm. To facilitate efficient deployment of generative ranking, we introduce GenRank, a novel generative architecture for ranking. We validate the effectiveness and efficiency of our solution through online A/B experiments. The results show that GenRank achieves significant improvements in user satisfaction with nearly equivalent computational resources compared to the existing production system.
    PDF Code
  • Enhancing Text Classification with a Novel Multi-Agent Collaboration Framework Leveraging BERT
    Hediyeh Baban, Sai Abhishek Pidaparthi, Sichen Lu, Aashutosh Nema, Samaksh Gulati
    Abstract: We present a multi-agent collaboration framework that enhances text classification by dynamically routing low-confidence BERT predictions to specialized agents: Lexical, Contextual, Logic, Consensus, and Explainability. This escalation mechanism enables deeper analysis and consensus-driven decisions. Across four benchmark datasets, our system improves classification accuracy by up to 5.5% over standard BERT, offering a scalable and interpretable solution for robust NLP.
    PDF Code (see the illustrative escalation sketch after this list)
  • Optimizing Retrieval-Augmented Generation with Multi-Agent Hybrid Retrieval
    Hediyeh Baban, Sai Abhishek Pidaparthi, Samaksh Gulati, Aashutosh Nema
    Abstract: With the rapid growth of digital content and scientific literature, efficient information retrieval is increasingly vital for research automation, document management, and question answering. Traditional retrieval methods like BM25 and embedding-based search, though effective individually, often fall short on complex queries. We propose an Agentic AI workflow for Retrieval-Augmented Generation (RAG), integrating hybrid retrieval with multi-agent collaboration. Our system combines BM25 and semantic search, ensembles results via weighted cosine similarity, and applies contextual reordering using large language models. The workflow is powered by LangGraph, a multi-agent framework enabling dynamic agent coordination for document ranking and filtering. Experiments show a 4× reduction in retrieval latency (43s to 11s) and a 7% improvement in relevance accuracy. We also analyze weight sensitivity and discuss scalability.
    PDF Code (see the illustrative score-fusion sketch after this list)
  • End-to-End Personalization: Unifying Recommender Systems with Large Language Models
    Danial Ebrat, Tina Aminian, Sepideh Ahmadian, Luis Rueda
    Abstract: Recommender systems are essential for guiding users through the vast and diverse landscape of digital content by delivering personalized and relevant suggestions. However, improving both personalization and interpretability remains a challenge, particularly in scenarios involving limited user feedback or heterogeneous item attributes. In this article, we propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs) to address these limitations. LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews. These enriched embeddings serve as initial node features in a user–movie bipartite graph, which is processed using a GAT-based collaborative filtering model. To enhance ranking accuracy, we introduce a hybrid loss function that combines Bayesian Personalized Ranking (BPR), cosine similarity, and robust negative sampling. Post-processing involves reranking the GAT-generated recommendations using the LLM, which also generates natural-language justifications to improve transparency. We evaluate our model on benchmark datasets, including MovieLens 100k and 1M, where it consistently outperforms strong baselines. Ablation studies confirm that LLM-based embeddings and the cosine similarity term significantly contribute to performance gains. This work demonstrates the potential of integrating LLMs to improve both the accuracy and interpretability of recommender systems.
    PDF Code (see the illustrative hybrid-loss sketch after this list)
  • PromptShield: A Hybrid Framework for Copyright-Safe Text-to-Image Generation
    Shreya Garg
    Abstract: Text-to-image diffusion models are increasingly used in commercial creative workflows, including automated design generation for gift cards. However, these models, trained on large-scale web data, are prone to unintentionally generating content that infringes on copyrighted or trademarked material, particularly in the form of stylistic mimicry or semantic similarity to known intellectual property (IP). We propose PromptShield, a hybrid, dataset-free framework for proactively mitigating copyright risk in generative pipelines. PromptShield integrates three lightweight components: (1) zero-shot sentence-transformer-based prompt filtering to flag high-risk queries, (2) prompt rewriting using large language models (LLMs) to preserve creative intent while removing IP cues, and (3) style regularization at image generation time using negative prompting and classifier-free guidance. Applied to the domain of Amazon Gift Card design, PromptShield achieves a 92% reduction in IP-risky generations without degrading image quality or prompt-image alignment. Our method enables scalable, safe design generation.
    PDF Code (see the illustrative prompt-filtering sketch after this list)
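
The sketches below are editorial illustrations of ideas from the accepted papers above; they are not the authors' code. This first one relates to "Not Just What, But When": the paper's interval-infused attention is not specified here, so the sketch shows one generic way to inject irregular time gaps into self-attention, by bucketizing log-scaled gaps and adding a learned bias to the attention logits. All class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntervalBiasedAttention(nn.Module):
    """Single-head self-attention with an additive bias derived from the
    irregular time gaps between interactions (hypothetical sketch; not
    the paper's exact architecture)."""

    def __init__(self, d_model: int, n_buckets: int = 32):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.interval_bias = nn.Embedding(n_buckets, 1)  # one scalar per gap bucket
        self.n_buckets = n_buckets

    def forward(self, x: torch.Tensor, timestamps: torch.Tensor) -> torch.Tensor:
        # x: (B, T, d) item embeddings; timestamps: (B, T) event times in seconds.
        gaps = (timestamps.unsqueeze(2) - timestamps.unsqueeze(1)).abs()
        buckets = torch.log1p(gaps).long().clamp(max=self.n_buckets - 1)
        bias = self.interval_bias(buckets).squeeze(-1)             # (B, T, T)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.transpose(1, 2) / x.size(-1) ** 0.5 + bias  # interval-aware scores
        return F.softmax(logits, dim=-1) @ v

attn = IntervalBiasedAttention(d_model=64)
out = attn(torch.randn(2, 10, 64), torch.rand(2, 10) * 1e6)
```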
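
For "LLM-Enhanced Reranking for Complementary Product Recommendation", the abstract describes prompting an LLM to reorder candidates produced by an existing recommender, with no retraining. A minimal sketch of that pattern, assuming a generic `call_llm(prompt) -> str` chat wrapper (hypothetical; the paper's actual prompting strategies are more elaborate):

```python
def build_rerank_prompt(query_item: str, candidates: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"A customer just bought: {query_item}\n"
        f"Candidate complementary products:\n{numbered}\n"
        "Rerank the candidates so that items genuinely used together with "
        "the purchase come first, and diversify across product types. "
        "Answer with the candidate numbers, best first, comma-separated."
    )

def rerank(query_item, candidates, call_llm):
    """Model-agnostic reranking: any retriever supplies `candidates`."""
    reply = call_llm(build_rerank_prompt(query_item, candidates))
    order = [int(t) - 1 for t in reply.replace(" ", "").split(",") if t.isdigit()]
    kept = [i for i in order if 0 <= i < len(candidates)]
    # Fall back to the original order for anything the LLM dropped.
    kept += [i for i in range(len(candidates)) if i not in kept]
    return [candidates[i] for i in kept]

print(rerank("espresso machine", ["milk frother", "phone case", "coffee beans"],
             lambda p: "3, 1, 2"))  # stub in place of a real LLM call
```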
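
For "Robustness of LLM-Initialized Bandits Under Noisy Priors", the key experimental device is injecting label-flipping noise into LLM-simulated preferences before warm-starting the bandit. A small NumPy sketch of that protocol, using Beta-prior counts for a Thompson-sampling-style warm start (the paper's exact bandit and noise mechanics may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_preferences(synthetic_labels: np.ndarray, p: float) -> np.ndarray:
    """Label-flipping noise: each simulated binary preference is inverted
    with probability p (hypothetical reproduction of the corruption setup)."""
    flips = rng.random(synthetic_labels.shape) < p
    return np.where(flips, 1 - synthetic_labels, synthetic_labels)

def warm_start_counts(n_arms: int, arm_ids: np.ndarray, labels: np.ndarray):
    """Turn (possibly corrupted) synthetic data into per-arm Beta priors,
    so the bandit starts from these counts instead of a cold uniform prior."""
    wins = np.bincount(arm_ids, weights=labels, minlength=n_arms)
    plays = np.bincount(arm_ids, minlength=n_arms)
    return wins + 1.0, (plays - wins) + 1.0  # Beta(alpha, beta) per arm

labels = rng.integers(0, 2, size=1000)
arm_ids = rng.integers(0, 5, size=1000)
alpha, beta = warm_start_counts(5, arm_ids, flip_preferences(labels, 0.3))
```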
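
For "Enhancing Text Classification with a Novel Multi-Agent Collaboration Framework", the core mechanism is routing only low-confidence BERT predictions to specialist agents. An illustrative sketch of confidence-gated escalation with a majority vote (the 0.9 threshold and the voting rule are assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.9  # assumed escalation trigger; not from the paper

def classify_with_escalation(logits, texts, agents):
    """Keep confident BERT predictions; escalate the rest to specialist
    agents (lexical, contextual, logic, ...) and take a majority vote."""
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    out = preds.clone()
    for i in (conf < CONF_THRESHOLD).nonzero(as_tuple=True)[0].tolist():
        votes = [agent(texts[i]) for agent in agents]  # each returns a label id
        out[i] = max(set(votes), key=votes.count)
    return out

# Toy usage: row 0 is confident, row 1 escalates to two stub "agents".
logits = torch.tensor([[4.0, 0.1], [0.2, 0.3]])
print(classify_with_escalation(logits, ["a", "b"], [lambda t: 1, lambda t: 1]))
```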
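
For "Optimizing Retrieval-Augmented Generation with Multi-Agent Hybrid Retrieval", the fusion step combines BM25 and semantic scores with weights. A sketch under stated assumptions: min-max normalization and a 0.4/0.6 weight split are placeholders, and the paper's weighted-cosine-similarity ensembling is simplified here to a weighted sum of normalized scores.

```python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    span = scores.max() - scores.min()
    return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

def hybrid_scores(bm25_scores, dense_scores, w_lexical=0.4, w_semantic=0.6):
    """Weighted fusion of lexical (BM25) and semantic scores computed
    over the same candidate set; weights are illustrative only."""
    return (w_lexical * minmax(np.asarray(bm25_scores, dtype=float))
            + w_semantic * minmax(np.asarray(dense_scores, dtype=float)))

# Example: rank 4 documents under both signals, best first.
fused = hybrid_scores([12.1, 3.4, 7.7, 0.2], [0.81, 0.79, 0.12, 0.40])
print(np.argsort(-fused))
```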
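
For "End-to-End Personalization", the hybrid loss combines Bayesian Personalized Ranking with a cosine-similarity term. A minimal PyTorch sketch, assuming the embeddings come from the GAT and the terms are combined by a weighted sum (the 0.1 weight is a made-up placeholder, and the paper's robust negative sampling is omitted):

```python
import torch
import torch.nn.functional as F

def hybrid_ranking_loss(user, pos, neg, lam=0.1):
    """BPR pairwise term plus a cosine term pulling user and positive-item
    embeddings together. user/pos/neg: (B, d) embeddings, e.g. GAT outputs."""
    bpr = -F.logsigmoid((user * pos).sum(-1) - (user * neg).sum(-1)).mean()
    cos = (1.0 - F.cosine_similarity(user, pos, dim=-1)).mean()
    return bpr + lam * cos

user = torch.randn(8, 64, requires_grad=True)
pos, neg = torch.randn(8, 64), torch.randn(8, 64)
hybrid_ranking_loss(user, pos, neg).backward()  # gradients flow to the encoder
```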
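
For "PromptShield", component (1) is a zero-shot sentence-transformer filter that flags prompts semantically close to known IP. A sketch of one plausible reading; the reference list, model choice, and 0.6 threshold are all assumptions.

```python
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

IP_REFERENCES = ["Mickey Mouse", "Pikachu", "Darth Vader lightsaber"]  # hypothetical
THRESHOLD = 0.6  # assumed flagging threshold, not from the paper

model = SentenceTransformer("all-MiniLM-L6-v2")
ref_emb = model.encode(IP_REFERENCES, convert_to_tensor=True)

def is_ip_risky(prompt: str) -> bool:
    """Flag prompts whose embedding is close to any known IP reference."""
    sim = util.cos_sim(model.encode(prompt, convert_to_tensor=True), ref_emb)
    return bool(sim.max() >= THRESHOLD)

print(is_ip_risky("birthday card with a cheerful yellow electric mouse"))
```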

Organizers

Narges Tabari
AWS AI Labs

Bio: Narges Tabari is an Applied Scientist at AWS AI Labs. She received her PhD in Computer Science from the University of North Carolina in 2018. She works mainly on applications of NLP, including sentiment analysis, emotion detection, summarization, and text generation, and on the intersection of NLP with recommender systems and personalization. Before joining Amazon, she was a Research Scientist at the University of Virginia and an NLP Engineer at Genentech. She served as a Session Chair for the NAACL 2022 Industry Track and has extensive experience reviewing for conferences such as NAACL, AAAI, and ACL.
Aniket Deshmukh
AWS AI Labs

Bio: Aniket is an Applied Scientist at AWS AI Labs, focusing on recommendation systems and large language models. Previously, as a Senior Applied Scientist at Microsoft AI and Research, he contributed to Microsoft Advertising by working on multimedia ads, smart campaigns, and auto-bidding projects. Aniket earned his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, focusing on domain generalization and reinforcement learning. He is an active contributor to the academic community, regularly reviewing for conferences such as NeurIPS, ICML, CVPR, AISTATS, and JMLR, and has been recognized as a top reviewer at NeurIPS in 2021 and 2023, as well as AISTATS in 2022. Aniket has experience in organizing workshops at conferences like ICLR and WWW.
Wang-Cheng Kang
Google DeepMind

Bio: Dr. Wang-Cheng Kang is a Staff Research Engineer at Google DeepMind, working on LLM/GenAI for RecSys and LLM data efficiency. He holds a PhD in Computer Science from UC San Diego and interned at Adobe Research, Pinterest Labs, and Google Brain, focusing on recommender systems. He received the RecSys ’17 Best Paper Runner-up and proposed SASRec, the first Transformer-based recommendation method.
Neil Shah
Snap Research

Bio: Dr. Neil Shah is a Principal Scientist at Snapchat. His research focuses on large-scale user representation learning, recommender systems, and efficient ML, and has resulted in 70+ refereed publications at top data mining and machine learning venues. He has served as an organizer across multiple venues including KDD, WSDM, SDM, ICWSM, ASONAM, and more, as well as workshops and tutorials at KDD, AAAI, ICDM, and CIKM, and has received multiple best paper awards (KDD, CHI), departmental rising star awards (NCSU), and outstanding service and reviewer awards (NeurIPS, WSDM).
Julian McAuley
University of California, San Diego

Bio: Julian McAuley has been a professor in the Computer Science Department at the University of California, San Diego since 2014. Previously he was a postdoctoral scholar at Stanford University after receiving his PhD from the Australian National University in 2011. His research is concerned with developing predictive models of human behavior using large volumes of online activity data. He has organized a large number of workshops, including workshops on recommendation, e-commerce, and natural language processing.
James Caverlee
Texas A&M University

Bio: James Caverlee received his Ph.D. in Computer Science from Georgia Tech in 2007, co-advised by Ling Liu (CS) and Bill Rouse (ISYE). Before that, he earned two M.S. degrees from Stanford University: one in Computer Science in 2001 and one in Engineering-Economic Systems & Operations Research in 2000. His undergraduate degree is a B.A. in Economics (magna cum laude) from Duke University in 1996. He joined the faculty at Texas A&M in 2007 and spent most of his 2015 sabbatical at Google as a Visiting Scientist in Ed Chi’s group. He has been honored to receive an NSF CAREER Award, a DARPA Young Faculty Award, and an AFOSR Young Investigator Award, as well as several teaching awards.
George Karypis
University of Minnesota

Bio: Dr. George Karypis is a Senior Principal Scientist at AWS AI and a Distinguished McKnight University Professor and William Norris Chair in Large Scale Computing in the Department of Computer Science & Engineering at the University of Minnesota. His research interests span data mining, machine learning, high performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. His research has resulted in software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering high-dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and protein secondary structure prediction (YASSPP). He has coauthored over 350 papers on these topics and two books (“Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison-Wesley, 2nd edition, 2003)). He serves on the program committees of many conferences and workshops on these topics, and on the editorial boards of IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Knowledge Discovery from Data, Data Mining and Knowledge Discovery, the Social Network Analysis and Mining journal, the International Journal of Data Mining and Bioinformatics, Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology. He is a Fellow of the IEEE.