Overview

Personalization is key to understanding user behavior and has long been a central focus in the fields of knowledge discovery and information retrieval. Building personalized recommender systems is especially important now given the vast amount of user-generated textual content, which offers deep insight into user preferences. Recent advances in Large Language Models (LLMs) have significantly impacted research areas, particularly Natural Language Processing and Knowledge Discovery, giving these models the ability to handle complex tasks and make use of context.

However, the use of generative models and user-generated text for personalized systems and recommendation is relatively new, and it has already shown promising results. This workshop is designed to bridge the research gap between these fields and to explore personalized applications and recommender systems. We aim to fully leverage generative models to develop AI systems that are not only accurate but also focused on meeting individual user needs. Building on the momentum of previous successful forums, this workshop seeks to engage a diverse audience from academia and industry, fostering a dialogue that incorporates fresh insights from key stakeholders in the field.

Call for Papers

Deadline extended to 10 June 2024. We welcome papers that leverage generative models for recommendation and personalization, on topics including but not limited to those listed in the CFP. Papers can be submitted at OpenReview.

Information for the day of the workshop

Workshop at KDD 2024

  • Submission deadline: 28 May 2024 (extended to 10 June 2024)
  • Author notifications: 28 June 2024
  • Meeting: 25/26 August 2024

Schedule

Time (PDT) Agenda
2:00-2:10pm Opening remarks
2:10-2:50pm Keynote by Dietmar Jannach (40 min)
2:50-3:30pm Coffee Break/Poster Session
3:30-4:10pm Keynote by Xiao-Ming Wu (40 min)
4:15-4:55pm Keynote by Ed Chi (40 min)
5:00-6:00pm Panel Discussion (60 min)
Panelists: Ed Chi, James Caverlee, Huzefa Rangwala, Xiao-Ming Wu

Keynote Speakers

Ed Chi

Google DeepMind
Title of the talk: The Future of Discovery Assistance

Abstract: We’ve moved far beyond the old days of building recommendation systems with traditional ML and pattern-recognition techniques. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and the reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing three trends: first, the move to a single multimodal large model with reasoning abilities; second, the fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.
Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and neural recommendation agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store, with >660 product improvements since 2013. Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also holds a black belt in Taekwondo.

Dietmar Jannach

University of Klagenfurt, University of Bergen
Title of the talk: Leveraging Large Language Models for Recommender Systems

Abstract: Over the last two years, Large Language Models (LLMs) and AI assistants like ChatGPT have significantly impacted various research domains. Recommender systems are no exception, and a multitude of research works have been published recently that leverage the power of LLMs and other forms of generative AI for the development of next-generation recommender systems. In this talk, we will first review the latest survey works that aim to structure existing approaches to building LLM-based recommender systems. Then, we will discuss selected topics in more depth, including the use of LLMs for sequential recommendation problems. Furthermore, we will touch on questions regarding the evaluation of interactive LLM-generated recommendations and outline future directions toward multi-modal LLM-based recommendations.
Bio: Dietmar Jannach is a Professor at the University of Klagenfurt in Austria. He has authored more than 150 publications in areas including recommender systems technology, knowledge-based systems development, constraint-based systems, semantic web applications and web mining, and software engineering. He is a co-author of the book Recommender Systems: An Introduction. His current research focuses on the design and evaluation of machine learning algorithms for recommender systems and on the impact and value of recommender systems in practice.

Xiao-Ming Wu

The Hong Kong Polytechnic University
Title of the talk: Advancing Next-Generation Recommendation Models with Large Language Models

Abstract: In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) have emerged as powerful tools for enhancing recommendation systems. This talk will explore the transformative potential of LLMs in developing advanced next-generation recommendation models. We will delve into state-of-the-art technologies for integrating LLMs into recommendation systems, highlighting our recent work on leveraging LLMs for content enhancement, semantic tokenization, and generative recommendation.
Bio: Dr. Wu received her PhD in Electrical Engineering from Columbia University with her dissertation titled “Learning on Graphs with Partially Absorbing Random Walks: Theory and Practice”. She received her bachelor’s degree in Applied Mathematics and master’s degree in Computer Science, both from Peking University, and her MPhil degree in Information Engineering from the Chinese University of Hong Kong. Her research interests are in the broad areas of machine learning, pattern recognition, and data mining, with a particular focus on graph algorithms and their applications. She has strong interests in both fundamental and applied research, and regularly publishes in prestigious venues such as NIPS and CVPR. Her thesis research contributed to the theoretical understanding of state-of-the-art methods and improved upon them significantly for various applications. Her approaches have since been adopted in industry.

Panelists

Ed Chi
Google DeepMind

Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (LaMDA/Bard), neural recommendations, and reliable machine learning. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store, with >660 product improvements since 2013. Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center’s Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also holds a black belt in Taekwondo.
Xiao-Ming Wu
The Hong Kong Polytechnic University

Bio: Dr. Wu received her PhD in Electrical Engineering from Columbia University with her dissertation titled “Learning on Graphs with Partially Absorbing Random Walks: Theory and Practice”. She received her bachelor’s degree in Applied Mathematics and master’s degree in Computer Science, both from Peking University, and her MPhil degree in Information Engineering from the Chinese University of Hong Kong. Her research interests are in the broad areas of machine learning, pattern recognition, and data mining, with a particular focus on graph algorithms and their applications. She has strong interests in both fundamental and applied research, and regularly publishes in prestigious venues such as NIPS and CVPR. Her thesis research contributed to the theoretical understanding of state-of-the-art methods and improved upon them significantly for various applications. Her approaches have since been adopted in industry.
James Caverlee
Texas A&M University

Bio: James Caverlee received his Ph.D. in Computer Science from Georgia Tech in 2007, co-advised by Ling Liu (CS) and Bill Rouse (ISYE). Before that, he earned two M.S. degrees from Stanford University: one in Computer Science in 2001 and one in Engineering-Economic Systems & Operations Research in 2000. His undergraduate degree is a B.A. in Economics (magna cum laude) from Duke University in 1996. He joined the faculty at Texas A&M in 2007 and spent most of his 2015 sabbatical at Google as a Visiting Scientist in Ed Chi’s group. He has been honored to receive an NSF CAREER award, a DARPA Young Faculty award, and an AFOSR Young Investigator award, as well as several teaching awards.
Yashar Deldjoo
Polytechnic University of Bari, Italy

Bio: Yashar Deldjoo is a senior research scientist in computer science, currently a tenure-track Assistant Professor at the Polytechnic University of Bari, Italy. He serves as a Senior PC member for top conferences such as SIGIR, CIKM, ECAI, and WebConf, is an outstanding reviewer for ACM Transactions on Recommender Systems (TORS), and is an associate editor for ACM Computing Surveys (CSUR). He earned his Ph.D. with distinction (cum laude) in computer science from Politecnico di Milano and his M.Sc. in electrical engineering from Chalmers University of Technology.
Huzefa Rangwala
AWS AI Labs

Bio: At AWS AI/ML, Huzefa Rangwala spearheads a team of scientists and engineers, revolutionizing AWS services through advancements in graph machine learning, reinforcement learning, AutoML, low-code/no-code generative AI, and personalized AI solutions. His passion extends to transforming the analytical sciences with the power of generative AI. He is a Professor of Computer Science and the Lawrence Cranberg Faculty Fellow at George Mason University, where he also served as interim Chair from 2019 to 2020. He is the recipient of the National Science Foundation (NSF) CAREER Award, the 2014 University-wide Teaching Award, the Emerging Researcher/Creator/Scholar Award, and the 2018 Undergraduate Research Mentor Award. In 2022, Huzefa co-chaired the ACM SIGKDD conference in Washington, DC. His research interests include structured learning, federated learning, and ML fairness, intertwined with applying ML to problems in biology, biomedical engineering, and the learning sciences.

Accepted Papers

  • SUBER: An RL Environment with Simulated Human Behavior for Recommender Systems
    Nathan Corecco, Giorgio Piatti, Luca A Lanzendörfer, Flint Xiaofeng Fan, Roger Wattenhofer
    Abstract: Reinforcement learning (RL) has gained popularity in the realm of recommender systems due to its ability to optimize long-term rewards and guide users in discovering relevant content. However, the successful implementation of RL in recommender systems is challenging because of several factors, including the limited availability of online data for training on-policy methods. This scarcity requires expensive human interaction for online model training. Furthermore, the development of effective evaluation frameworks that accurately reflect the quality of models remains a fundamental challenge in recommender systems. To address these challenges, we propose a comprehensive framework for synthetic environments that simulate human behavior by harnessing the capabilities of large language models (LLMs). We complement our framework with in-depth ablation studies and demonstrate its effectiveness with experiments on movie and book recommendations. By utilizing LLMs as synthetic users, this work introduces a modular and novel framework for training RL-based recommender systems. The software, including the RL environment, is publicly available. (A minimal sketch of such an LLM-simulated-user environment appears after this list.)
    PDF Code
  • ECCR: Explainable and Coherent Complement Recommendation based on Large Language Models
    Zelong Li, Yan Liang, Ming Wang, Sungro Yoon, Jiaying Shi, Xin Shen, Xiang He, Chenwei Zhang, Wenyi Wu, Hanbo Wang, Jin Li, Jim Chan, Yongfeng Zhang
    Abstract: A complementary item is an item that pairs well with another item when consumed together. In the context of e-commerce, providing recommendations for complementary items is essential for both customers and stores. Current models for suggesting complementary items often rely heavily on user behavior data, such as co-purchase relationships. However, just because two items are frequently bought together does not necessarily mean they are truly complementary. Relying solely on co-purchase data may not align perfectly with the goal of making meaningful complementary recommendations. In this paper, we introduce the concept of "coherent complement recommendation", where "coherent" implies that recommended item pairs are compatible and relevant. Our approach builds upon complementary item pairs, with a focus on ensuring that recommended items are used well together and are contextually relevant. To enhance the explainability and coherence of our complement recommendations, we fine-tune a Large Language Model (LLM) on coherent complement recommendation and explanation generation tasks, since LLMs have strong natural-language explanation abilities and multi-task fine-tuning enhances task understanding. We have also devised an LLM-compatible method for compressing and quantizing user behavior information into language model tokens. Experimental results indicate that our model can provide more coherent complementary recommendations than existing state-of-the-art methods, and human evaluation validates that our approach achieves up to a 48% increase in the coherent rate of complement recommendations.
    PDF Code
  • HindRec: Aligning User Preferences for Recommendation via Hindsight Fine-tuning
    Yawen Zeng, Huanwen Wang, Lingyu Chen, Wenshu Chen, Chenran, Hao Chen
    Abstract: Given a user's historical interaction sequence, a recommendation model strives to understand the user's preferences and predict potential candidate items. Presently, the surging popularity of large language models (LLMs) has given rise to an array of generative recommendation systems. However, an unfortunate drawback of merging LLMs into recommendation systems is that they cannot capture the true preferences of users (i.e., likes and dislikes). In this paper, we venture to apply alignment techniques from LLMs to align interest preferences. Specifically, this paper first proposes the application of hindsight fine-tuning to a generative recommendation model, referred to as HindRec, which includes three components: prompt construction, recommendation via an LLM, and hindsight fine-tuning. By constructing training data in the form of hindsight feedback, we fine-tune an LLM via a three-stage strategy to fully utilize positive and negative instances to align user preferences. Extensive experiments corroborate that HindRec yields significant improvements.
    PDF Code
  • EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender Systems Graphs
    Utkarsh Priyam, Hemit Shah, Edoardo Botta
    Abstract: Most recommender systems research focuses on binary encodings of historical user-item interactions to predict future interactions. User features, item features, and interaction strengths remain largely under-utilized in this space, or are only indirectly utilized, despite proving largely effective in large-scale production recommendation systems. We propose a new attention mechanism, loosely based on the principles of collaborative filtering, called Row-Column Separable Attention (RCSA) to take advantage of real-valued interaction weights as well as user and item features directly. Building on this mechanism, we additionally propose a novel Graph Diffusion Transformer (GDiT) architecture which is trained to iteratively denoise the weighted interaction matrix of the user-item interaction graph directly. The weighted interaction matrix is built from the bipartite structure of the user-item interaction graph and the corresponding edge weights derived from user-item rating interactions. Inspired by recent progress in text-conditioned image generation, our method directly produces user-item rating predictions on the same scale as the original ratings by conditioning the denoising process on user and item features in a principled way. (A minimal sketch of the RCSA mechanism appears after this list.)
    PDF Code
  • Optimizing Novelty of Top-k Recommendations using Large Language Models and Reinforcement Learning
    Amit Sharma, Hua Li, Xue Li, Jian Jiao
    Abstract: Given an input query, a recommendation model is trained using user feedback data (e.g., click data) to output a ranked list of items. In real-world systems, besides accuracy, an important consideration for a new model is the novelty of its top-k recommendations w.r.t. an existing deployed model. However, novelty of the top-k items is a difficult goal to optimize a model for, since it involves a non-differentiable sorting operation on the model's predictions. Moreover, novel items, by definition, do not have any user feedback data. Given the semantic capabilities of large language models, we address these problems using a reinforcement learning (RL) formulation where large language models provide feedback for the novel items. However, given millions of candidate items, the sample complexity of a standard RL algorithm can be prohibitively high. To reduce sample complexity, we reduce the top-k list reward to a set of item-wise rewards and reformulate the state space to consist of tuples such that the action space is reduced to a binary decision, and we show that this reformulation results in significantly lower complexity when the number of items is large. We evaluate the proposed algorithm on improving novelty for a query-ad recommendation task on a large-scale search engine. Compared to supervised fine-tuning on recent pairs, the proposed RL-based algorithm leads to significant novelty gains with minimal loss in recall. We obtain similar results on the ORCAS dataset for matching queries to web pages. (A minimal sketch of the item-wise reformulation appears after this list.)
    PDF Code
  • Active Users are Good Teachers: Behavior Sequence Infilling for Generative Recommendation
    Qijiong Liu, Xiaoyu Dong, Quanyu Dai, Jieming Zhu, Zhenhua Dong, Xiao-Ming Wu
    Abstract: The training of recommender systems relies on user behavior data, which are typically heavily unbalanced: active users, who generate more interactions, are likely to be favored in the recommendation process, while inactive users may receive unsatisfactory recommendations, resulting in low customer retention. To mitigate this problem, we propose behavior sequence infilling (BeSI), a novel generative approach for sequential recommendation. BeSI seeks to fill the trend gaps in inactive user behavior sequences through the design of a trend gap meter to measure behavior coherence and a behavior sequence infiller to generate smoother and richer behavior sequences. BeSI is model-agnostic and can be easily integrated with any existing sequential recommendation model. We will release data and code for other researchers to reproduce our results.
    PDF Code
  • Policy optimization of language models to align fidelity and efficiency of generative retrieval in multi-turn dialogues
    Jeremy Curuksu
    Abstract: Reinforcement learning from human preferences can fine-tune language models for helpfulness and safety, but it does not directly address the fidelity and efficiency of reasoning agents in multi-turn dialogues. We propose a method to improve the validity, coherence, and efficiency of reasoning agents by defining a reward model as a mapping between predefined queries and tools, which can be applied to any custom orchestration environment. The reward model is used for policy optimization to fine-tune the clarification fallback behavior and help the agent learn when best to ask for clarification in multi-turn dialogues. This is demonstrated in several orchestration environments where, after fine-tuning with either proximal policy optimization or verbal reinforcement, the new policy systematically identifies the correct intents and tools in < 2 steps in over 99% of all sampled dialogues.
    PDF Code
  • PERSOMA: PERsonalized SOft ProMpt Adapter Architecture for Personalized Language Prompting
    Liam Hebert, Krishna Sayana, Ambarish Jash, Alexandros Karatzoglou, Sukhdeep Sodhi, Sumanth Doddapaneni, Yanli Cai, Dima Kuzmin
    Abstract: Understanding the nuances of a user's extensive interaction history is key to building accurate and personalized natural language systems that can adapt to evolving user preferences. To address this, we introduce PERSOMA, a Personalized Soft Prompt Adapter architecture. Unlike previous personalized prompting methods for large language models, PERSOMA offers a novel approach to efficiently capture user history. It achieves this by resampling and compressing interactions expressed as free-form text into expressive soft prompt embeddings, building upon recent research utilizing embedding representations as input for LLMs. We rigorously validate our approach by evaluating various adapter architectures, first-stage sampling strategies, parameter-efficient tuning techniques like LoRA, and other personalization methods. Our results demonstrate PERSOMA's superior ability to handle large and complex user histories compared to existing embedding-based and text-prompt-based techniques. (A minimal sketch of such a soft prompt adapter appears after this list.)
    PDF Code
  • LLMs for User Interest Exploration in Large-scale Recommendation Systems
    Jianling Wang, Haokai Lu, Yifan Liu, He Ma, Yueqi Wang, Yang Gu, Shuzhou Zhang, Ningren Han, Shuchao Bi, Lexi Baugher, Ed H. Chi, Minmin Chen
    Abstract: Traditional recommendation systems are subject to a strong feedback loop by learning from and reinforcing past user-item interactions, which in turn limits the discovery of novel user interests. To address this, we introduce a hybrid hierarchical framework combining Large Language Models (LLMs) and classic recommendation models for user interest exploration. The framework controls the interfacing between the LLMs and the classic recommendation models through "interest clusters", the granularity of which can be explicitly determined by algorithm designers. It recommends the next novel interests by first representing "interest clusters" using language, and employs a fine-tuned LLM to generate novel interest descriptions that are strictly within these predefined clusters. At the low level, it grounds these generated interests to an item-level policy by restricting classic recommendation models, in this case a transformer-based sequence recommender, to return items that fall within the novel clusters generated at the high level. We showcase the efficacy of this approach on an industrial-scale commercial platform serving billions of users. Live experiments show a significant increase in both exploration of novel interests and overall user enjoyment of the platform.
    PDF Code
  • Meta Knowledge for Retrieval Augmented Large Language Models
    Laurent Mombaerts, Tianyu Ding, Florian Felice, Adi Banerjee, Jonathan Taws, Tarik Borogovac
    Abstract: Retrieval Augmented Generation (RAG) is a technique used to augment Large Language Models (LLMs) with contextually relevant, time-critical, or domain-specific information without altering the underlying model parameters. However, constructing RAG systems that can effectively synthesize information from large and diverse sets of documents remains a significant challenge. We introduce a novel data-centric RAG workflow for LLMs, transforming the traditional retrieve-then-read system into a more advanced prepare-then-rewrite-then-retrieve-then-read framework, to achieve a higher, domain-expert-level understanding of the knowledge base. Our methodology relies on generating metadata and synthetic Questions and Answers (QA) for each document, as well as introducing the new concept of a Meta Knowledge Summary (MK Summary) for metadata-based clusters of documents. The proposed innovations enable personalized user-query augmentation and in-depth information retrieval across the knowledge base. Our research makes two significant contributions: using LLMs as evaluators and employing new comparative performance metrics, we demonstrate that (1) using augmented queries with synthetic question matching significantly outperforms traditional RAG pipelines that rely on document chunking (p < 0.01), and (2) meta knowledge-augmented queries additionally significantly improve retrieval precision and recall, as well as the final answer’s breadth, depth, relevancy, and specificity. Our methodology is cost-effective, costing less than $20 per 2000 research papers using Claude 3 Haiku, and can be adapted with any fine-tuning of either the language or embedding models to further enhance the performance of end-to-end RAG pipelines. (A schematic sketch of this pipeline appears after this list.)
    PDF Code
  • Survey for Landing Generative AI in Social and E-commerce Recsys – the Industry Perspectives
    Da Xu, Danqing Zhang, Guangyu Yang, Bo Yang, Shuyuan Xu, Lingling Zheng, Cindy Liang
    Abstract: Recently, generative AI (GAI), with its emerging capabilities, has presented unique opportunities for augmenting and revolutionizing industrial recommender systems (Recsys). Despite growing research efforts at the intersection of these fields, the integration of GAI into industrial Recsys remains in its infancy, largely due to the intricate nature of modern industrial Recsys infrastructure, operations, and product sophistication. Drawing upon our experiences in successfully integrating GAI into several major social and e-commerce platforms, this survey aims to comprehensively examine the underlying system and AI foundations, solution frameworks, and connections to key research advancements, as well as to summarize the practical insights and challenges encountered in the endeavor to integrate GAI into industrial Recsys. As pioneering work in this domain, we hope to outline the representative developments of relevant fields, shed light on practical GAI adoption in the industry, and motivate future research.
    PDF Code
  • Evaluation of Refined Conversational Recommendation Based on Reprompting ChatGPT with Feedback
    Kyle Dylan Spurlock, Cagla Acun, Esin Saka, Olfa Nasraoui
    Abstract: Recommendation algorithms seldom consider direct user input, resulting in superficial interaction between user and system despite efforts to include the user through conversation. Recently, Large Language Models (LLMs) have gained popularity across a number of domains for their extensive knowledge and transfer learning capabilities. For instance, ChatGPT boasts impressive interactivity and an easy-to-use interface. In this paper, we investigate the effectiveness of ChatGPT as a top-n conversational recommendation system. We build a rigorous evaluation pipeline to simulate how a user might realistically probe the model for recommendations: by first instructing and then reprompting with feedback to refine the recommendations. We further explore the effect of popularity bias in ChatGPT's recommendations, and compare its performance to baseline recommendation models. We find that reprompting with feedback is an effective strategy to improve recommendation relevancy, and that popularity bias can be mitigated through prompt engineering. (A minimal sketch of this reprompting loop appears after this list.)
    PDF Code
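
To make a few of the approaches above concrete, the short Python sketches below illustrate their core ideas. They are minimal illustrations under stated assumptions, not the authors' implementations; all function names, prompts, and parameters are hypothetical. The first sketch shows the core idea of SUBER: an RL environment whose reward signal comes from an LLM prompted to behave as a particular user. The llm callable, prompt format, rating scale, and episode length are assumptions.

    # Minimal sketch of an RL environment with an LLM as a synthetic user.
    # `llm` stands in for any text-in/text-out language model; the prompt
    # format, rating scale, and episode length are illustrative assumptions.
    from typing import Callable, List

    class LLMUserEnv:
        def __init__(self, llm: Callable[[str], str], persona: str, items: List[str]):
            self.llm = llm          # text-in/text-out language model
            self.persona = persona  # natural-language user profile
            self.items = items      # candidate items the agent can recommend
            self.history: List[str] = []

        def reset(self) -> str:
            self.history = []
            return self.persona     # initial observation: the user profile

        def step(self, action: int):
            item = self.items[action]
            prompt = (
                f"You are a user with this profile: {self.persona}\n"
                f"Previously consumed: {', '.join(self.history) or 'nothing'}\n"
                f"Rate the recommendation '{item}' from 1 to 5. Answer with a number."
            )
            try:
                rating = float(self.llm(prompt).strip()[0])
            except (ValueError, IndexError):
                rating = 1.0        # fall back on unparsable model output
            self.history.append(item)
            reward = (rating - 1.0) / 4.0   # normalise the rating to [0, 1]
            done = len(self.history) >= 10  # fixed-length episodes
            return self.persona, reward, done, {"item": item, "rating": rating}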
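
Next, a sketch of the row-column separable attention (RCSA) idea from EDGE-Rec: plain self-attention applied first across the user rows and then across the item columns of a real-valued interaction matrix. A single head with no learned projections is shown; shapes and details are assumptions.

    # Sketch of row-column separable attention over a weighted user-item matrix.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attend(x):
        """Scaled dot-product self-attention over the first axis of x."""
        scores = x @ x.T / np.sqrt(x.shape[1])
        return softmax(scores) @ x

    def rcsa(ratings):
        """Row attention mixes users; column attention mixes items."""
        h = attend(ratings)   # each user row attends over all user rows
        h = attend(h.T).T     # each item column attends over all item columns
        return h

    ratings = np.random.rand(8, 12)  # 8 users x 12 items, real-valued weights
    print(rcsa(ratings).shape)       # (8, 12): updated interaction matrix

Attending over rows and columns separately is presumably what keeps the mechanism tractable: it avoids a single attention pass over all user-item pairs at once.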
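
A sketch of the item-wise reformulation described in "Optimizing Novelty of Top-k Recommendations": instead of rewarding an entire sorted top-k list, each (query, item) tuple becomes a state with a binary keep/drop action, and an LLM judge supplies rewards for novel items that have no user feedback. The llm_judge interface and the keep-novel placeholder policy are assumptions.

    # Sketch: turn one top-k ranking problem into per-item one-step episodes.
    from typing import Callable, List, Tuple

    def itemwise_episodes(query: str,
                          candidates: List[str],
                          deployed_topk: List[str],
                          llm_judge: Callable[[str, str], float]
                          ) -> List[Tuple[Tuple[str, str], int, float]]:
        episodes = []
        for item in candidates:
            state = (query, item)               # state: a (query, item) tuple
            novel = item not in deployed_topk   # novelty w.r.t. deployed model
            action = 1 if novel else 0          # placeholder keep/drop policy
            # Novel items have no user feedback, so an LLM judges their relevance.
            reward = llm_judge(query, item) if novel else 0.0
            episodes.append((state, action, reward))
        return episodes

The point of the reformulation is that the action space collapses from ranking millions of items to one binary decision per item, which is what lowers the sample complexity when the item count is large.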
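
A sketch of a PERSOMA-style soft prompt adapter: the interaction history, rendered as free-form text, is embedded and then resampled by a small set of learned queries into a fixed number of soft prompt vectors that would be prepended to the LLM's input embeddings. The stand-in embedder, dimensions, and the Perceiver-style resampling are assumptions.

    # Sketch: compress a variable-length history into a fixed-size soft prompt.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(texts, dim=64):
        """Stand-in for a text embedder: one vector per interaction string."""
        return rng.normal(size=(len(texts), dim))

    def soft_prompt_adapter(history_texts, n_soft_tokens=4, dim=64):
        h = embed(history_texts, dim)                    # (n_interactions, dim)
        # Learned queries cross-attend over the history, resampling it down
        # to a fixed-length soft prompt (a Perceiver-style resampler).
        queries = rng.normal(size=(n_soft_tokens, dim))  # learned in practice
        scores = queries @ h.T / np.sqrt(dim)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ h                                     # (n_soft_tokens, dim)

    prompt_vecs = soft_prompt_adapter(["watched Dune", "liked Arrival", "skipped rom-com"])
    print(prompt_vecs.shape)  # (4, 64): prepended to the LLM's input embeddings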
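
A sketch of the prepare-then-rewrite-then-retrieve-then-read flow from "Meta Knowledge for Retrieval Augmented Large Language Models": an LLM writes synthetic Q&A pairs per document offline, and at query time the user query is rewritten against a Meta Knowledge Summary and matched to the synthetic questions rather than raw document chunks. The prompts and the llm/embed callables are assumptions.

    # Sketch of a prepare-then-rewrite-then-retrieve-then-read pipeline.
    from typing import Callable, Dict, List

    def prepare(docs: List[str], llm: Callable[[str], str]) -> List[Dict]:
        """Offline: one synthetic question (and answer) per document."""
        index = []
        for doc in docs:
            qa = llm(f"Write one question this document answers, then the answer:\n{doc}")
            index.append({"question": qa.split("\n")[0], "doc": doc})
        return index

    def answer(query: str, mk_summary: str, index: List[Dict],
               llm: Callable[[str], str], embed: Callable[[str], List[float]]) -> str:
        # Rewrite: condition the query on the cluster-level meta knowledge summary.
        rewritten = llm(f"Corpus summary: {mk_summary}\n"
                        f"Rewrite this query to be more specific: {query}")
        # Retrieve: match the rewritten query against the synthetic questions.
        qv = embed(rewritten)
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        best = max(index, key=lambda e: dot(qv, embed(e["question"])))
        # Read: answer the original query from the retrieved document.
        return llm(f"Answer '{query}' using this document:\n{best['doc']}")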
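
Finally, a sketch of the reprompt-with-feedback evaluation loop from "Evaluation of Refined Conversational Recommendation Based on Reprompting ChatGPT with Feedback": ask for top-n recommendations, score them against held-out relevant items, then feed the misses back as natural-language feedback and ask again. The prompt wording, parsing, and use of the held-out set to simulate user feedback are assumptions.

    # Sketch: refine top-n recommendations by reprompting with feedback.
    from typing import Callable, List, Set

    def recommend_with_feedback(llm: Callable[[str], str], liked: List[str],
                                relevant: Set[str], n: int = 10, rounds: int = 3):
        feedback = ""
        for _ in range(rounds):
            prompt = (f"I enjoyed: {', '.join(liked)}. {feedback}"
                      f"Recommend {n} similar items, one per line.")
            recs = [line.strip("- ").strip()
                    for line in llm(prompt).splitlines() if line.strip()][:n]
            hits = [r for r in recs if r in relevant]
            if len(hits) == len(recs):      # every recommendation was relevant
                break
            misses = [r for r in recs if r not in relevant]
            feedback = f"I did not like: {', '.join(misses)}. "  # refine next round
        return recs, len(hits) / max(len(recs), 1)               # top-n precision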

Organizers

Narges Tabari
AWS AI Labs

Bio: Narges Tabari is an Applied Scientist at AWS AI Labs. She received her PhD in Computer Science in 2018 from the University of North Carolina. She mainly works on applications of NLP, including sentiment analysis, emotion detection, summarization, text generation, and the intersection of NLP with recommender systems and personalization. Before joining Amazon, she was a Research Scientist at the University of Virginia and an NLP Engineer at Genentech. She served as a Session Chair for the NAACL 2022 Industry Track and has extensive experience reviewing for conferences such as NAACL, AAAI, and ACL.
Aniket Deshmukh

Aniket Deshmukh
AWS AI Labs

Bio: Aniket is an Applied Scientist at AWS AI Labs, focusing on recommendation systems and large language models. Previously, as a Senior Applied Scientist at Microsoft AI and Research, he contributed to Microsoft Advertising by working on multimedia ads, smart campaigns, and auto-bidding projects. Aniket earned his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, focusing on domain generalization and reinforcement learning. He is an active contributor to the academic community, regularly reviewing for venues such as NeurIPS, ICML, CVPR, AISTATS, and JMLR, and has been recognized as a top reviewer at NeurIPS in 2021 and 2023, as well as at AISTATS in 2022. Aniket has experience organizing workshops at conferences such as ICLR and WWW.
Wang-Cheng Kang

Wang-Cheng Kang
Google DeepMind

Bio: Dr. Wang-Cheng Kang is a Staff Research Engineer at Google DeepMind, working on LLM/GenAI for RecSys and LLM data efficiency. He holds a PhD in Computer Science from UC San Diego, and he interned at Adobe Research, Pinterest Labs, and Google Brain, focusing on recommender systems. He received the RecSys ’17 Best Paper Runner-up award and proposed SASRec, the first Transformer-based recommendation method.
Rashmi Gangadharaiah

Rashmi Gangadharaiah
AWS AI Labs

Bio: Dr. Rashmi Gangadharaiah is a Principal Machine Learning Scientist at AWS AI, Amazon. She currently works in the area of Conversational AI, focused on task-oriented dialog systems. She has previously worked on applications in healthcare analytics, question answering, information retrieval (especially from social media), machine translation, and speech science. She was previously a Research Staff Member at IBM Research, where she worked on knowledge discovery, drug safety, and interactive dialog systems in customer-support settings. She was also a postdoctoral scholar at UCSD, where she worked with several infectious disease doctors to build an interactive decision support system for differential diagnosis. Dr. Gangadharaiah earned her PhD in information technology, artificial intelligence, and machine learning from Carnegie Mellon University. She has experience organizing workshops (NLP4MC at ACL’20, NLP4MC at NAACL’21) and industry tracks (NAACL’22 Industry Track Chair) at top-tier NLP/ML conferences.
Hamed Zamani

Hamed Zamani
University of Massachusetts Amherst

Bio: Hamed Zamani is an Assistant Professor at the University of Massachusetts Amherst, where he also serves as the Associate Director of the Center for Intelligent Information Retrieval (CIIR), one of the top academic research labs in Information Retrieval worldwide. Prior to UMass, he was a Researcher at Microsoft working on search and recommendation problems. His research focuses on designing and evaluating (interactive) information access systems, including search engines, recommender systems, and question answering; it has produced over 85 refereed publications in the field, including pioneering work on LLM personalization. He is a recipient of the NSF CAREER Award, the ACM SIGIR Early Career Excellence in Research and Community Engagement awards, and an Amazon Research Award. He is an Associate Editor of the ACM Transactions on Information Systems (TOIS), has organized multiple workshops at the SIGIR, RecSys, WSDM, and WWW conferences, and served as a PC Chair at SIGIR 2022 (Short Papers).
Julian McAuley

Julian McAuley
University of California, San Diego

Bio: Julian McAuley has been a professor in the Computer Science Department at the University of California, San Diego since 2014. Previously he was a postdoctoral scholar at Stanford University after receiving his PhD from the Australian National University in 2011. His research is concerned with developing predictive models of human behavior using large volumes of online activity data. He has organized a large number of workshops, including workshops on recommendation, e-commerce, and natural language processing.
George Karypis

George Karypis
University of Minnesota

Bio: Dr. George Karypis is a Senior Principal Scientist at AWS AI and a Distinguished McKnight University Professor and William Norris Chair in Large Scale Computing in the Department of Computer Science & Engineering at the University of Minnesota. His research interests span data mining, machine learning, high-performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. His research has resulted in the development of software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering high-dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and protein secondary structure prediction (YASSPP). He has coauthored over 350 papers on these topics and two books (“Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison Wesley, 2003, 2nd edition)). He serves on the program committees of many conferences and workshops on these topics, and on the editorial boards of IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Knowledge Discovery from Data, Data Mining and Knowledge Discovery, the Social Network Analysis and Data Mining Journal, the International Journal of Data Mining and Bioinformatics, the journal Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology. He is a Fellow of the IEEE.

Program Committee

  • ABC (XYZ University)