July 30, 2020 Xi'an, China

The 3rd International Workshop on ExplainAble Recommendation and Search (EARS 2020)

Co-located with The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval

About

The workshop aims to promote research on, and applications of, Explainable Recommendation and Search, against the broader background of Explainable AI. Explainable recommendation and search seek to develop models and methods that not only generate high-quality recommendation or search results, but also provide intuitive explanations of those results for users or system designers, which can help improve system transparency, persuasiveness, trustworthiness, and effectiveness.

In a broader sense, researchers across the artificial intelligence community have also recognized the importance of Explainable AI, which aims to address a wide range of explainability problems in deep learning, computer vision, autonomous driving, and natural language processing. Recent regulations such as the EU GDPR and the California Consumer Privacy Act of 2018 also encourage explainability and users' right to explanation of algorithmic decisions in AI systems. This underlines the importance for our IR/RecSys community of addressing the explainability issues of various recommendation and search systems.

We welcome contributions of both long and short papers on a wide range of topics, including but not limited to explainable recommendation and search models, incorporating multi-modal information for explanation, evaluation of explainable recommendation and search, and user studies for explainable recommendation and search. More topics are listed in the call for papers. Papers must be submitted via EasyChair at https://easychair.org/conferences/?conf=ears2020 by 23:59 AoE (Anywhere on Earth) on May 26, 2020.

EARS'20 (co-located with SIGIR'20)
Xi'an, China

Program

Information for joining the workshop:
Join the zoom here
Meeting ID: 920 1010 9707
Passcode: 871758
The time zone is: GMT+8
9:30 – 9:45 Opening and Workshop Introduction
9:45 – 10:45 Invited talk: Explainable Recommendation: Perspectives from Knowledge Graph Reasoning and Natural Language Generation (Dr. Xiting Wang, Microsoft Research Asia)
10:45 – 11:15 Paper talk: Mining Implicit Entity Preference from User-Item Interaction Data for Knowledge Graph Completion via Adversarial Learning (Gaole He, Junyi Li, Wayne Xin Zhao, Peiju Liu and Ji-Rong Wen)
11:15 – 14:30 Break
14:30 – 15:30 Invited talk: Interpretability of Search Models (Prof. Avishek Anand, Leibniz University of Hannover)
15:30 – 16:30 Invited talk: User-Centric Design and Evaluation of Explanation for Recommendation (Prof. Li Chen, Hong Kong Baptist University)
16:45 – 17:15 Paper talk: Helping results assessment by adding explainable elements to the deep relevance matching model (Ioannis Chios and Suzan Verberne)
17:15 – 17:45 Paper talk: Explaining black-box neural ranking models using sentence impact analysis (Lars Holdijk, Maurice Verbrugge, Emma Gerritse and Arjen de Vries)
17:45 – 21:00 Break
21:00 – 21:30 Paper talk: Generating medical screening questionnaires through analysis of social media data (Ortal Ashkenazi, Elad Yom-Tov and Liron Vardi David)
21:30 – 22:00 Paper talk: Toward Explainability in Professional Search (Tony Russell-Rose and Andrew MacFarlane)

Keynote Speech: Explainable Recommendation: Perspectives from Knowledge Graph Reasoning and Natural Language Generation [slides]

Dr. Xiting Wang, Microsoft Research Asia, 9:45 - 10:45

Abstract: Explainable recommendation, which provides explanations about why an item is recommended, has attracted increasing attention. In this talk, we discuss definitions, goals, application scenarios, and recent advances in explainable recommendation, with a focus on knowledge graph reasoning and natural language generation (NLG). Knowledge graph reasoning aims to extract multi-hop paths from the knowledge graph that connect users with the recommended items, while NLG-based methods produce highly personalized explanations word by word by leveraging NLG models such as RNNs or Transformers. We will introduce in detail our recent progress in these two directions, e.g., how we leverage reinforcement learning and demonstrations for knowledge graph reasoning, and how multi-task learning can be used to generate high-quality recommendation explanations.

Bio: Xiting Wang is a senior researcher at Microsoft Research Asia. Her research interests include explainable recommendation and explainable machine learning. Xiting has successfully applied explainable recommendation methods to multiple products at Microsoft, including Ad Generative Optimization and Keyword Generation for Bing Ads and Native Ads. She has also published papers on model explainability at reputable venues and serves as a program committee member of top conferences such as The Web Conference, AAAI, and IJCAI.

Keynote Speaker: Dr. Xiting Wang, Microsoft Research Asia

Keynote Speech: Interpretability of Search Models [slides]

Prof. Avishek Anand, Leibniz University of Hannover, 14:30 – 15:30

Abstract: Interpreting the predictions of a machine learning (ML) model is an important objective for improving trust and accountability in learning systems. In this talk, we look at the ML systems used in information retrieval, namely ranking models. A key problem in information retrieval is understanding the latent intent behind a user's under-specified query. Ranking models that correctly uncover the query intent often perform well on the document ranking task. We will examine the problem of interpretability for text-based ranking models by trying to unearth the query intent as understood by complex retrieval models. Further, we will look into the challenges in building interpretable search systems and the opportunities they present for the IR community.

Bio: Avishek Anand is an Assistant Professor at Leibniz University Hannover and a member of the L3S Research Center. His research broadly falls at the intersection of machine learning and information retrieval problems on the Web. Specifically, he has worked on designing scalable algorithms to improve search and graph representation algorithms for the Web. Recently, he became interested in the interpretability of retrieval models, that is, how we can better understand the rationale behind the predictions of a black-box retrieval model. His research is supported by Amazon research awards and Schufa. He holds a PhD in computer science from the Max Planck Institute for Informatics, Saarbrücken, and has also been a visiting scholar at Amazon.

Keynote Speaker: Prof. Avishek Anand, Leibniz University of Hannover.

Keynote Speech: User-Centric Design and Evaluation of Explanation for Recommendation [slides]

Prof. Li Chen, Hong Kong Baptist University, 15:30 – 16:30

Abstract: Explanation has increasingly been recognized as crucial to recommender systems, given that it can help increase a system's transparency and persuasiveness. In recent years, various types of explanation techniques have been proposed, ranging from visualization diagrams to natural language (NL) outputs. However, most existing work has emphasized algorithm development, adopting standard offline metrics (such as BLEU and ROUGE for explanations in NL) to evaluate the generated explanations. The explanations' practical benefits to end users have been measured and identified far less often. In this talk, I will mainly introduce our experiences in designing and conducting user-centric evaluations of recommendation explanations. Based on a series of empirical studies performed from the users' perspective, we have derived a set of guidelines, not only for enhancing explanation interface design so as to inspire user trust and improve decision quality, but also for guiding researchers on the experimental procedure of carrying out an effective user study.

Bio: Dr. Li Chen is currently an Associate Professor in the Department of Computer Science at Hong Kong Baptist University (HKBU). She obtained her PhD degree in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland, and her Bachelor's and Master's degrees from Peking University, China. Her recent research has mainly focused on recommender system design and development, integrating approaches from multiple disciplines including artificial intelligence, human-computer interaction, user modeling, and user behavior analytics. The application domains cover social media, e-commerce, and online education. She has authored and co-authored over 100 publications, most of which have appeared in refereed high-impact journals (e.g., IJHCS, TOCHI, UMUAI, TIST, TIIS, KNOSYS, Behaviour & Information Technology, AI Magazine, AI Communications, and IEEE Intelligent Systems) and conferences (e.g., WWW, IUI, CHI, CIKM, SIGKDD, SDM, IJCAI, AAAI, ACM RecSys, ACM UMAP, INTERACT), with over 5,000 citations so far. Her co-authored paper received the UMUAI 2018 James Chen Best Paper Award. She is an ACM Senior Member, a steering committee member of the ACM Conference on Recommender Systems (RecSys), an editorial board member of User Modeling and User-Adapted Interaction (UMUAI), an editorial board member of the Journal of Intelligent Information Systems (JIIS), and an associate editor of ACM Transactions on Interactive Intelligent Systems (TiiS). She has also served as program co-chair of ACM RecSys'20 and ACM UMAP'18.

Keynote Speaker: Prof. Li Chen, Hong Kong Baptist University

Call for Papers

We welcome contributions of both long and short papers from a wide range of topics, including but not limited to the following topics of interest:

  1. New Models for Explainable Recommendation and Search
    • Explainable shallow models for recommendation and search
    • Explainable neural models for recommendation and search
    • Explainable sequential modeling
    • Explainable optimization algorithms and theories
    • Causal inference for explainable recommendation
  2. Using Different Information Sources for Explanation
    • Text-based modeling and explanation
    • Image-based modeling and explanation
    • Using knowledge-base for explanation
    • Audio/Video-based modeling and explanation
    • Integrating heterogeneous information for explanation
  3. User Behavior Analysis and HCI for Explanation
    • Explanation and user satisfaction
    • Eye tracking and attention modeling
    • Mouse movement analysis
  4. New Types of Explanations for Search and Recommendation
    • Textual sentence explanations
    • Visual explanations
    • Statistic-based explanations
    • Aggregated explanations
    • Context-aware explanations
  5. Evaluation of Explainable Recommendation and Search
    • Offline evaluation measures and protocols
    • Online evaluation measures and protocols
    • User study for explanation evaluation
  6. Applications of Explainable Recommendation and Search
    • Explainable product search and recommendation
    • Explainable web search
    • Explainable social recommendation
    • Explainable news recommendation
    • Explainable point-of-interest recommendation
    • Explainable multi-media search and recommendation

PAPER SUBMISSION GUIDELINES

EARS 2020 paper submissions can be either long (maximum 9 pages plus references) or short (maximum 4 pages plus references). Each accepted paper (whether short or long) will have an oral presentation in a plenary session and will also be allocated a presentation slot in a poster session to encourage discussion and follow-up between authors and attendees.

EARS 2020 submissions are double-blind. All submissions and reviews will be handled electronically. EARS 2020 submissions should be prepared according to the standard double-column ACM SIG proceedings format. Additional information about formatting and style files is available on the ACM website. Papers must be submitted via EasyChair at https://easychair.org/conferences/?conf=ears2020 by 23:59 AoE (Anywhere on Earth) on May 26, 2020.

For inquiries about the workshop and submissions, please email ears2020@easychair.org

Important Dates

All times are 23:59 AoE (Anywhere on Earth)
May 26, 2020: Submission due
June 07, 2020: Paper notification
June 30, 2020: Camera-ready submission
July 30, 2020: Workshop day

Workshop Co-Chairs


Yongfeng Zhang Rutgers University


Xu Chen University College London


Yi Zhang UC Santa Cruz


Min Zhang Tsinghua University


Chirag Shah University of Washington

THE VENUE

EARS'20 will be co-located with The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, to be held in Xi'an, China on July 30, 2020.