Abstract

Recommender systems (RecSys) [3], which aim to find the items a particular user wants, are an important area in data mining and machine learning. However, as recommendation tasks become more diverse and recommendation models grow more complicated, it is increasingly difficult to develop a RecSys that adapts well to a new task. Recently, automated machine learning (AutoML) [1, 2], which aims to ease the use of learning tools and to design task-dependent learning models, has become an important and popular area, driven by both practical needs and research value.

Modern RecSys contains two phases, matching and ranking (Figure 1). In the first phase, collaborative filtering (CF) models are employed to retrieve thousands of candidate items from a billion-scale item pool. The input of this phase is mainly the user-item interaction history, so the key to CF models is to compute the similarity between a user and an item through a designed interaction function. Since the best choice of interaction function varies with the task setting, i.e., the dataset and evaluation metric, it can be searched by AutoML. In the second phase, feature-based recommendation models, also known as click-through rate (CTR) prediction models, are utilized to further rank the output of the matching phase. One of the most important operations in this phase is to generate effective features for users and items from the input data, so AutoML-based feature generation methods can be deployed. From a wider perspective, model training in both phases suffers from the high cost of hyper-parameter tuning; hyper-parameter optimization, a major direction of AutoML, can reduce the human effort spent on tuning recommendation models. Finally, since user-item interactions can naturally be modeled as a bipartite graph, graph neural networks (GNNs) have been widely used in RecSys, and designing a GNN architecture for a specific recommendation task is itself laborious; AutoML, especially neural architecture search (NAS), can help reduce this effort. In summary, AutoML can help to build RecSys from these four aspects (see the red text in Figure 1).
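To make the matching phase concrete, below is a minimal NumPy sketch, not code from the tutorial, of how an AutoML procedure might select an interaction function for CF by validation performance. The synthetic embeddings and labels, the three candidate functions, and the AUC-based selection are illustrative assumptions only; a real method such as the interaction function search in [6] retrains the model for every candidate and explores a far richer, structured search space.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed synthetic data: user/item embeddings and binary interaction labels.
    n_users, n_items, dim = 200, 300, 16
    U = rng.normal(size=(n_users, dim))   # user embeddings
    V = rng.normal(size=(n_items, dim))   # item embeddings
    raw = U @ V.T
    labels = (raw + rng.normal(scale=2.0, size=raw.shape) > 2.0).astype(float)

    # A tiny search space of candidate interaction functions.
    def inner_product(u, v):
        return (u * v).sum(axis=-1)

    def neg_l1_distance(u, v):
        return -np.abs(u - v).sum(axis=-1)

    def weighted_product(u, v, w=rng.normal(size=dim)):
        # Element-wise product re-weighted by a vector (random here, learned in practice).
        return ((u * v) * w).sum(axis=-1)

    candidates = {
        "inner_product": inner_product,
        "neg_l1_distance": neg_l1_distance,
        "weighted_product": weighted_product,
    }

    def auc(pred, y):
        # Probability that a random positive pair outranks a random negative pair.
        pos, neg = pred[y == 1], pred[y == 0]
        return float((pos[:, None] > neg[None, :]).mean())

    # Hold out some user-item pairs as validation interactions.
    pairs = np.array([(u, i) for u in range(n_users) for i in range(n_items)])
    rng.shuffle(pairs)
    val_u, val_i = pairs[:5000, 0], pairs[:5000, 1]
    y_val = labels[val_u, val_i]

    # The AutoML step: evaluate every candidate on validation data and keep the best.
    results = {name: auc(f(U[val_u], V[val_i]), y_val) for name, f in candidates.items()}
    best = max(results, key=results.get)
    print(results)
    print("selected interaction function:", best)

The same select-by-validation loop underlies the other three aspects in Figure 1, e.g., searching over generated feature crosses [7] or over GNN aggregation architectures [16].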


Figure 1: How AutoML and RecSys integrate and mutually benefit each other.

Schedule

Time Event
8:00-8:40 Part 1: An Introduction to Automated Machine Learning (AutoML). [Slides]
  Speaker: Quanming Yao
8:40-9:20 Part 2: Why AutoML is Needed in RecSys and Recent Advances. [Slides]
  Speaker: Chen Gao
9:20-9:30 Break
9:30-10:10 Part 3: Automated Graph Neural Network for RecSys. [Slides]
  Speaker: Huan Zhao
10:10-10:50 Part 4: Automated Knowledge Graph Embedding. [Slides]
  Speaker: Yongqi Zhang
10:50-11:00 Part 5: Discussion

Organizers

Quanming Yao, Department of Electronic Engineering, Tsinghua University / 4Paradigm Inc., Beijing, China.

Yong Li, Department of Electronic Engineering, Tsinghua University, Beijing, China.

Chen Gao, Department of Electronic Engineering, Tsinghua University, Beijing, China.

Huan Zhao, 4Paradigm Inc., Beijing, China.

Yongqi Zhang, 4Paradigm Inc., Beijing, China.

References

Due to space limitations, we only list closely related papers.

  1. F. Hutter, L. Kotthoff, and J. Vanschoren, "Automated machine learning: Methods, systems, challenges", Springer Nature, 2019.
  2. Q. Yao and M. Wang, "Taking human out of learning applications: A survey on automated machine learning", arXiv preprint, 2018.
  3. P. Resnick and H. R. Varian, "Recommender systems", CACM, 1997.
  4. T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks", in ICLR, 2017.
  5. H. Zhao, Q. Yao, J. Li, Y. Song, and D. Lee, "Meta-graph based recommendation fusion over heterogeneous information networks", in KDD, 2017.
  6. Q. Yao, X. Chen, J. T. Kwok, Y. Li, and C.-J. Hsieh, "Efficient neural interaction function search for collaborative filtering", in WebConf, 2020.
  7. Y. Luo, M. Wang, H. Zhou, Q. Yao, W.-W. Tu, Y. Chen, W. Dai, and Q. Yang, "AutoCross: Automatic feature crossing for tabular data in real-world applications", in KDD, 2019.
  8. Y. Zheng, C. Gao, L. Chen, D. Jin, and Y. Li, "DGCN: Diversified recommendation with graph convolutional networks", in WebConf, 2021.
  9. B. Jin, C. Gao, X. He, D. Jin, and Y. Li, "Multi-behavior recommendation with graph convolutional networks", in SIGIR, 2020.
  10. S. Liu, C. Gao, Y. Chen, D. Jin, and Y. Li, "Learnable embedding sizes for recommender systems", in ICLR, 2021.
  11. R. Ying, R. He, K. Chen, P. Eksombatchai, W. Hamilton, and J. Leskovec, "Graph convolutional neural networks for web-scale recommender systems", in KDD, 2018.
  12. X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua, "Neural graph collaborative filtering", in SIGIR, 2019.
  13. Y. Gao, H. Yang, P. Zhang, C. Zhou, and Y. Hu, "GraphNAS: Graph neural architecture search with reinforcement learning", in IJCAI, 2020.
  14. J. You, R. Ying, and J. Leskovec, "Design space for graph neural networks", in NeurIPS, 2020.
  15. H. Zhao, L. Wei, and Q. Yao, "Simplifying architecture search for graph neural network", in CIKM-CSSA, 2020.
  16. H. Zhao, Q. Yao, and W. Tu, "Search to aggregate neighborhood for graph neural network", in ICDE, 2021.
  17. Z. Zhang, X. Wang, and W. Zhu, "Automated machine learning on graphs: A survey", in IJCAI, 2021.
  18. S. Ji, S. Pan, E. Cambria, P. Marttinen, and P. S. Yu, "A survey on knowledge graphs: Representation, acquisition and applications", in TNNLS, 2021.
  19. Z. Sun, Z. Deng, J. Nie, and J. Tang, "RotatE: Knowledge graph embedding by relational rotation in complex space", in ICLR, 2019.
  20. X. Wang, X. He, Y. Cao, M. Liu, and T.-S. Chua, "KGAT: Knowledge graph attention network for recommendation", in KDD, 2019.
  21. Y. Zhang, Q. Yao, W. Dai, and L. Chen, "AutoSF: Searching scoring functions for knowledge graph embedding", in ICDE, 2020.
  22. Y. Zhang, Q. Yao, and L. Chen, "Interstellar: Searching recurrent architecture for knowledge graph embedding", in NeurIPS, 2020.