Building Recommender Systems with Machine Learning and AI

Automated recommendations are everywhere: Netflix, Amazon, YouTube, and more. Recommender systems learn about your unique interests and show you the products or content they think you'll like best. Discover how to build your own recommender systems from one of the pioneers in the field. Frank Kane spent over nine years at Amazon, where he led the development of many of the company's personalized product recommendation technologies. In this course, he covers recommendation algorithms based on neighborhood-based collaborative filtering as well as more modern techniques, including matrix factorization and deep learning with artificial neural networks. Along the way, you can learn from Frank's extensive industry experience and understand the real-world challenges of applying these algorithms at large scale to real-world data. You can also go hands-on, developing your own framework to test algorithms and building your own neural networks using technologies like Amazon DSSTNE, AWS SageMaker, and TensorFlow.

Topics include:

  • Top-N recommender architectures
  • Types of recommenders
  • Python basics for working with recommenders
  • Evaluating recommender systems
  • Measuring your recommender
  • Reviewing a recommender engine framework
  • Content-based filtering
  • Neighborhood-based collaborative filtering
  • Matrix factorization methods
  • Deep learning basics
  • Applying deep learning to recommendations
  • Scaling with Apache Spark, Amazon DSSTNE, and AWS SageMaker
  • Real-world challenges and solutions with recommender systems
  • Case studies from YouTube and Netflix
  • Building hybrid, ensemble recommenders
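
As a taste of the neighborhood-based collaborative filtering listed above, here is a minimal, self-contained sketch in plain Python/NumPy. It is not taken from the course materials: the toy ratings matrix and the cosine_sim/predict helpers are assumptions made for illustration. It simply predicts one user's rating for one item from the most similar users who rated that item.

```python
# A minimal user-based collaborative filtering sketch in plain Python/NumPy.
# Illustrative only -- this is NOT the course's recommender framework; the toy
# ratings matrix, cosine_sim(), and predict() are assumptions for this example.
# Rows are users, columns are items, 0 means "unrated".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity computed only over items both users have rated."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(user, item, k=2):
    """Predict a rating as the similarity-weighted average of the k most
    similar users who actually rated the item (the 'neighborhood')."""
    neighbors = [
        (cosine_sim(ratings[user], ratings[other]), ratings[other, item])
        for other in range(ratings.shape[0])
        if other != user and ratings[other, item] > 0
    ]
    neighbors.sort(reverse=True)
    top = neighbors[:k]
    total_sim = sum(sim for sim, _ in top)
    if total_sim == 0:
        return 0.0  # no usable neighbors; a real system would need a fallback
    return sum(sim * rating for sim, rating in top) / total_sim

# Estimate how user 0 would rate item 2, which they have not rated yet.
print(round(predict(user=0, item=2), 2))
```

The course goes well beyond this toy example, wiring algorithms like this into a reusable evaluation framework and measuring them with RMSE/MAE, Top-N hit rate, coverage, diversity, and novelty on MovieLens data.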

Course Information

  • Original title: Building Recommender Systems with Machine Learning and AI
  • Duration: 9 hours 5 minutes
  • Subtitles: English

Course Outline

  1. Install Anaconda, review course materials, and create movie recommendations
  2. Course roadmap
  3. Understanding you through implicit and explicit ratings
  4. Top-N recommender architecture
  5. Review the basics of recommender systems
  6. Data structures in Python
  7. Functions in Python
  8. Booleans, loops, and a hands-on challenge
  9. Train/test and cross-validation
  10. Accuracy metrics (RMSE and MAE)
  11. Top-N hit rate: Many ways
  12. Coverage, diversity, and novelty
  13. Churn, responsiveness, and A/B tests
  14. Review ways to measure your recommender
  15. Walkthrough of RecommenderMetrics.py
  16. Walkthrough of TestMetrics.py
  17. Measure the performance of SVD recommendations
  18. Our recommender engine architecture
  19. Recommender engine walkthrough, part 1
  20. Recommender engine walkthrough, part 2
  21. Review the results of our algorithm evaluation
  22. Content-based recommendations and the cosine similarity metric
  23. K-nearest neighbors (KNN) and content recs
  24. Producing and evaluating content-based movie recommendations
  25. Bleeding edge alert: Mise-en-scene recommendations
  26. Dive deeper into content-based recommendations
  27. Measuring similarity and sparsity
  28. Similarity metrics
  29. User-based collaborative filtering
  30. User-based collaborative filtering: Hands-on
  31. Item-based collaborative filtering
  32. Item-based collaborative filtering: Hands-on
  33. Tuning collaborative filtering algorithms
  34. Evaluating collaborative filtering systems offline
  35. Measure the hit rate of item-based collaborative filtering
  36. KNN recommenders
  37. Running user- and item-based KNN on MovieLens
  38. Experiment with different KNN parameters
  39. Bleeding edge alert: Translation-based recommendations
  40. Principal component analysis (PCA)
  41. Singular value decomposition (SVD)
  42. Running SVD and SVD++ on MovieLens
  43. Improving on SVD
  44. Tune the hyperparameters on SVD
  45. Bleeding edge alert: Sparse linear methods (SLIM)
  46. Deep learning introduction
  47. Deep learning prerequisites
  48. History of artificial neural networks
  49. Playing with TensorFlow
  50. Training neural networks
  51. Tuning neural networks
  52. Introduction to TensorFlow
  53. Handwriting recognition with TensorFlow, part 1
  54. Handwriting recognition with TensorFlow, part 2
  55. Introduction to Keras
  56. Handwriting recognition with Keras
  57. Classifier patterns with Keras
  58. Predict political parties of politicians with Keras
  59. Intro to convolutional neural networks (CNNs)
  60. CNN architectures
  61. Handwriting recognition with CNNs
  62. Intro to recurrent neural networks (RNNs)
  63. Training recurrent neural networks
  64. Sentiment analysis of movie reviews using RNNs and Keras
  65. Intro to deep learning for recommenders
  66. Restricted Boltzmann machines (RBMs)
  67. Recommendations with RBMs, part 1
  68. Recommendations with RBMs, part 2
  69. Evaluating the RBM recommender
  70. Tuning restricted Boltzmann machines
  71. Exercise results: Tuning an RBM recommender
  72. Auto-encoders for recommendations: Deep learning for recs
  73. Recommendations with deep neural networks
  74. Clickstream recommendations with RNNs
  75. Get GRU4Rec working on your desktop
  76. Exercise results: GRU4Rec in action
  77. Bleeding edge alert: Deep factorization machines
  78. More emerging tech to watch
  79. Introduction and installation of Apache Spark
  80. Apache Spark architecture
  81. Movie recommendations with Spark, matrix factorization, and ALS
  82. Recommendations from 20 million ratings with Spark
  83. Amazon DSSTNE
  84. DSSTNE in action
  85. Scaling up DSSTNE
  86. AWS SageMaker and factorization machines
  87. SageMaker in action: Factorization machines on one million ratings, in the cloud
  88. The cold start problem (and solutions)
  89. Implement random exploration
  90. Exercise solution: Random exploration
  91. Stoplists
  92. Implement a stoplist
  93. Exercise solution: Implement a stoplist
  94. Filter bubbles, trust, and outliers
  95. Identify and eliminate outlier users
  96. Exercise solution: Outlier removal
  97. Fraud, the perils of clickstream, and international concerns
  98. Temporal effects and value-aware recommendations
  99. Case study: YouTube, part 1
  100. Case study: YouTube, part 2
  101. Case study: Netflix, part 1
  102. Case study: Netflix, part 2
  103. Hybrid recommenders and exercise
  104. Exercise solution: Hybrid recommenders
  105. More to explore
