
Except I think there’s a convincing argument to be made that engagement will go down over time if the algorithm makes no attempt to prioritize or suggest novel content.

On the rare occasions I discover a new channel, it’s almost always from some source other than the algorithm: a referral from a friend, this site, another YouTuber, etc. My viewership of the same repetitive roster of videos absolutely tails off until I find something new elsewhere.

For example, in months of being subscribed to my mechanics [0] (who does incredibly engrossing and relaxing restorations of mechanical stuff), not once was I suggested a video from Baumgartner Restoration [1], an art conservator who produces videos with a similar attention to detail and high production value.

Thematically this should be an easy recommendation for YouTube to make, but evidently the content is just different enough that it scores as a false-negative. After finding the latter channel independently, my viewing time absolutely rose for a while.

In theory, YouTube ought to be able to detect and learn from this signal of non-algorithmic discovery of new content. Yet, here we are.

[0]: https://www.youtube.com/c/mymechanics

[1]: https://www.youtube.com/c/BaumgartnerRestoration
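One way to picture the false negative above: if a recommender compares channels by some similarity score over engagement features, two channels that share an audience can still land just under the cutoff. A toy cosine-similarity sketch, with entirely invented feature vectors and threshold (nothing here reflects YouTube's actual system):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical feature axes: (machining, art, relaxation, production value).
# The two channels overlap strongly on the last two axes but not the first two.
my_mechanics = [0.9, 0.1, 0.8, 0.9]
baumgartner  = [0.1, 0.9, 0.8, 0.9]

sim = cosine(my_mechanics, baumgartner)
# With a similarity cutoff of, say, 0.8, this pair is never recommended
# (sim comes out around 0.72), even though the audiences overlap.
```

The "just different enough" failure mode falls out naturally: the channels agree on the qualities viewers actually care about, but disagree on the surface topic, and a hard threshold can't tell those apart.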



Stagnation is a known problem in reinforcement learning and similar methods: it's very easy to get stuck at a local maximum. My favorite fun example is https://gym.openai.com/envs/BipedalWalkerHardcore-v2/, where a standard DDPG agent (https://arxiv.org/abs/1509.02971) gets stuck at pits in the environment. Although it could earn a higher score if it learned to jump, the penalty for falling in makes it stabilize on standing still and running out the timer. Video: https://www.youtube.com/watch?v=DEGwhjEUFoI

I suspect something similar is going on with video/music recommendations. When a bad novel suggestion is made, the penalty (the user immediately clicking off) is likely too high for traditional reinforcement methods to overcome.
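The pit dynamic can be shown with a toy two-armed bandit (all numbers made up, and nothing like the actual BipedalWalker or DDPG setup): the "jump" arm costs -20 the first time (falling into the pit) but pays +5 every attempt after that, while "stand" always pays a safe +1. A purely greedy learner takes one penalty and never tries jumping again:

```python
import random

def run_bandit(steps=100, epsilon=0.0, seed=42):
    """Sample-average Q-learning on a toy bandit. 'jump' pays -20 on the
    first pull (the pit) and +5 on every later pull; 'stand' always pays +1."""
    rng = random.Random(seed)
    q = {"jump": 0.0, "stand": 0.0}  # value estimates
    n = {"jump": 0, "stand": 0}      # pull counts
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: pick an arm at random
            action = "jump" if rng.random() < 0.5 else "stand"
        else:
            # exploit: greedy, ties broken by dict order ("jump" first)
            action = max(q, key=q.get)
        reward = (-20.0 if n["jump"] == 0 else 5.0) if action == "jump" else 1.0
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]  # running average
        total += reward
    return total, n

greedy_total, greedy_n = run_bandit(epsilon=0.0)    # jumps once, then never again
explore_total, explore_n = run_bandit(epsilon=0.2)  # keeps sampling the risky arm
```

The greedy run pulls "jump" exactly once, eats the -20, and settles on "stand" for the remaining steps, even though jumping is the better arm in the long run. Only forced exploration revisits it often enough for the estimate to recover, which is roughly the "one bad novel suggestion poisons the arm" dynamic.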


I agree with you. I have the same feeling about Spotify: its algorithm just doesn't work for me, so I have to look elsewhere for recommendations.



