Large Scale Ranking Using Stochastic Gradient Descent
Keywords: ranking, stochastic gradient descent, least-squares, gradient methods, stochastic optimization
Any ranking problem that minimizes a pairwise ranking loss can be represented as a system of linear equations. We solve this system iteratively with a fast variant of the gradient descent algorithm that uses a near-optimal learning rate and momentum factor. Tikhonov regularization is also integrated into this framework to avoid overfitting on very large, high-dimensional, but sparse data.
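The pipeline described above can be sketched in a few lines: pairwise preferences are turned into feature-difference rows of a linear system, Tikhonov (ridge) regularization is added, and the resulting quadratic objective is minimized by heavy-ball gradient descent with a step size and momentum factor chosen from the extreme eigenvalues of the Hessian. This is a minimal illustration under those assumptions; the function names, the data, and the specific pair construction are illustrative, not the authors' implementation.

```python
import numpy as np

def pairwise_system(X, pairs):
    """Build the linear system for a pairwise ranking loss.

    Each preference pair (i, j) meaning "item i ranks above item j"
    contributes one row X[i] - X[j] with target 1 (illustrative choice).
    """
    A = np.array([X[i] - X[j] for i, j in pairs])
    b = np.ones(len(pairs))
    return A, b

def solve_ridge_momentum(A, b, lam=0.1, iters=500):
    """Minimize ||Aw - b||^2 + lam * ||w||^2 by heavy-ball gradient descent.

    The step size and momentum factor are the classical near-optimal
    choices for a quadratic, computed from the smallest and largest
    eigenvalues of the Hessian H = 2 * (A^T A + lam * I).
    """
    d = A.shape[1]
    H = 2.0 * (A.T @ A + lam * np.eye(d))
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]
    lr = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2          # near-optimal step
    beta = ((np.sqrt(L) - np.sqrt(mu)) /
            (np.sqrt(L) + np.sqrt(mu))) ** 2            # momentum factor
    w = np.zeros(d)
    w_prev = np.zeros(d)
    for _ in range(iters):
        grad = 2.0 * (A.T @ (A @ w - b)) + 2.0 * lam * w
        w, w_prev = w - lr * grad + beta * (w - w_prev), w
    return w

# Toy usage: 5 items with 3 features, a few preference pairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
pairs = [(0, 1), (0, 2), (1, 3), (2, 4)]
A, b = pairwise_system(X, pairs)
w = solve_ridge_momentum(A, b, lam=0.1)
```

Scoring items by `X @ w` then induces a ranking; larger regularization `lam` trades fit for stability on sparse, high-dimensional data.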
Copyright (c) 2022 Proceedings of the Bulgarian Academy of Sciences
Copyright is subject to the protection of the Bulgarian Copyright and Associated Rights Act. The copyright holder of all articles on this site is Proceedings of the Bulgarian Academy of Sciences. If you want to reuse any part of the content, please, contact us.