Comment (nocotan): Implemented it: https://github.com/nocotan/cocob_backprop
In one sentence
Proposes an optimization method that requires no learning rate. The proposed method extends [Orabona and Pál, 2016], which casts the optimization problem as a coin-betting gamble.
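As background, the coin-betting reduction from [Orabona and Pál, 2016] can be sketched as follows (a rough sketch for context; the notation is assumed, not taken from this summary). A gambler with wealth W_{t-1} bets a signed amount x_t on a coin whose outcome is the negative subgradient, and the bet itself is used as the iterate:

```latex
c_t = -g_t, \qquad x_t = \beta_t \, W_{t-1}, \qquad W_t = W_{t-1} + c_t x_t, \qquad w_t = x_t
```

A betting fraction \beta_t that guarantees fast wealth growth translates into a convergence guarantee for the original optimization problem, with no step size appearing anywhere.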
Paper link
NeurIPS2017
https://proceedings.neurips.cc/paper/2017/hash/7c82fab8c8f89124e2ce92984e04fb40-Abstract.html
Authors / Affiliations
Francesco Orabona (Stony Brook University), Tatiana Tommasi (Rome University)
Submission date (yyyy/MM/dd)
2017/12/04
Overview
Novelty / Differences
Method
For example, this coincides with AdaGrad when the optimal initial learning rate is chosen.
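The per-coordinate update can be sketched as follows (a minimal NumPy sketch loosely following the paper's COCOB-Backprop algorithm; the variable names, state layout, and the alpha=100 truncation are assumptions, not taken from this summary):

```python
import numpy as np

def cocob_backprop_step(w, grad, state, alpha=100.0, eps=1e-8):
    """One COCOB-Backprop-style update (sketch, not the authors' code).

    state holds per-coordinate quantities:
      w1     -- initial weights
      L      -- max observed |gradient|
      G      -- running sum of |gradient|
      reward -- accumulated "winnings" of the coin-betting game
      theta  -- running sum of negative gradients
    """
    g = -grad  # the "coin outcome" is the negative gradient
    state["L"] = np.maximum(state["L"], np.abs(g))
    state["G"] += np.abs(g)
    # wealth can never go below the initial endowment
    state["reward"] = np.maximum(state["reward"] + (w - state["w1"]) * g, 0.0)
    state["theta"] += g
    # betting fraction: note that no learning rate appears anywhere
    beta = state["theta"] / (
        state["L"] * np.maximum(state["G"] + state["L"], alpha * state["L"]) + eps
    )
    return state["w1"] + beta * (state["L"] + state["reward"])
```

On a simple quadratic such as f(w) = 0.5 * (w - 1)^2, iterating this update from w = 0 drives w toward the minimizer without any step-size tuning.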
Results
Comments