The Problem of Optimizing Computing Resources in Neural Network Architectures
Keywords:
neural network, system architecture, machine learning, artificial intelligence, genetic algorithms
Abstract
In this article the authors consider the problem of the growing computing resources required to train modern neural network models. To reduce this cost, they propose investigating methods for constructing non-fully-connected architectures that avoid training parameters which would go unused afterwards. The feasibility of this approach is demonstrated by two independent studies in the field.
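The weight-agnostic approach cited below (Gaier and Ha) evaluates a fixed, randomly wired topology with a single shared weight instead of training per-connection parameters. The following Python sketch illustrates that idea under stated assumptions: the function names, the layered random topology, and the evaluation loop are illustrative inventions, not the cited authors' implementation.

```python
import random
import math

# Sketch of the weight-agnostic idea: performance is sought in the
# network's wiring, not in trained weights. A single shared weight
# value is broadcast to every connection, so no per-connection
# parameters are trained or stored.

def make_random_topology(n_in, n_hidden, n_out, density=0.5, seed=0):
    """Randomly wire input->hidden and hidden->output connections.

    Returns a list of (src, dst) node-index pairs. Node indices are
    assigned in order: inputs, then hidden nodes, then outputs.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    hidden = range(n_in, n_in + n_hidden)
    out = range(n_in + n_hidden, n_in + n_hidden + n_out)
    edges = []
    for s in range(n_in):
        for d in hidden:
            if rng.random() < density:
                edges.append((s, d))
    for s in hidden:
        for d in out:
            if rng.random() < density:
                edges.append((s, d))
    return edges

def forward(edges, inputs, shared_w, n_nodes, n_out):
    """Propagate activations through the fixed topology.

    Every edge carries the same shared weight; tanh is the (assumed)
    activation applied to each source node's value.
    """
    act = [0.0] * n_nodes
    act[: len(inputs)] = inputs
    # Process edges in destination order so earlier layers are
    # fully accumulated before later ones read from them.
    for s, d in sorted(edges, key=lambda e: e[1]):
        act[d] += shared_w * math.tanh(act[s])
    return act[-n_out:]

# The same topology can be scored across a range of shared-weight
# values; a genetic algorithm (as in the article's keywords) would
# then mutate the wiring, not the weights.
edges = make_random_topology(n_in=2, n_hidden=3, n_out=1, seed=1)
for w in (-2.0, -1.0, 1.0, 2.0):
    _ = forward(edges, [1.0, -1.0], w, n_nodes=6, n_out=1)
```

In a full search, candidate topologies would be ranked by their average score over many shared-weight values, rewarding architectures whose behavior does not depend on any particular trained parameter.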
References
Emma Strubell, Ananya Ganesh, Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP // arXiv.org, 2019. https://arxiv.org/abs/1906.02243
Adam Gaier, David Ha. Weight Agnostic Neural Networks // arXiv.org, 2019. https://arxiv.org/abs/1906.04358
Saining Xie, Alexander Kirillov, Ross Girshick, Kaiming He. Exploring Randomly Wired Neural Networks for Image Recognition // arXiv.org, 2019. https://arxiv.org/abs/1904.01569
License
The author transfers to the Central Research Institute of Russian Sign Language, for a period of 5 years, non-exclusive rights to use the article in any form and in any way specified in Article 1270 of the Civil Code of the Russian Federation. The transfer of rights occurs at the time of downloading any materials through the automated system on this site.