Deep Neural Networks in a Mathematical Framework


Article no.: 9783319753041
Published: 2018
Binding: eBook
Pages: 84
Author: Anthony L. Caterini
Series: SpringerBriefs in Computer Science
eBook type: PDF
eBook format: Reflowable eBook
Copy protection: Digital watermark (social DRM)
Language: English
Description:

This SpringerBrief describes how to build a rigorous end-to-end mathematical framework for deep neural networks. The authors provide tools to represent and describe neural networks, casting previous results in the field in a more natural light. In particular, the authors derive gradient descent algorithms in a unified way for several neural network structures, including multilayer perceptrons, convolutional neural networks, deep autoencoders and recurrent neural networks. Furthermore, the authors' framework is both more concise and more mathematically intuitive than previous representations of neural networks.

This SpringerBrief is one step towards unlocking the black box of deep learning. The authors believe that this framework will help catalyze further discoveries regarding the mathematical properties of neural networks. This SpringerBrief is accessible not only to researchers, professionals and students working and studying in the field of deep learning, but also to those outside of the neural network community.
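The description above mentions deriving gradient descent algorithms for structures such as multilayer perceptrons. As a rough, generic illustration of that kind of computation (not taken from the book, and not the authors' framework), the following NumPy sketch trains a one-hidden-layer perceptron with hand-written backpropagation and plain gradient descent; the network shape, learning rate and toy data are assumptions made purely for the example.

```python
# Minimal sketch: gradient descent for a one-hidden-layer MLP on a toy
# regression task. Illustrative only; not the framework from the book.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) sampled on [-2, 2]
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(X)

# Parameters of a 1-16-1 network
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass: affine -> tanh -> affine
    h_pre = X @ W1 + b1
    h = np.tanh(h_pre)
    y_hat = h @ W2 + b2

    # Mean squared error loss
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: chain rule applied layer by layer
    g_yhat = 2 * err / len(X)
    g_W2 = h.T @ g_yhat
    g_b2 = g_yhat.sum(axis=0)
    g_h = g_yhat @ W2.T
    g_pre = g_h * (1 - np.tanh(h_pre) ** 2)
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Gradient descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final training loss: {loss:.4f}")
```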
