Learning Deep Architectures for AI

Paperback | Print on Demand | Delivery time: available within 3-5 business days

Previous price: 106,30 €
Now 106,29 €*

All prices incl. VAT | Free shipping
Article no.: 9781601982940
Published: 2009
Binding: Paperback
Publication date: 28.10.2009
Pages: 144
Author: Yoshua Bengio
Weight: 231 g
Format: 234x156x9 mm
Language: English

Description:

Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, and machine learning experiments all suggest that learning the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks) requires deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as neural nets with many hidden layers, graphical models with many levels of latent variables, or complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered since 2006, and a new sub-area of the machine learning community has emerged around them. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state of the art in certain areas.
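The sketch below is not from the monograph; it is a minimal NumPy illustration of the two ideas in the paragraph above: a deep architecture as a stack of non-linear layers, and greedy layer-wise unsupervised pre-training, here with plain single-layer auto-encoders standing in for RBMs. The layer sizes, the tanh non-linearity, the learning rate, and the epoch count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden, lr=0.01, epochs=50):
    """Train one tanh auto-encoder layer on X; return its encoder parameters."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)
    W_out = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b_out = np.zeros(n_in)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)            # encode
        X_hat = H @ W_out + b_out         # decode (linear output)
        err = X_hat - X                   # reconstruction error
        # Backpropagate the mean squared reconstruction loss.
        grad_W_out = H.T @ err / len(X)
        grad_b_out = err.mean(axis=0)
        dH = (err @ W_out.T) * (1 - H**2)  # tanh derivative
        grad_W = X.T @ dH / len(X)
        grad_b = dH.mean(axis=0)
        W_out -= lr * grad_W_out; b_out -= lr * grad_b_out
        W -= lr * grad_W; b -= lr * grad_b
    return W, b

# Greedy layer-wise pre-training: each layer is trained on the
# representation produced by the layers below it.
X = rng.normal(size=(256, 32))            # toy unlabeled data
layers, H = [], X
for n_hidden in (16, 8, 4):               # three levels of abstraction
    W, b = train_autoencoder(H, n_hidden)
    layers.append((W, b))
    H = np.tanh(H @ W + b)                # feed codes to the next layer

print("top-level representation shape:", H.shape)  # (256, 4)
```

After this pre-training pass, the stacked encoder weights would typically be fine-tuned jointly (e.g., with a supervised objective), which is the setting the monograph analyzes.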
Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results obtained with different learning algorithms for deep architectures, it proposes and discusses explanations for their success, highlights remaining challenges, and suggests avenues for future exploration in this area.
Contents:
1: Introduction
2: Theoretical Advantages of Deep Architectures
3: Local vs Non-Local Generalization
4: Neural Networks for Deep Architectures
5: Energy-Based Models and Boltzmann Machines
6: Greedy Layer-Wise Training of Deep Architectures
7: Variants of RBMs and Auto-Encoders
8: Stochastic Variational Bounds for Joint Optimization of DBN Layers
9: Looking Forward
10: Conclusion
Acknowledgements. References.
