Theoretical Advances in Neural Computation and Learning

The item will be made available for download at the end of the ordering process (PDF).
Available immediately | Delivery time: immediately available

Our previous price: €160.71

Now €160.70* (PDF)

Item no.: 9781461526964
Published: 2012
Binding: PDF
Pages: 468
Author: Alon Orlitsky
eBook type: PDF
eBook format: PDF
Copy protection: Adobe DRM [Hard-DRM]
Language: English
Description:

For any research field to have a lasting impact, there must be a firm theoretical foundation. Neural networks research is no exception. Some of the foundational concepts, established several decades ago, led to the early promise of developing machines exhibiting intelligence. The motivation for studying such machines comes from the fact that the brain is far more efficient at visual processing and speech recognition than existing computers. Undoubtedly, neurobiological systems employ very different computational principles. The study of artificial neural networks aims at understanding these computational principles and applying them to the solution of engineering problems. Due to recent advances in both device technology and computational science, we are currently witnessing an explosive growth in the study of neural networks and their applications.

It may take many years before we have a complete understanding of the mechanisms of neural systems. Before this ultimate goal can be achieved, answers are needed to important fundamental questions such as (a) what can neural networks do that traditional computing techniques cannot, (b) how does the complexity of the network for an application relate to the complexity of that problem, and (c) how much training data are required for the resulting network to learn properly? Everyone working in the field has attempted to answer these questions, but general solutions remain elusive. However, encouraging progress in studying specific neural models has been made by researchers from various disciplines.
