Book Synopsis
Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks and learning machines from an engineering perspective. It provides extensive, state-of-the-art coverage that exposes readers to the myriad facets of neural networks and helps them appreciate the technology's origins, capabilities, and potential applications. KEY TOPICS: Examines all the important aspects of this emerging technology, covering the learning process, backpropagation, radial-basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Integrates computer experiments throughout to demonstrate how neural networks are designed and how they perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary reinforce concepts throughout. New chapters cover support vector machines, reinforcement learning/neurodynamic programming, Rosenblatt's perceptron, the least-mean-square (LMS) algorithm, regularization theory, kernel methods and radial-basis function (RBF) networks, and Bayesian filtering for state estimation of dynamic systems. An entire chapter of case studies illustrates real-life, practical applications of neural networks. A highly detailed bibliography is included for easy reference. MARKET: For professional engineers and research scientists. MATLAB code for the computer experiments in the text is available for download at http://www.pearsonhighered.com/haykin/
From the Back Cover
Neural Networks and Learning Machines, Third Edition
Simon Haykin, McMaster University, Canada

This third edition of a classic book presents a comprehensive treatment of neural networks and learning machines, two closely related pillars. The book has been revised extensively to provide an up-to-date treatment of a subject that is continually growing in importance. Distinctive features of the book include:

- On-line learning algorithms rooted in stochastic gradient descent; small-scale and large-scale learning problems.
- Kernel methods, including support vector machines and the representer theorem.
- Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and the information bottleneck.
- Stochastic dynamic programming, including approximate and neurodynamic procedures.
- Sequential state-estimation algorithms, including Kalman and particle filters.
- Recurrent neural networks trained using sequential state-estimation algorithms.
- Insightful computer-oriented experiments.

Just as importantly, the book is written in the readable style that is Simon Haykin's hallmark.
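Of the topics listed above, the least-mean-square (LMS) algorithm is perhaps the simplest instance of on-line learning by stochastic gradient descent. The following is a minimal illustrative sketch, not the book's own MATLAB code; the step size, weight dimension, and synthetic data here are assumptions chosen only to show the update rule w ← w + η·e·x with instantaneous error e = d − wᵀx:

```python
import numpy as np

def lms_update(w, x, d, eta=0.01):
    """One LMS step: w <- w + eta * e * x, where e = d - w.x
    is the instantaneous estimation error."""
    e = d - np.dot(w, x)
    return w + eta * e * x, e

# Illustrative use: adapt a 2-tap linear filter toward a known
# (hypothetical) target weight vector using noiseless samples.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])   # assumed "unknown system" weights
w = np.zeros(2)
for _ in range(2000):
    x = rng.standard_normal(2)   # random input vector
    d = np.dot(w_true, x)        # desired response
    w, _ = lms_update(w, x, d, eta=0.05)
print(np.round(w, 2))            # converges toward w_true
```

With a small enough step size η, the weight vector drifts toward the least-squares solution; this stochastic, sample-by-sample character is what distinguishes on-line learning from batch gradient descent.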