
Analogue Imprecision in MLP Training, Progress in Neural Processing, Vol. 4

Part of the Progress in Neural Processing series

Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms.

This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance.

The aim of the book is to show how incorporating an imprecision model into the learning scheme, as a "fault tolerance hint", aids understanding of the accuracy and precision requirements of a particular implementation.

In addition, the study shows how such a scheme can give rise to significant performance enhancement.
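The core technique the blurb describes, training with synaptic weight noise so that the learned weights tolerate analogue imprecision, can be illustrated with a minimal sketch. The additive zero-mean Gaussian noise model, the toy XOR task, the network size, and all hyperparameters below are illustrative assumptions rather than the book's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, x, noise_std=0.0):
    """Forward pass through a small two-layer MLP.

    When noise_std > 0, each weight is perturbed by zero-mean Gaussian
    noise, a simple stand-in for analogue synaptic imprecision. The
    stored weights themselves are not modified.
    """
    if noise_std > 0.0:
        W1 = W1 + rng.normal(0.0, noise_std, W1.shape)
        W2 = W2 + rng.normal(0.0, noise_std, W2.shape)
    h = np.tanh(x @ W1)   # hidden layer
    y = np.tanh(h @ W2)   # output layer
    return h, y

# Toy problem: XOR with +/-1 targets (illustrative, not from the book)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)

W1 = rng.normal(0.0, 0.5, (2, 4))
W2 = rng.normal(0.0, 0.5, (4, 1))
lr = 0.1
noise_std = 0.05  # assumed noise level; plays the "fault tolerance hint" role

for epoch in range(2000):
    # Noise is injected during training so the solution must tolerate it.
    h, y = forward(W1, W2, X, noise_std)
    e = y - T
    # Backprop through the tanh nonlinearities (squared-error loss).
    dy = e * (1 - y ** 2)
    dh = (dy @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ dy)
    W1 -= lr * (X.T @ dh)

# Noise-free evaluation of the trained network.
_, y_clean = forward(W1, W2, X)
print(np.round(y_clean, 2))
```

Re-evaluating the trained network under fresh weight perturbations, rather than noise-free, would be the natural test of the fault tolerance the scheme is meant to provide.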

Product Details
Price: £71.00
ISBN: 9810227396 / 9789810227395
Format: Hardback
Dewey classification: 006.3
Publication date: 01/08/1996
Country of publication: Singapore
Extent: 192 pages
Readership: Professional & Vocational