specific neuron on the layer $l$, $f(\cdot)$ is the neuron transfer function, $b_i^l$ is a bias value, and finally $p_i^l$ is the input potential. The input potential is the algebraic sum of the weighted output signals of the (connected) neurons of the previous layer $l-1$:

$$p_i^l = \sum_j w_{ij}^l \, x_j^{l-1} \qquad (7\text{-}2)$$

in which $w_{ij}^l$ is the weight of the axon linking the output signal $x_j^{l-1}$ (i.e. of neuron $j$ on the layer $l-1$) to our neuron $i$ on the layer $l$. These considerations extend to feed-forward networks with an arbitrary number of layers.

For modelling the neuron response, we will assume here that the nonlinear transfer function is implemented by a sigmoid, which suggests rewriting Eq. 7-1 as

$$x_i^l = \frac{1}{1 + \exp\!\left[-\alpha\left(b_i^l + p_i^l\right)\right]} \qquad (7\text{-}3)$$

where $\alpha$ is a constant and the firing of the output signal is limited to the range $[0, 1]$.

Similar to its biological model, the knowledge of a neural network is embedded in the values of its weights. By properly changing (or, better, adapting) the values of its weights, a network can be trained to solve specified tasks or problems. The learning rule is the algorithm used to update the values of the weights so as to converge (when possible) to a desired problem solution (when it exists). During an iterative learning process, a generalized learning rule can be
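To make Eqs. 7-2 and 7-3 concrete, the following is a minimal sketch of one feed-forward pass in Python/NumPy. The layer sizes, the example weights, and the helper names `sigmoid` and `layer_forward` are illustrative assumptions, not part of the original text.

```python
import numpy as np

def sigmoid(u, alpha=1.0):
    """Sigmoid transfer function of Eq. 7-3; alpha is the slope constant."""
    return 1.0 / (1.0 + np.exp(-alpha * u))

def layer_forward(x_prev, W, b, alpha=1.0):
    """Compute the outputs x_i^l of one layer.

    x_prev : outputs x_j^{l-1} of the previous layer, shape (n_prev,)
    W      : weights w_ij^l, shape (n, n_prev)
    b      : bias values b_i^l, shape (n,)
    """
    p = W @ x_prev                 # Eq. 7-2: input potential p_i^l
    return sigmoid(b + p, alpha)   # Eq. 7-3: firing limited to [0, 1]

# Hypothetical 3-2-1 network with made-up weights, for illustration only.
rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.1, 0.9])                 # input signals
W1, b1 = rng.normal(size=(2, 3)), np.zeros(2)  # hidden layer l = 1
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)  # output layer l = 2
x2 = layer_forward(layer_forward(x0, W1, b1), W2, b2)
print(x2)  # every output lies strictly within (0, 1)
```

The matrix product `W @ x_prev` evaluates the sum over $j$ in Eq. 7-2 for all neurons $i$ of the layer at once, which is the natural vectorized form of the per-neuron definition.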