
1. **Weight Update Rule:** the weight is updated from its previous value plus the change in weight:
   $$v_i(t) = v_i(t-1) + \Delta v_i(t)$$
   - $v_i(t)$ is the weight of the connection from input unit $i$ to the neural network at time $t$.
   - $t$ is the current iteration or time step.
   - $\Delta v_i(t)$ is the change in the weight at time $t$.

2. **Change in Weight** ($\Delta v_i(t)$):
   $$\Delta v_i(t) = \eta \left(-\frac{\partial \mathcal{E}}{\partial v_i}\right)$$
   - $\eta$ is the learning rate, controlling the size of the weight updates.
   - $-\frac{\partial \mathcal{E}}{\partial v_i}$ is the negative gradient of the error $\mathcal{E}$ with respect to the weight $v_i$; it gives the direction and magnitude of the change needed to minimize the error.

3. **Gradient Calculation:**
   $$\frac{\partial \mathcal{E}}{\partial v_i} = -2\left(t_p - o_p\right) \frac{\partial f}{\partial \mathrm{net}_p} z_{i,p}$$
   - $\frac{\partial \mathcal{E}}{\partial v_i}$ is the gradient of the error with respect to the weight $v_i$.
   - $t_p$ is the target output and $o_p$ is the actual output for pattern $p$; $\mathcal{E}$ is the error.
   - $\frac{\partial f}{\partial \mathrm{net}_p}$ is the derivative of the activation function $f$ with respect to the net input to the output unit.
   - $z_{i,p}$ is the input from unit $i$ for pattern $p$.
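The three steps above can be sketched as a single gradient-descent update for one output unit. This is a minimal illustration, assuming a sigmoid activation for $f$; the function and variable names (`update_weights`, `eta`, etc.) are illustrative, not from the original notes.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def sigmoid_deriv(net):
    # f'(net) for the sigmoid: f(net) * (1 - f(net))
    s = sigmoid(net)
    return s * (1.0 - s)

def update_weights(v, z, t_p, eta=0.1):
    """One gradient-descent step for a single output unit.

    v   : weight vector v_i at time t-1
    z   : input vector z_{i,p} for pattern p
    t_p : target output for pattern p
    eta : learning rate
    """
    net_p = np.dot(v, z)       # net input to the output unit
    o_p = sigmoid(net_p)       # actual output o_p = f(net_p)
    # Step 3: dE/dv_i = -2 (t_p - o_p) * f'(net_p) * z_{i,p}
    grad = -2.0 * (t_p - o_p) * sigmoid_deriv(net_p) * z
    # Steps 1-2: v_i(t) = v_i(t-1) + eta * (-dE/dv_i)
    return v + eta * (-grad)
```

Repeatedly applying `update_weights` to the same pattern should reduce the squared error $(t_p - o_p)^2$, since each step moves the weights along the negative gradient.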
