Back-propagation (delta rule)

Posted 2017.12.24 21:25

Back-propagation in a neural network (a.k.a. the delta rule)



The main intuition is to define a delta

$$ \delta_k^l \equiv \frac{\partial C}{\partial z_k^l} $$

and find a recursive backward rule

$$ \delta_k^l = \sum_m \delta_m^{l+1} W_{mk}^{l+1} \, \sigma'(z_k^l). $$
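This follows from the chain rule together with $z_m^{l+1} = \sum_k W_{mk}^{l+1} \, \sigma(z_k^l) + b_m^{l+1}$:

$$ \delta_k^l = \frac{\partial C}{\partial z_k^l} = \sum_m \frac{\partial C}{\partial z_m^{l+1}} \frac{\partial z_m^{l+1}}{\partial z_k^l} = \sum_m \delta_m^{l+1} W_{mk}^{l+1} \, \sigma'(z_k^l). $$

The recursion starts at the output layer $L$ with $\delta_k^L = \frac{\partial C}{\partial a_k^L} \, \sigma'(z_k^L)$, where $a_k^L = \sigma(z_k^L)$.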


The gradients w.r.t. the learning parameters then follow directly from the deltas:

$$ \frac{\partial C}{\partial W_{km}^{l}} = \delta_k^l \, a_m^{l-1}, \qquad \frac{\partial C}{\partial b_k^{l}} = \delta_k^l. $$
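To make this concrete, here is a minimal NumPy sketch of one backward pass with the delta rule. The sigmoid activation, the quadratic cost $C = \frac{1}{2}\|a^L - t\|^2$, and the layer sizes are assumptions chosen for the example, not part of the derivation above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, t, Ws, bs):
    """One backward pass of the delta rule for a fully connected net.
    Assumes sigmoid activations and quadratic cost C = 0.5 * ||a^L - t||^2."""
    # Forward pass, caching pre-activations z^l and activations a^l.
    a, zs, acts = x, [], [x]
    for W, b in zip(Ws, bs):
        z = W @ a + b
        zs.append(z)
        a = sigmoid(z)
        acts.append(a)
    # Output-layer delta: delta^L = dC/da^L * sigma'(z^L) = (a^L - t) * sigma'(z^L).
    sp = sigmoid(zs[-1]) * (1.0 - sigmoid(zs[-1]))
    delta = (acts[-1] - t) * sp
    dWs, dbs = [None] * len(Ws), [None] * len(bs)
    for l in reversed(range(len(Ws))):
        dWs[l] = np.outer(delta, acts[l])  # dC/dW^l = delta^l (a^{l-1})^T
        dbs[l] = delta                     # dC/db^l = delta^l
        if l > 0:
            # Recursive backward rule: delta^l = (W^{l+1,T} delta^{l+1}) * sigma'(z^l).
            sp = sigmoid(zs[l - 1]) * (1.0 - sigmoid(zs[l - 1]))
            delta = (Ws[l].T @ delta) * sp
    return dWs, dbs

# Tiny usage example; the 3-4-2 architecture is arbitrary.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
dWs, dbs = backprop(rng.standard_normal(3), rng.standard_normal(2), Ws, bs)
```

Note that each $\delta^l$ is reused for both parameter gradients before being propagated one layer back, which is what makes back-propagation linear in the number of layers.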


Gradients of a convolution layer



The gradients of a convolution layer can themselves be computed by convolution: the gradient of the loss w.r.t. the layer's input is the (full) convolution of the gradient w.r.t. the output activations with the 180°-rotated convolution kernel.
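A minimal NumPy/SciPy sketch of this fact, assuming a 'valid' cross-correlation forward pass (the usual deep-learning convention); the array sizes and the dummy loss $L = \sum_{i,j} y_{ij}$ are made up for the example:

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

def rot180(k):
    """Rotate a 2-D kernel by 180 degrees."""
    return k[::-1, ::-1]

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))  # input feature map (size is arbitrary)
k = rng.standard_normal((3, 3))  # convolution kernel

# Forward pass: 'valid' cross-correlation.
y = correlate2d(x, k, mode='valid')

# Upstream gradient dL/dy for the dummy loss L = y.sum().
dy = np.ones_like(y)

# Gradient w.r.t. the input: full correlation of dL/dy with the
# 180-degree-rotated kernel, i.e. a full convolution with the kernel itself.
dx = correlate2d(dy, rot180(k), mode='full')
assert np.allclose(dx, convolve2d(dy, k, mode='full'))

# Gradient w.r.t. the kernel: valid correlation of the input with dL/dy.
dk = correlate2d(x, dy, mode='valid')

# Finite-difference check of one entry of dx.
eps = 1e-6
xp = x.copy()
xp[2, 2] += eps
num = (correlate2d(xp, k, mode='valid').sum() - y.sum()) / eps
assert np.isclose(dx[2, 2], num, atol=1e-4)
```

If the forward pass is instead a true convolution, the rotation moves to the other side: the input gradient becomes a full correlation of dL/dy with the un-rotated kernel.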

