tvii/deep.py @ 80:3c7927f59b05
notes to self re deep neural networks
author: Jeff Hammel <k0scist@gmail.com>
date: Sun, 17 Dec 2017 13:43:42 -0800
"""
Deep neural networks

Forward propagation for layer `l`

Input: a[l-1]

Output: a[l], cache(z[l], {w[l], b[l]})

z[l] = w[l] a[l-1] + b[l]
a[l] = g[l](z[l])

---
Backward propagation for layer `l`:

Input: da[l]
Output: da[l-1], dW[l], db[l]

dz[l] = da[l] * g[l]'(z[l])
dw[l] = dz[l] a[l-1].T
db[l] = dz[l]
da[l-1] = w[l].T dz[l]

dz[l] = w[l+1].T dz[l+1] * g[l]'(z[l])

Vectorized over the m training examples:

dZ[l] = W[l+1].T dZ[l+1] * g[l]'(Z[l])

dW[l] = (1/m) dZ[l] A[l-1].T

db[l] = (1/m) np.sum(dZ[l], axis=1, keepdims=True)
dA[l-1] = W[l].T dZ[l]

For the final layer L:
da[L] = -(y/a) + (1 - y)/(1 - a)
dA[L] = [-(y(1)/a(1)) + (1 - y(1))/(1 - a(1)),  # first training example
         ...]
The weight matrix for layer `l`, W[l], is
of shape (n[l], n[l-1])
"""
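
The per-layer steps in the docstring above can be sketched in NumPy. This is a minimal illustration, not code from this module: the names `forward_step` and `backward_step` are hypothetical, and sigmoid is assumed for the activation g[l].

```python
import numpy as np


def sigmoid(Z):
    """Elementwise sigmoid activation g(Z)."""
    return 1.0 / (1.0 + np.exp(-Z))


def sigmoid_prime(Z):
    """Derivative g'(Z) of the sigmoid."""
    s = sigmoid(Z)
    return s * (1.0 - s)


def forward_step(A_prev, W, b):
    """Forward propagation for one layer.

    Input:  A_prev = A[l-1], shape (n[l-1], m)
    Output: A[l] and a cache of (A_prev, W, b, Z) for backprop.
    """
    Z = W @ A_prev + b   # Z[l] = W[l] A[l-1] + b[l]
    A = sigmoid(Z)       # A[l] = g[l](Z[l])
    return A, (A_prev, W, b, Z)


def backward_step(dA, cache):
    """Backward propagation for one layer.

    Input:  dA = dA[l] plus the cache from forward_step.
    Output: dA[l-1], dW[l], db[l].
    """
    A_prev, W, b, Z = cache
    m = A_prev.shape[1]
    dZ = dA * sigmoid_prime(Z)                          # dZ[l] = dA[l] * g[l]'(Z[l])
    dW = (1.0 / m) * dZ @ A_prev.T                      # dW[l] = (1/m) dZ[l] A[l-1].T
    db = (1.0 / m) * np.sum(dZ, axis=1, keepdims=True)  # db[l]
    dA_prev = W.T @ dZ                                  # dA[l-1] = W[l].T dZ[l]
    return dA_prev, dW, db


def final_layer_grad(AL, Y):
    """dA[L] = -(Y/AL) + (1 - Y)/(1 - AL), the cross-entropy gradient."""
    return -(Y / AL) + (1 - Y) / (1 - AL)
```

Note that `W` has shape `(n[l], n[l-1])`, matching the shape note at the end of the docstring, so `W @ A_prev` and `W.T @ dZ` line up without transposing the activations.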