comparison tvii/logistic_regression.py @ 55:0908b6cd3217
[regression] add better cost function for sigmoids

author    Jeff Hammel <k0scist@gmail.com>
date      Sun, 24 Sep 2017 15:30:15 -0700
parents   0807ac8992ba
children
54:0807ac8992ba | 55:0908b6cd3217 |
---|---|
96     return (-1./m)*cost | 96     return (-1./m)*cost |
97 | 97 |
98 def compute_costs(Yhat, Y): | 98 def compute_costs(Yhat, Y): |
99     """ | 99     """ |
100     Computes the cross-entropy cost given: | 100     Computes the cross-entropy cost given: |
101 """ | 101 |
102 raise NotImplementedError('TODO') | 102 J = -(1/m)*sum_{i=0..m} (y(i)log(yhat(i)) + (1 - y(i))log(1 - yhat(i))) |
103 | |
104 Yhat -- The sigmoid output of the network | |
105 Y -- "true" label vector | |
106 """ | |
107 | |
108 # compute the cross-entropy cost | |
109 logprops = np.multiply(np.log(Yhat, Y)) + np.multiply(np.log(1-Yhat), (1-Y)) | |
110 cost = - np.sum(logprobs)/m | |
111 | |
112 cost = np.squeeze(cost) # make sure cost is the dimension we expect | |
113 assert (isinstance(cost, float)) | |
114 return cost | |
103 | 115 |
104 | 116 |
105 def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): | 117 def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): |
106 """ | 118 """ |
107 This function optimizes w and b by running a gradient descent algorithm | 119 This function optimizes w and b by running a gradient descent algorithm |
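
For a quick sanity check outside the repository, here is a minimal, self-contained sketch of the new cost function as it appears in the right-hand column above; the toy `Y`/`Yhat` arrays are illustrative values, not taken from the repository.

```python
import numpy as np

def compute_costs(Yhat, Y):
    """Cross-entropy cost: J = -(1/m) * sum(y*log(yhat) + (1-y)*log(1-yhat))."""
    m = Y.shape[1]  # number of examples
    logprobs = np.multiply(np.log(Yhat), Y) + np.multiply(np.log(1 - Yhat), 1 - Y)
    return float(np.squeeze(-np.sum(logprobs) / m))

# near-perfect predictions should give a cost near zero
Y = np.array([[1, 0, 1]])
Yhat = np.array([[0.99, 0.01, 0.99]])
print(compute_costs(Yhat, Y))  # ~0.01
```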
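
The comparison cuts off at the top of `optimize`. As context for the docstring's claim that it runs gradient descent, here is a hedged sketch of a typical gradient-descent loop for logistic regression with this signature; the shapes (`w`: (n, 1), `X`: (n, m), `Y`: (1, m)), the `sigmoid` helper, and the return values are assumptions, not taken from this changeset.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    """Optimize w and b by gradient descent on the cross-entropy cost."""
    m = X.shape[1]
    costs = []
    for i in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)  # forward pass: predicted probabilities
        cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
        dw = np.dot(X, (A - Y).T) / m    # gradient of the cost w.r.t. w
        db = np.sum(A - Y) / m           # gradient of the cost w.r.t. b
        w = w - learning_rate * dw       # gradient descent update
        b = b - learning_rate * db
        if print_cost and i % 100 == 0:
            costs.append(cost)
            print("cost after iteration %i: %f" % (i, cost))
    return w, b, costs
```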