view tvii/logistic_regression.py @ 31:fa7a51df0d90

[logistic regression] test gradient descent

author    Jeff Hammel <k0scist@gmail.com>
date      Mon, 04 Sep 2017 12:37:45 -0700
parents   ae0c345ea09d
children  0f29b02f4806
""" z = w'x + b a = sigmoid(z) L(a,y) = -(y*log(a) + (1-y)*log(1-a)) [| | | ] X = [x1 x2 x3] [| | | ] [z1 z2 z3 .. zm] = w'*X + [b b b b ] = [w'*x1+b + w'*x2+b ...] """ import numpy as np from .sigmoid import sigmoid def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation: Forward Propagation: - You get X - You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$ - You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$ Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$ $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. np.log(), np.dot() """ # FORWARD PROPAGATION (FROM X TO COST) cost = cost_function(w, b, X, Y) # compute cost # BACKWARD PROPAGATION (TO FIND GRADIENT) m = X.shape[1] A = sigmoid(np.dot(w.T, X) + b) # compute activation dw = (1./m)*np.dot(X, (A - Y).T) db = (1./m)*np.sum(A - Y) # sanity check assert(A.shape[1] == m) assert(dw.shape == w.shape), "dw.shape is {}; w.shape is {}".format(dw.shape, w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) # return gradients grads = {"dw": dw, "db": db} return grads, cost def cost_function(w, b, X, Y): """ Cost function for binary classification yhat = sigmoid(W.T*x + b) interpret yhat thhe probably that y=1 Loss function: y log(yhat) + (1 - y) log(1 - yhat) """ m = X.shape[1] A = sigmoid(np.dot(w.T, X) + b) cost = np.sum(Y*np.log(A) + (1 - Y)*np.log(1 - A)) return (-1./m)*cost def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. 
""" costs = [] for i in range(num_iterations): # Cost and gradient calculation grads, cost = propagate(w, b, X, Y) # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # gradient descent w = w - learning_rate*dw b = b - learning_rate*db # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and not (i % 100): print ("Cost after iteration %i: %f" %(i, cost)) # package data for return params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs