
Ng Machine Learning Course Notes and Programming Practice Series - Part 2: Logistic Regression

Editor's note: This series systematically summarizes the key theoretical points of the Notes from Ng's machine learning course (http://cs229.stanford.edu/materials.html), and provides the code for all of the course exercises together with an analysis of the experimental results. "You learn to swim by swimming": the hope is that through this series one can understand machine learning algorithms deeply, write efficient, working machine learning code by hand, and apply it to real datasets, combining theory and practice.
Part 2 Logistic Regression

1 The hypothesis function of Logistic Regression
In Linear Regression, if we instead assume that the variable y to be predicted takes only a discrete set of values, we have a classification problem. If y can only take the values 0 or 1, it is a binary classification problem. We can still consider solving binary classification with a regression-style approach, but since we now know that y ∈ {0, 1} rather than ranging over all of R, we should change the form of the hypothesis function h_\theta(x). We can use the Logistic Function, which maps any real number into the interval (0, 1):
h_\theta(x) = g(\theta' x) = 1 / (1 + e^{-\theta' x}),    where  g(z) = 1 / (1 + e^{-z})
That is, we first take a linear combination of all features, \theta' * x = \theta_0 * x_0 + \theta_1 * x_1 + \theta_2 * x_2 + ..., and then plug the result into the Logistic Function (also called the sigmoid function), which maps it into (0, 1). The graph of the Logistic Function looks like this:
(Figure: the sigmoid curve g(z), increasing from 0 toward 1, with g(0) = 0.5.)
As z → +∞ the function value tends to 1; as z → -∞ it tends to 0. The new hypothesis function h_\theta(x) therefore always lies in (0, 1). As before, we add a feature x_0 = 1 so that everything can be written in vector form. The derivative of the Logistic Function can be expressed in terms of the function itself:
g'(z) = g(z) (1 - g(z))
This identity will be used again below when learning the parameters \theta.
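As a quick check, the identity follows directly from the definition of g(z):

g'(z) = d/dz [ 1 / (1 + e^{-z}) ] = e^{-z} / (1 + e^{-z})^2 = ( 1 / (1 + e^{-z}) ) * ( e^{-z} / (1 + e^{-z}) ) = g(z) ( 1 - g(z) ),

since 1 - g(z) = e^{-z} / (1 + e^{-z}).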

2 Learning the Logistic Regression parameters \theta by maximum likelihood and gradient ascent
Given the new hypothesis function h_\theta(x), how do we learn the parameters \theta from the training examples? We can take a probabilistic view and use maximum likelihood estimation (MLE) to fit the data (for Linear Regression, MLE was equivalent to minimizing the LMS cost function). We assume:
P(y = 1 | x; \theta) = h_\theta(x)
P(y = 0 | x; \theta) = 1 - h_\theta(x)
That is, h_\theta(x) is the probability that y = 1, and 1 - h_\theta(x) the probability that y = 0. These two assumptions can be written in the more compact form
p(y | x; \theta) = ( h_\theta(x) )^y ( 1 - h_\theta(x) )^{1 - y}
Assuming we observe m training examples, generated independently and identically distributed, we can write the likelihood function
L(\theta) = \prod_{i=1}^{m} p(y^{(i)} | x^{(i)}; \theta) = \prod_{i=1}^{m} ( h_\theta(x^{(i)}) )^{y^{(i)}} ( 1 - h_\theta(x^{(i)}) )^{1 - y^{(i)}}
Taking the logarithm gives the log-likelihood
l(\theta) = log L(\theta) = \sum_{i=1}^{m} [ y^{(i)} log h_\theta(x^{(i)}) + (1 - y^{(i)}) log( 1 - h_\theta(x^{(i)}) ) ]
We now maximize the log-likelihood over \theta. Viewed the other way around, the cost function is J(\theta) = -l(\theta), and we minimize it.
Just as we used gradient descent to learn the Linear Regression parameters, here we can use gradient ascent to maximize the log-likelihood. To derive the SGA (stochastic / incremental gradient ascent) update rule, suppose we have only a single training example (x, y); differentiating the log-likelihood gives
\partial l(\theta) / \partial \theta_j = ( y / g(\theta' x) - (1 - y) / (1 - g(\theta' x)) ) * \partial g(\theta' x) / \partial \theta_j = ( y - h_\theta(x) ) x_j
where we used the derivative property of the logistic function, g' = g(1 - g). This gives the parameter update rule
\theta_j := \theta_j + \alpha ( y^{(i)} - h_\theta(x^{(i)}) ) x_j^{(i)}
Note that we keep adding the increment, because this is gradient ascent; \alpha is the learning rate. The form is identical to the LMS update rule of Linear Regression, but the substance is different, because the assumed model function h_\theta(x) is different: in Linear Regression it is simply a linear combination of the features, while in Logistic Regression the linear combination is then passed through the Logistic Function into (0, 1), so h_\theta(x) is no longer a linear function. Both algorithms are in fact special cases of Generalized Linear Models.
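As a side note, a minimal sketch of this stochastic gradient ascent loop in Matlab/Octave could look like the following (the exercise code below uses fminunc instead; the learning rate alpha, the iteration count and the sigmoid helper implemented in Section 3.1 are assumptions here):

% Stochastic gradient ascent for logistic regression (illustrative sketch).
% X is an m x (n+1) matrix with an intercept column of ones; y is m x 1 in {0,1}.
alpha = 0.01;                 % learning rate (assumed value)
num_iters = 100;              % number of passes over the training set (assumed value)
theta = zeros(size(X, 2), 1);
for iter = 1:num_iters
    for i = 1:size(X, 1)
        h = sigmoid(X(i, :) * theta);                   % prediction for example i
        theta = theta + alpha * (y(i) - h) * X(i, :)';  % ascent step on the log-likelihood
    end
end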
Alternatively, Newton's method can be used to derive the parameter update rule. Newton's method finds a root of an equation f(\theta) = 0, i.e. the point where the graph of f(\theta) crosses the horizontal axis. Starting from some initial \theta and repeatedly applying the update below, \theta gets closer and closer to a true root of the equation; the update rule is
\theta := \theta - f(\theta) / f'(\theta)
This gives a numerical way to solve for the roots of an equation. For more on Newton's method see the Wikipedia article http://en.wikipedia.org/wiki/Newton%27s_method; the figure below, taken from Wikipedia, illustrates it very well.
(Figure: illustration of Newton's method iterations, from Wikipedia.)
Applied to learning the Logistic Regression parameters \theta, we want a root of the equation obtained by setting the first derivative of l(\theta) to zero, i.e. l'(\theta) = 0. By Newton's method, \theta should be updated according to
\theta := \theta - l'(\theta) / l''(\theta)
Since \theta is a vector (one component per feature), taking derivatives with respect to a vector turns the formula above into
\theta := \theta - H^{-1} \nabla_\theta l(\theta)
Here H is the Hessian matrix, the matrix of second partial derivatives; it is an n*n matrix (strictly (n+1)*(n+1) once the intercept term is included), and its (i, j) entry is
H_{ij} = \partial^2 l(\theta) / ( \partial \theta_i \partial \theta_j )
while \nabla_\theta l(\theta) is the vector of first partial derivatives of l(\theta) with respect to each \theta_j.
Newton's method typically converges faster than batch gradient descent, getting close to the cost-minimizing parameters in far fewer iterations. Each iteration is more expensive, however, because it requires inverting the n*n Hessian matrix; but as long as n is not too large, Newton's method is usually faster overall. When Newton's method is used to maximize the log likelihood of Logistic Regression, it is also called Fisher scoring.
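For reference, a minimal Matlab/Octave sketch of this Newton update for the logistic regression log-likelihood might look like the following (the exercises below rely on fminunc instead; the sigmoid helper from Section 3.1 is assumed):

% Newton's method for maximizing the logistic regression log-likelihood (sketch).
% X is an m x (n+1) matrix with an intercept column of ones; y is m x 1 in {0,1}.
theta = zeros(size(X, 2), 1);
for iter = 1:10                        % a handful of iterations is usually enough
    h = sigmoid(X * theta);            % m x 1 vector of predictions
    grad = X' * (y - h);               % gradient of the log-likelihood
    H = -X' * diag(h .* (1 - h)) * X;  % Hessian of the log-likelihood
    theta = theta - H \ grad;          % Newton step: theta := theta - H^{-1} * grad
end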

3 Programming practice
(Note: the programming exercises in this part all come from Andrew Ng's online machine learning course.)
3.1 Matlab implementation of Logistic Regression
Suppose we sit on a university admissions committee: given historical records of applicants' scores on two exams and whether they were admitted, we must make a binary classification decision for new applicants. Each student is therefore described by two features, the two exam scores. We train a Logistic Regression (decision boundary) model on the training samples and then classify new student test samples. The main script is as follows:
%% Initialization
clear ; close all; clc

%% Load Data
%  The first two columns contain the exam scores and the third column
%  contains the label.

data = load('ex2data1.txt');
X = data(:, [1, 2]); y = data(:, 3);

%% ==================== Part 1: Plotting ====================
%  We start the exercise by first plotting the data to understand
%  the problem we are working with.

fprintf(['Plotting data with + indicating (y = 1) examples and o ' ...
         'indicating (y = 0) examples.\n']);

plotData(X, y);

% Put some labels 
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')

% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============ Part 2: Compute Cost and Gradient ============
%  In this part of the exercise, you will implement the cost and gradient
%  for logistic regression. You need to complete the code in 
%  costFunction.m

%  Setup the data matrix appropriately, and add ones for the intercept term
[m, n] = size(X);

% Add intercept term to x and X_test
X = [ones(m, 1) X];

% Initialize fitting parameters
initial_theta = zeros(n + 1, 1);

% Compute and display initial cost and gradient
[cost, grad] = costFunction(initial_theta, X, y);

fprintf('Cost at initial theta (zeros): %f\n', cost);
fprintf('Gradient at initial theta (zeros): \n');
fprintf(' %f \n', grad);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============= Part 3: Optimizing using fminunc  =============
%  In this exercise, you will use a built-in function (fminunc) to find the
%  optimal parameters theta.

%  Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);

%  Run fminunc to obtain the optimal theta
%  This function will return theta and the cost 
[theta, cost] = ...
	fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);

% Print theta to screen
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('theta: \n');
fprintf(' %f \n', theta);

% Plot Boundary
plotDecisionBoundary(theta, X, y);

% Put some labels 
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')

% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============== Part 4: Predict and Accuracies ==============
%  After learning the parameters, you'll want to use them to predict the outcomes
%  on unseen data. In this part, you will use the logistic regression model
%  to predict the probability that a student with score 45 on exam 1 and 
%  score 85 on exam 2 will be admitted.
%
%  Furthermore, you will compute the training and test set accuracies of 
%  our model.
%
%  Your task is to complete the code in predict.m

%  Predict probability for a student with score 45 on exam 1 
%  and score 85 on exam 2 

prob = sigmoid([1 45 85] * theta);
fprintf(['For a student with scores 45 and 85, we predict an admission ' ...
         'probability of %f\n\n'], prob);

% Compute accuracy on our training set
p = predict(theta, X);

fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

First we can visualize the training set in the (x_1, x_2) feature plane, marking positive and negative examples with different symbols:
(Figure: scatter plot of the training data; admitted students are plotted with +, rejected students with o.)
The horizontal and vertical axes are the two exam scores, and the two kinds of points correspond to admitted and rejected students. The plotting code is:
function plotData(X, y)
%PLOTDATA Plots the data points X and y into a new figure 
%   PLOTDATA(x,y) plots the data points with + for the positive examples
%   and o for the negative examples. X is assumed to be a Mx2 matrix.

% Create New Figure
figure; hold on;
% ====================== YOUR CODE HERE ======================
% Instructions: Plot the positive and negative examples on a
%               2D plot, using the option 'k+' for the positive
%               examples and 'ko' for the negative examples.

% find all the indices of positive and negative training examples
pos = find(y == 1); neg = find(y == 0); 
plot(X(pos,1), X(pos,2), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(neg,1), X(neg,2), 'ko', 'LineWidth', 2, 'MarkerSize', 7, 'MarkerFaceColor', 'y');
% =========================================================================
hold off;

end

Next we implement the sigmoid function, used to map a linear (or nonlinear) combination of the features into (0, 1):
g(z) = 1 / (1 + e^{-z})
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
%   J = SIGMOID(z) computes the sigmoid of z.

% You need to return the following variables correctly 
g = zeros(size(z));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
%               vector or scalar).
g = 1.0 ./ (1.0 + exp(-z));
% =============================================================

end

In the sigmoid function, g(z) increases monotonically with z: the larger z is, the closer g(z) is to 1, and the smaller z is, the closer g(z) is to 0.
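A quick sanity check of the implementation on a few values:

sigmoid(0)          % 0.5 exactly
sigmoid(10)         % approx. 0.99995, close to 1
sigmoid(-10)        % approx. 4.5e-05, close to 0
sigmoid([-1 0 1])   % works elementwise on vectors and matrices as well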

Next we implement the cost function and the gradient of Logistic Regression, which are, respectively,
J(\theta) = -(1/m) \sum_{i=1}^{m} [ y^{(i)} log h_\theta(x^{(i)}) + (1 - y^{(i)}) log( 1 - h_\theta(x^{(i)}) ) ]

\partial J(\theta) / \partial \theta_j = (1/m) \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} ) x_j^{(i)}
The cost function can be read as the negative log likelihood of the training data, so minimizing it is equivalent to maximum likelihood estimation (MLE); the derivation of the gradient is given in Part 2 above. The gradient here is the batch gradient: every component of it, and hence every update of \theta_j, scans all m training examples. The implementation is:
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
%   parameter for logistic regression and the gradient of the cost
%   w.r.t. to the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
% define cost function and gradient 
% use the fminunc function to find the minimum value
 hx = sigmoid(X * theta);
 J = (1.0/m) * sum(-y .* log(hx) - (1.0 - y) .* log(1.0 - hx));
 grad = (1.0/m) .* X' * (hx - y);
 
% =============================================================

end
With the initial value (initial_theta = zeros(n + 1, 1)) we can evaluate the cost function and the gradient:
Cost at initial theta (zeros): 0.693147
Gradient at initial theta (zeros): 
 -0.100000 
 -12.009217 
 -11.262842 

Program paused. Press enter to continue.

Next we use Matlab's built-in fminunc function to find the minimum of the cost function and the corresponding parameters \theta. This is an unconstrained optimization problem, so fminunc applies and we do not have to implement gradient descent ourselves (easy enough to do, but the built-in function is more convenient). Checking the documentation, it supports the following inputs and outputs:
[x,fval] = fminunc(fun,x0,options)

Among the inputs, fun is the function that defines the cost to be optimized and, optionally, its gradient; x0 is the initial value of the variables; options holds the option settings. In this Logistic Regression implementation, the relevant lines are:
%  Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);

%  Run fminunc to obtain the optimal theta
%  This function will return theta and the cost 
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);

optimset sets two options here: whether a gradient is supplied and the maximum number of iterations. In the call to fminunc, the first argument @(t)(costFunction(t, X, y)) is an anonymous function handle; the body costFunction(t, X, y) refers to the function implemented in the corresponding .m file.
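To make the handle mechanism concrete: the anonymous function fixes X and y and leaves only t free, so fminunc sees a function of \theta alone. A small illustration (not part of the exercise script):

f = @(t) costFunction(t, X, y);   % capture X and y; f is a function of t only
[J0, g0] = f(initial_theta);      % identical to costFunction(initial_theta, X, y)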
The outputs are the optimal parameter values x and the minimum value of the cost function. The program output is:
Local minimum possible.

fminunc stopped because the final change in function value relative to 
its initial value is less than the default value of the function tolerance.

<stopping criteria details>

Cost at theta found by fminunc: 0.203506
theta: 
 -24.932905 
 0.204407 
 0.199617
We have thus found the optimal parameters \theta and the corresponding value of the cost function. Plotting the decision boundary given by the optimal parameters:
(Figure: the training data together with the learned linear decision boundary.)
We can now make predictions for new student test samples. We can also use the decision boundary to predict the admission result for every student in the training set and compare with the actual outcomes to compute the training accuracy. Given a student's two exam scores as features, the predicted probability of admission is g(\theta' * x); if it is at least 0.5 we predict admission, otherwise rejection. The implementation is:

function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic 
%regression parameters theta
%   p = PREDICT(theta, X) computes the predictions for X using a 
%   threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)

m = size(X, 1); % Number of training examples

% You need to return the following variables correctly
p = zeros(m, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters. 
%               You should set p to a vector of 0's and 1's
%

p = sigmoid(X* theta);
index_1 = find(p >= 0.5);
index_0 = find(p < 0.5);

p(index_1) = ones(size(index_1));
p(index_0) = zeros(size(index_0));

% =========================================================================
end
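As a side note, since comparisons in Matlab already return logical arrays, the thresholding above can be collapsed into a single vectorized line with the same behavior:

p = double(sigmoid(X * theta) >= 0.5);   % 1 where the predicted probability is >= 0.5, otherwise 0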
The program output is:
For a student with scores 45 and 85, we predict an admission probability of 0.774322

Train Accuracy: 89.000000
So for a student with scores 45 and 85, the predicted admission probability is 0.774322, and the trained model predicts the correct outcome for 89% of the training samples.

3.2 Matlab implementation of Regularized Logistic Regression
Now we switch to a dataset whose training samples are no longer linearly separable. Suppose a company's microchips go through two quality tests, and an inspector decides from the two test results whether a chip passes quality assurance. Given historical test results together with the pass/fail decisions, we must predict whether new test samples pass. Each chip is again described by two features, the results of the two tests. As before, we train a Logistic Regression model on the training data and use it to predict on the test samples. The main script is:
%% Initialization
clear ; close all; clc

%% Load Data
%  The first two columns contain the X values and the third column
%  contains the label (y).

data = load('ex2data2.txt');
X = data(:, [1, 2]); y = data(:, 3);

plotData(X, y);

% Put some labels 
hold on;

% Labels and Legend
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')

% Specified in plot order
legend('y = 1', 'y = 0')
hold off;

%% =========== Part 1: Regularized Logistic Regression ============
%  In this part, you are given a dataset with data points that are not
%  linearly separable. However, you would still like to use logistic 
%  regression to classify the data points. 
%
%  To do so, you introduce more features to use -- in particular, you add
%  polynomial features to our data matrix (similar to polynomial
%  regression).
%

% Add Polynomial Features

% Note that mapFeature also adds a column of ones for us, so the intercept
% term is handled
X = mapFeature(X(:,1), X(:,2));

% Initialize fitting parameters
initial_theta = zeros(size(X, 2), 1);

% Set regularization parameter lambda to 1
lambda = 1;

% Compute and display initial cost and gradient for regularized logistic
% regression
[cost, grad] = costFunctionReg(initial_theta, X, y, lambda);

fprintf('Cost at initial theta (zeros): %f\n', cost);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============= Part 2: Regularization and Accuracies =============
%  Optional Exercise:
%  In this part, you will get to try different values of lambda and 
%  see how regularization affects the decision boundary
%
%  Try the following values of lambda (0, 1, 10, 100).
%
%  How does the decision boundary change when you vary lambda? How does
%  the training set accuracy vary?
%

% Initialize fitting parameters
initial_theta = zeros(size(X, 2), 1);

% Set regularization parameter lambda (you should vary this)
lambda = 100;

% Set Options
options = optimset('GradObj', 'on', 'MaxIter', 400);

% Optimize
[theta, J, exit_flag] = ...
	fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);

% Plot Boundary
plotDecisionBoundary(theta, X, y);
hold on;
title(sprintf('lambda = %g', lambda))

% Labels and Legend
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')

legend('y = 1', 'y = 0', 'Decision boundary')
hold off;

% Compute accuracy on our training set
p = predict(theta, X);

fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
First, as before, we visualize the data:
(Figure: scatter plot of the microchip test data; the two classes are clearly not linearly separable.)
The two classes are no longer linearly separable: there is no straight line that separates the positive samples from the negative ones well. Applying Logistic Regression directly would give poor results, with a relatively large optimal cost. In this situation we increase the feature dimension, and with it the dimension of the parameter vector \theta; more parameters can describe richer structure in the samples. First we perform a feature mapping, generating from x1 and x2 all polynomial terms of degree up to 6, 28 terms in total (1 + 2 + 3 + ... + 7 = 28):
mapFeature(x) = [ 1, x_1, x_2, x_1^2, x_1 x_2, x_2^2, x_1^3, ..., x_1 x_2^5, x_2^6 ]'
The implementation is:
function out = mapFeature(X1, X2)
% MAPFEATURE Feature mapping function to polynomial features
%
%   MAPFEATURE(X1, X2) maps the two input features
%   to quadratic features used in the regularization exercise.
%
%   Returns a new feature array with more features, comprising of 
%   X1, X2, X1.^2, X2.^2, X1*X2, X1*X2.^2, etc..
%
%   Inputs X1, X2 must be the same size
%

degree = 6;
out = ones(size(X1(:,1)));
for i = 1:degree
    for j = 0:i
        out(:, end+1) = (X1.^(i-j)).*(X2.^j);
    end
end

end
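A quick check of the mapping's dimensionality (illustrative only; it assumes X still holds the raw two-column data, i.e. it is run before the X = mapFeature(...) assignment in the main script):

X_mapped = mapFeature(X(:, 1), X(:, 2));
size(X_mapped, 2)        % ans = 28: the constant term plus 27 polynomial terms of degree 1..6
mapFeature(0.25, 1.5)    % a single point maps to a 1 x 28 row vector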

Through this feature mapping, the original 2-dimensional features are mapped into a new 28-dimensional feature space, and \theta correspondingly has 28 components (including theta_0 for the constant term). A Logistic Regression classifier trained on this higher-dimensional feature vector can have a far more complex decision boundary, no longer a straight line. However, more parameters and a more complex decision boundary also make the model prone to over-fitting, i.e. poor generalization. We therefore add a regularization term to the cost function to counter over-fitting.
The cost function of Logistic Regression with a regularization term is defined as
J(\theta) = -(1/m) \sum_{i=1}^{m} [ y^{(i)} log h_\theta(x^{(i)}) + (1 - y^{(i)}) log( 1 - h_\theta(x^{(i)}) ) ] + (\lambda / (2m)) \sum_{j=1}^{n} \theta_j^2
We append one extra term to the original cost function: the sum of squares of the parameters \theta_j divided by 2m, with the parameter lambda controlling the weight of the regularization term. The larger lambda is, the heavier the regularization and the more the model tends to under-fit; the smaller it is, the lighter the regularization and the more the model tends to over-fit. Note that the parameter \theta_0 should not be regularized. The gradient of the cost function is an (n+1)-dimensional vector (including the \theta_0 component), whose components are
\partial J(\theta) / \partial \theta_0 = (1/m) \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} ) x_0^{(i)}                            for j = 0

\partial J(\theta) / \partial \theta_j = (1/m) \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} ) x_j^{(i)} + (\lambda/m) \theta_j      for j >= 1
Compared with the unregularized case, only the components with j >= 1 gain the extra term, which is the derivative of the regularization term with respect to \theta_j. The cost function and gradient of regularized Logistic Regression are therefore implemented as:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
%   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters. 

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
 hx = sigmoid(X * theta);
 J = (1.0/m) * sum(-y .* log(hx) - (1.0 - y) .* log(1.0 - hx)) + lambda / (2 * m) * norm(theta([2:end]))^2;
 
 reg = (lambda/m) .* theta;
 reg(1) = 0;
 grad = (1.0/m) .* X' * (hx - y) + reg;
% =============================================================

end

Here norm computes the norm of the last n components of the vector theta (excluding theta_0), which is used for the regularization term of the cost function. Minimizing the cost function is again done with fminunc, exactly as in Section 3.1. With lambda = 1, the resulting decision boundary is:
(Figure: decision boundary obtained with lambda = 1.)
The training accuracy is 83.050847. If we now make lambda small, say 0.0001, i.e. reduce the weight of the regularization term, the classifier gets almost all of the training data right but produces a very convoluted decision boundary, i.e. it overfits:
(Figure: decision boundary obtained with lambda = 0.0001; it hugs the training data and overfits.)
The training accuracy rises to 86.440678, but the model generalizes worse: for example, a chip with x = (0.25, 1.5) would be predicted to pass, which clearly contradicts the pattern shown by the training data. Conversely, if lambda is too large, say 100, the regularization term dominates and the model under-fits, as shown below; the training accuracy is only 61.016949.
(Figure: decision boundary obtained with lambda = 100; the model under-fits.)
Choosing an appropriate weight for the regularization term is therefore essential for obtaining a model that both fits the data and generalizes well.
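To see the effect directly, one can sweep over several lambda values, retrain, and compare training accuracies, for example with a sketch along these lines (the lambda values follow the exercise's suggestion to try 0, 1, 10 and 100):

% Compare the effect of different regularization weights (sketch).
for lambda = [0 1 10 100]
    initial_theta = zeros(size(X, 2), 1);
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    theta = fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
    p = predict(theta, X);
    fprintf('lambda = %g: train accuracy = %f\n', lambda, mean(double(p == y)) * 100);
end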