
UFLDL Tutorial Exercise Answers (5): Self-Taught Learning

Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial

Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Self-Taught_Learning

Code

Step 1: Generate the input and test data sets (code already provided)

Digits 0-4 are used as the labeled data (labeledData); digits 5-9 as the unlabeled data (unlabeledData).

The labeled data is split into two parts: one becomes trainData, the other testData.

The unlabeled data is used as the input to train the sparse autoencoder.

The labeled trainData is then fed through the trained autoencoder to obtain the hidden-layer activations trainFeatures.

Likewise, the labeled testData is fed through the trained autoencoder to obtain testFeatures.

trainFeatures is used to train the softmax classifier.

Finally, testFeatures is fed to the trained softmax classifier to predict each example's class. A sketch of the data split follows.
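The split itself is part of the starter code supplied with the exercise; the following is a minimal sketch of how it is done there, assuming the UFLDL starter package's loadMNISTImages/loadMNISTLabels helpers and the standard MNIST file names:

% Sketch of the data split (assumes the UFLDL starter helpers
% loadMNISTImages.m / loadMNISTLabels.m and the standard MNIST files).
mnistData   = loadMNISTImages('mnist/train-images-idx3-ubyte');
mnistLabels = loadMNISTLabels('mnist/train-labels-idx1-ubyte');

labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);   % digits 0-4
unlabeledSet = find(mnistLabels >= 5);                      % digits 5-9

numTrain = round(numel(labeledSet) / 2);                    % half train, half test
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:end);

unlabeledData = mnistData(:, unlabeledSet);
trainData     = mnistData(:, trainSet);
trainLabels   = mnistLabels(trainSet)' + 1;                 % shift labels to 1-5
testData      = mnistData(:, testSet);
testLabels    = mnistLabels(testSet)' + 1;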

Step 2: Train the sparse autoencoder (stlExercise.m)

Train the sparse autoencoder on the unlabeled data (the digits from 5 to 9).

Put the sparseAutoencoderCost.m written in the earlier exercise on the working path.

%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on
%  unlabeledTrainingImages

%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;    % Maximum number of iterations of L-BFGS to run
options.display = 'on';

[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);

%% -----------------------------------------------------
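After optimization, it is worth visualizing the learned hidden-layer features; a minimal sketch, assuming the display_network.m helper from the sparse autoencoder exercise is on the path:

% Visualize the learned first-layer weights (sketch; assumes
% display_network.m from the sparse autoencoder exercise).
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

Each tile should resemble a pen stroke rather than noise; if not, the autoencoder has probably not converged.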


Step 3: Extracting features (feedForwardAutoencoder.m)

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.

% Replicate the bias across all examples, then forward-propagate through
% the first layer and apply the sigmoid nonlinearity.
b1 = repmat(b1, 1, size(data, 2));
Z1 = W1*data + b1;
activation = sigmoid(Z1);

%-------------------------------------------------------------------
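For context, stlExercise.m then calls this function on both labeled subsets to extract the features used below; a sketch following the starter code's call pattern:

% Extract hidden-layer features for the labeled train and test sets
% (sketch; matches the call pattern in the starter stlExercise.m).
trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);
testFeatures  = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       testData);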


Step 4: Training and testing the logistic regression model (stlExercise.m)

Put the softmaxCost.m written in the previous exercise on the working path (softmaxTrain.m calls it).

%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier.

%  Use lambda = 1e-4 for the weight regularization for softmax

% Compute softmaxModel by running softmaxTrain on trainFeatures and
% trainLabels. Note that the input dimension is hiddenSize, not inputSize,
% because the classifier is trained on autoencoder features, not raw pixels.
lambda = 1e-4;
numClasses = numLabels;
options.maxIter = 100;
softmaxModel = softmaxTrain(hiddenSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);

%% -----------------------------------------------------
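Before training, a quick dimension check can catch transposed feature matrices; an optional minimal sketch:

% Optional sanity check (sketch): features should be hiddenSize x numExamples,
% with exactly one label per column.
assert(size(trainFeatures, 1) == hiddenSize);
assert(size(trainFeatures, 2) == numel(trainLabels));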


Step 5: Classifying on the test set (stlExercise.m)

%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel
[pred] = softmaxPredict(softmaxModel, testFeatures);   % predict the class of each test example

%% -----------------------------------------------------
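Finally, the predictions are compared against the true labels to report test accuracy; a minimal sketch, assuming pred and testLabels both use the shifted 1-5 label range:

% Report classification accuracy on the test set (sketch; assumes pred and
% testLabels are both in the shifted range 1-5).
acc = mean(testLabels(:) == pred(:));
fprintf('Test Accuracy: %0.3f%%\n', acc * 100);

For reference, the exercise page suggests this setup should reach an accuracy of roughly 98% on the test set.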