UFLDL Tutorial Exercise Answers (5): Self-Taught Learning
2016-11-21 13:14
Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial
Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Self-Taught_Learning
The labeled data (labeledData) is split into two parts: one part serves as trainData, the other as testData;
the unlabeled data (unlabeledData) is used as the input for training the autoencoder;
the labeled trainData is then fed through the trained autoencoder to obtain the second-layer activations, trainFeatures;
the labeled testData is fed through the trained autoencoder to obtain the second-layer activations, testFeatures;
a softmax classifier is trained on trainFeatures;
finally, testFeatures is fed to the trained softmax classifier to predict its class labels.
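The feature-extraction part of this pipeline (steps 3 and 4 above) can be sketched in NumPy. This is a minimal illustration with random stand-in data: the names W1, b1, trainData, and testData follow the exercise, but the shapes and values here are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Stand-in shapes: 64-dim inputs, 25 hidden units, 10 train / 5 test examples.
inputSize, hiddenSize = 64, 25
W1 = rng.standard_normal((hiddenSize, inputSize)) * 0.01  # learned by the autoencoder
b1 = np.zeros((hiddenSize, 1))
trainData = rng.random((inputSize, 10))   # columns are examples
testData = rng.random((inputSize, 5))

# Push the labeled data through the trained encoder to get hidden activations.
trainFeatures = sigmoid(W1 @ trainData + b1)
testFeatures = sigmoid(W1 @ testData + b1)

# trainFeatures now replaces the raw pixels as the softmax input.
print(trainFeatures.shape, testFeatures.shape)  # (25, 10) (25, 5)
```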
Code
Step 1: Generate the input and test data sets —— code already provided
Digits 0-4 serve as the labeledData; digits 5-9 serve as the unlabeledData. The labeled data is then split into trainData and testData as described above.
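The split can be sketched as follows. This is a NumPy sketch: `mnistLabels` is a random stand-in for the real MNIST labels, and the half-and-half train/test split of the labeled set is an illustrative assumption, not the exercise's exact loading code.

```python
import numpy as np

rng = np.random.default_rng(0)
mnistLabels = rng.integers(0, 10, size=1000)     # stand-in for the real labels

labeledIdx = np.flatnonzero(mnistLabels <= 4)    # digits 0-4: labeled set
unlabeledIdx = np.flatnonzero(mnistLabels >= 5)  # digits 5-9: unlabeled set

# Split the labeled set into a training part and a test part (here: halves).
half = labeledIdx.size // 2
trainIdx = labeledIdx[:half]
testIdx = labeledIdx[half:]
print(len(trainIdx), len(testIdx), len(unlabeledIdx))
```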
Step 2: Train the sparse autoencoder ——stlExercise.m
Train the sparse autoencoder on the unlabeled data (the digits from 5 to 9). Place the previously written sparseAutoencoderCost.m on the working path.
%% ----------------- YOUR CODE HERE ----------------------
%  Find opttheta by running the sparse autoencoder on
%  unlabeledTrainingImages

opttheta = theta;

%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs';  % Here, we use L-BFGS to optimize our cost
                           % function. Generally, for minFunc to work, you
                           % need a function pointer with two outputs: the
                           % function value and the gradient. In our problem,
                           % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;     % Maximum number of iterations of L-BFGS to run
options.display = 'on';

[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);
%% -----------------------------------------------------
Step 3: Extracting features ——feedForwardAutoencoder.m
%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the
%  Sparse Autoencoder.

b1 = repmat(b1, 1, size(data, 2));
Z1 = W1*data + b1;
activation = sigmoid(Z1);
%-------------------------------------------------------------------
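The same computation in NumPy terms, as a sketch: broadcasting makes the explicit `repmat` of `b1` unnecessary, and the tiny hand-picked inputs below exist only to check the formula.

```python
import numpy as np

def feed_forward_autoencoder(W1, b1, data):
    """Hidden-layer activation a = sigmoid(W1*data + b1); data columns are examples."""
    z1 = W1 @ data + b1.reshape(-1, 1)  # broadcasting replaces MATLAB's repmat
    return 1.0 / (1.0 + np.exp(-z1))

# Tiny check: zero weights and bias give sigmoid(0) = 0.5 everywhere.
W1 = np.zeros((1, 2))
b1 = np.zeros(1)
data = np.ones((2, 3))
activation = feed_forward_autoencoder(W1, b1, data)
print(activation)  # [[0.5 0.5 0.5]]
```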
Step 4: Training and testing the logistic regression model ——stlExercise.m
Place the previously written softmaxCost.m on the working path.
%% ----------------- YOUR CODE HERE ----------------------
%  Use softmaxTrain.m from the previous exercise to train a multi-class
%  classifier.
%  Use lambda = 1e-4 for the weight regularization for softmax.
%  You need to compute softmaxModel using softmaxTrain on trainFeatures
%  and trainLabels.

lambda = 1e-4;
numClasses = numLabels;
options.maxIter = 100;
softmaxModel = softmaxTrain(hiddenSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
%% -----------------------------------------------------
Step 5: Classifying on the test set ——stlExercise.m
%% ----------------- YOUR CODE HERE ----------------------
%  Compute Predictions on the test set (testFeatures) using softmaxPredict
%  and softmaxModel

[pred] = softmaxPredict(softmaxModel, testFeatures);  % predict the test labels
%% -----------------------------------------------------
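The exercise then scores `pred` against the true test labels; with both arrays in hand, accuracy is one line. A NumPy sketch with made-up labels (4 of the 5 predictions below match):

```python
import numpy as np

# Hypothetical predictions and ground-truth labels for five test examples.
pred = np.array([1, 2, 3, 3, 0])
testLabels = np.array([1, 2, 3, 4, 0])

accuracy = np.mean(pred == testLabels) * 100
print(f"Accuracy: {accuracy:.2f}%")  # Accuracy: 80.00%
```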