Andrew Ng's deeplearning.ai Deep Learning course assignments: Class 4 Week 2 Residual Networks
2017-12-10 22:45
Andrew Ng's deeplearning.ai course assignment, with my own answers.
Additional notes:
1. People keep asking in the comments why copying these notebooks directly doesn't run. Please don't just copy and paste; it cannot work that way. What is posted here is only the part of the notebook that we have to write ourselves; to run it correctly you also need the other .py files, so please download the complete assignment from GitHub. The code here is for reference only. I recommend writing it bit by bit following the hints and only looking at the answers if you really get stuck. In my opinion that is the right way to learn, and the assignment is not that hard anyway.
2. To those commenting that I plagiarized, that my comments are less detailed than other people's, and that the code doesn't run when copied: before asking for handouts, please first understand what this assignment is. Everyone downloads the original assignment from GitHub and writes code based on the hints before each block (which usually specify the functions and formulas), and there is an expected output afterwards to compare against; if the program is correct, the results are generally identical. Please don't mindlessly accuse me of copying someone else's answers. In the end, all we do is read the text and, following the hints, add a small amount of our own code; the part we write ourselves is only a small fraction of the notebook.
3. Because I really dislike mindless trolls, I have disabled comments below; sorry about that. If you have a question, please message me directly and I will help as much as I can.
Later in this assignment you have to train a ResNet-50 network yourself, which takes a long time. In CPU mode one epoch takes about 100 s, whereas on my GPU server one epoch takes about 5 s. Training without a GPU simply takes too long, so if you don't have one I suggest using a pre-trained model. Below are Baidu Cloud links to several models I trained.
resnet50_20_epochs.h5 Link: https://pan.baidu.com/s/1eROf3BO Password: qed2
resnet50_30_epochs.h5 Link: https://pan.baidu.com/s/1o8kPNUM Password: tqio
resnet50_44_epochs.h5 Link: https://pan.baidu.com/s/1c1N3AzI Password: 2xwu
resnet50_55_epochs.h5 Link: https://pan.baidu.com/s/1bpfMA0v Password: cxcv
Model file provided on Coursera:
ResNet50.h5 Link: https://pan.baidu.com/s/1boCG2Iz Password: sefq
Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by He et al., allow you to train much deeper networks than were previously practically feasible. In this assignment, you will:
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let’s run the cell below to load the required packages.
```python
import numpy as np
import tensorflow as tf   # used by the test cells below (tf.placeholder, tf.Session)
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
Using TensorFlow backend.
1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers. The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn’t always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and “explode” to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
Figure 1 : Vanishing gradient
The speed of learning decreases very rapidly for the early layers as the network trains
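You can see the exponential decay in a toy calculation. The sketch below is my own illustration (not part of the graded notebook): it repeatedly multiplies a gradient vector by the same small random weight matrix, the way backprop chains a linear layer many times, and prints how the norm shrinks.

```python
import numpy as np

np.random.seed(0)
n_layers = 50
W = np.random.randn(64, 64) * 0.05    # a "small" shared weight matrix (spectral norm below 1)
grad = np.random.randn(64)            # gradient arriving at the last layer

for l in range(n_layers):
    grad = W.T @ grad                 # one backprop step through a linear layer
    if (l + 1) % 10 == 0:
        print("after %2d layers, ||grad|| = %.3e" % (l + 1, np.linalg.norm(grad)))
```

With a matrix whose norm is above 1 the same loop shows the gradient exploding instead.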
You are now going to solve this problem by building a Residual Network!
2 - Building a Residual Network
In ResNets, a “shortcut” or a “skip connection” allows the gradient to be directly backpropagated to earlier layers:
Figure 2 : A ResNet block showing a skip-connection
The image on the left shows the “main path” through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function–even more than skip connections helping with vanishing gradients–accounts for ResNets’ remarkable performance.)
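To make the identity-function point concrete, here is a small numpy sketch of my own (not part of the graded notebook): if the weights on the main path of a residual block are driven to zero, the block outputs ReLU(0 + x), which equals x for the non-negative activations that feed it, so the block behaves exactly like the identity.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

np.random.seed(1)
x = relu(np.random.randn(8))      # activations entering the block are non-negative (post-ReLU)
W1 = np.zeros((8, 8))             # main-path weights pushed to zero
W2 = np.zeros((8, 8))

main_path = W2 @ relu(W1 @ x)     # simplified two-layer main path (no bias / BatchNorm)
out = relu(main_path + x)         # the shortcut adds x back before the final ReLU

print(np.allclose(out, x))        # True: the block computes the identity
```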
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet’s identity block, here is an alternative diagram showing the individual steps:
Figure 3 : Identity block. Skip connection “skips over” 2 layers.
The upper path is the “shortcut path.” The lower path is the “main path.” In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don’t worry about this being complicated to implement–you’ll see that BatchNorm is just one line of code in Keras!
In this exercise, you’ll actually implement a slightly more powerful version of this identity block, in which the skip connection “skips over” 3 hidden layers rather than 2 layers. It looks like this:
Figure 4 : Identity block. Skip connection “skips over” 3 layers.
Here’re the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: See reference
- To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use:
Activation('relu')(X)
- To add the value passed forward by the shortcut: See reference
```python
# GRADED FUNCTION: identity_block

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding='same', name=conv_name_base+'2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X
```
```python
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))
```
out = [ 0.94822997 0. 1.16101444 2.747859 0. 1.36677003]
Expected Output:
out | [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003] |
2.2 - The convolutional block
You’ve implemented the ResNet identity block. Next, the ResNet “convolutional block” is the other type of block. You can use this type of block when the input and output dimensions don’t match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
Figure 4 : Convolutional block
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the activation’s height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
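As a quick sanity check of the shape arithmetic (this snippet is only an illustration of mine with made-up dimensions, not part of the assignment), a 1x1 convolution with stride 2 halves the height and width and can change the number of channels at the same time:

```python
from keras.layers import Input, Conv2D

# Hypothetical shortcut: turn a (64, 64, 256) activation into (32, 32, 512)
X_in = Input(shape=(64, 64, 256))
X_short = Conv2D(512, kernel_size=(1, 1), strides=(2, 2), padding='valid')(X_in)
print(X_short.shape)   # (?, 32, 32, 512) -- now it can be added to the main-path output
```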
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '2a'.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '1'.
- The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- Conv Hint
- BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use:
Activation('relu')(X)
- Addition Hint
```python
# GRADED FUNCTION: convolutional_block

def convolutional_block(X, f, filters, stage, block, s = 2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding='same', name=conv_name_base+'2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+'2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding='valid', name=conv_name_base+'2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+'2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(filters=F3, kernel_size=(1,1), strides=(s, s), padding='valid', name=conv_name_base+'1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base+'1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X
```
```python
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    # print(len(out[0]))
    # print(out)
    print("out = " + str(out[0][1][1][0]))
```
out = [ 0.09018461 1.23489773 0.46822017 0.0367176 0. 0.65516603]
Expected Output:
out | [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603] |
3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. “ID BLOCK” in the diagram stands for “Identity block,” and “ID BLOCK x3” means you should stack 3 identity blocks together.
Figure 5 : ResNet-50 model
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is “conv1”.
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], “f” is 3, “s” is 1 and the block is “a”.
- The 2 identity blocks use three sets of filters of size [64,64,256], “f” is 3 and the blocks are “b” and “c”.
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], “f” is 3, “s” is 2 and the block is “a”.
- The 3 identity blocks use three sets of filters of size [128,128,512], “f” is 3 and the blocks are “b”, “c” and “d”.
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], “f” is 3, “s” is 2 and the block is “a”.
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], “f” is 3 and the blocks are “b”, “c”, “d”, “e” and “f”.
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], “f” is 3, “s” is 2 and the block is “a”.
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], “f” is 3 and the blocks are “b” and “c”.
- The 2D Average Pooling uses a window of shape (2,2) and its name is “avg_pool”.
- The flatten doesn’t have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).
Exercise: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You’ll need to use this function:
- Average pooling see reference
Here’re some other functions we used in the code below:
- Conv2D: See reference
- BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: See reference
- Max pooling: See reference
- Fully connected layer: See reference
- Addition: See reference
```python
# GRADED FUNCTION: ResNet50

def ResNet50(input_shape = (64, 64, 3), classes = 6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###
    # helper functions
    # convolutional_block(X, f, filters, stage, block, s = 2)
    # identity_block(X, f, filters, stage, block)

    # Stage 3 (≈4 lines)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='b')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='c')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='d')

    # Stage 4 (≈6 lines)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='b')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='c')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='d')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='e')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='f')

    # Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D((2,2), name='avg_pool')(X)

    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs = X_input, outputs = X, name='ResNet50')

    return model
```
Run the following code to build the model’s graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
The model is now ready to be trained. The only thing you need is a dataset.
Let’s load the SIGNS Dataset.
Figure 6 : SIGNS dataset
```python
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
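load_dataset and convert_to_one_hot come from resnets_utils.py, which is not reproduced in this post. If you are curious, here is a minimal sketch of what the one-hot conversion presumably does (my own guess at the helper, so treat the exact signature as an assumption):

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Assumed behaviour: Y holds integer labels 0..C-1 with shape (1, m); the result has
    # shape (C, m), which is why the notebook transposes it with .T afterwards.
    return np.eye(C)[Y.reshape(-1)].T

Y_example = np.array([[0, 2, 5, 1]])
print(convert_to_one_hot(Y_example, 6).T)   # one row per example, a single 1 per row
```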
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
Epoch 1/2
1080/1080 [==============================] - 107s 99ms/step - loss: 3.0556 - acc: 0.2481
Epoch 2/2
1080/1080 [==============================] - 103s 95ms/step - loss: 2.4399 - acc: 0.3278
<keras.callbacks.History at 0x7ff1c149af60>
Expected Output:
Epoch 1/2 | loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours. |
Epoch 2/2 | loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing. |
```python
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
120/120 [==============================] - 5s 38ms/step
Loss = 2.24901695251
Test Accuracy = 0.166666666667
Expected Output:
Test Accuracy | between 0.16 and 0.25 |
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we’ve trained our own ResNet50 model’s weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
Note: here I load the model I trained myself on my GPU server, so I changed the file name accordingly.
```python
# model = load_model('ResNet50.h5')
model = load_model('resnet50_44_epochs.h5')
```
```python
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
120/120 [==============================] - 9s 78ms/step
Loss = 0.0914498666922
Test Accuracy = 0.958333337307
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you’ve learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You’ve now implemented a state-of-the-art image classification system!
4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on “File” in the upper bar of this notebook, then click “Open” to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
3. Write your image’s name in the following code
4. Run the code and check if the algorithm is right!
```python
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)

my_image = scipy.misc.imread(img_path)
imshow(my_image)

print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
Input image shape: (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
[[ 1. 0. 0. 0. 0. 0.]]
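Since model.predict returns the six softmax probabilities, you can, if you like, reduce them to a single predicted digit with argmax (a small addition of mine, not in the original notebook):

```python
pred = model.predict(x)                               # shape (1, 6): softmax probabilities
print("predicted sign:", np.argmax(pred, axis=1)[0])  # index of the most probable class (0-5)
```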
You can also print a summary of your model by running the following code.
model.summary()
(Prints the ResNet network architecture; omitted.)
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to “File -> Open…-> model.png”.
```python
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
(Draws a diagram of the ResNet; omitted.)
What you should remember:
- Very deep “plain” networks don’t work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet:
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - Deep Residual Learning for Image Recognition (2015)
Francois Chollet’s github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
Code from the training runs
epoch = 20
```python
model1 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model1.fit(X_train, Y_train, epochs = 20, batch_size = 32)
model1.save('resnet50_20_epochs.h5')

preds = model1.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
Epoch 1/20 1080/1080 [==============================] - 15s 14ms/step - loss: 2.5141 - acc: 0.4241 Epoch 2/20 1080/1080 [==============================] - 5s 5ms/step - loss: 1.7727 - acc: 0.6194 Epoch 3/20 1080/1080 [==============================] - 6s 5ms/step - loss: 1.4935 - acc: 0.6769 Epoch 4/20 1080/1080 [==============================] - 5s 5ms/step - loss: 1.5494 - acc: 0.5833 Epoch 5/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.6902 - acc: 0.7889 Epoch 6/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4155 - acc: 0.8593 Epoch 7/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2782 - acc: 0.9139 Epoch 8/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1665 - acc: 0.9500 Epoch 9/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2578 - acc: 0.9185 Epoch 10/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1690 - acc: 0.9435 Epoch 11/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0913 - acc: 0.9694 Epoch 12/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1389 - acc: 0.9602 Epoch 13/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1490 - acc: 0.9444 Epoch 14/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1044 - acc: 0.9694 Epoch 15/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0435 - acc: 0.9861 Epoch 16/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0324 - acc: 0.9926 Epoch 17/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0190 - acc: 0.9926 Epoch 18/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0577 - acc: 0.9824 Epoch 19/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0268 - acc: 0.9907 Epoch 20/20 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0662 - acc: 0.9787 120/120 [==============================] - 2s 17ms/step Loss = 0.825686124961 Test Accuracy = 0.833333333333
epoch = 50
```python
model2 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model2.fit(X_train, Y_train, epochs = 50, batch_size = 32)
model2.save('resnet50_50_epochs.h5')

preds = model2.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
Epoch 1/50 1080/1080 [==============================] - 18s 17ms/step - loss: 2.1776 - acc: 0.4556 Epoch 2/50 1080/1080 [==============================] - 5s 5ms/step - loss: 1.8498 - acc: 0.5370 Epoch 3/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.9010 - acc: 0.6852 Epoch 4/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4735 - acc: 0.8407 Epoch 5/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2245 - acc: 0.9222 Epoch 6/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1450 - acc: 0.9611 Epoch 7/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.7002 - acc: 0.7759 Epoch 8/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2651 - acc: 0.9102 Epoch 9/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1757 - acc: 0.9481 Epoch 10/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1131 - acc: 0.9602 Epoch 11/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0816 - acc: 0.9759 Epoch 12/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0332 - acc: 0.9907 Epoch 13/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0397 - acc: 0.9861 Epoch 14/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0305 - acc: 0.9907 Epoch 15/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0318 - acc: 0.9889 Epoch 16/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0125 - acc: 0.9972 Epoch 17/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0279 - acc: 0.9907 Epoch 18/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0888 - acc: 0.9657 Epoch 19/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0460 - acc: 0.9843 Epoch 20/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0512 - acc: 0.9787 Epoch 21/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0423 - acc: 0.9843 Epoch 22/50 1080/1080 [==============================] - 6s 5ms/step - loss: 0.0473 - acc: 0.9870 Epoch 23/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1245 - acc: 0.9750 Epoch 24/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0739 - acc: 0.9741 Epoch 25/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0663 - acc: 0.9815 Epoch 26/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0175 - acc: 0.9926 Epoch 27/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0103 - acc: 0.9981 Epoch 28/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0496 - acc: 0.9963 Epoch 29/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0023 - acc: 0.9991 Epoch 30/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0085 - acc: 0.9972 Epoch 31/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0329 - acc: 0.9981 Epoch 32/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0039 - acc: 0.9981 Epoch 33/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0071 - acc: 0.9981 Epoch 34/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0237 - acc: 0.9898 Epoch 35/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0667 - acc: 0.9769 Epoch 36/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1863 - acc: 0.9500 Epoch 37/50 1080/1080 
[==============================] - 5s 5ms/step - loss: 0.0612 - acc: 0.9787 Epoch 38/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0362 - acc: 0.9880 Epoch 39/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0251 - acc: 0.9935 Epoch 40/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0210 - acc: 0.9898 Epoch 41/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0090 - acc: 0.9981 Epoch 42/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0534 - acc: 0.9870 Epoch 43/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0138 - acc: 0.9954 Epoch 44/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0055 - acc: 0.9991 Epoch 45/50 1080/1080 [==============================] - 5s 5ms/step - loss: 4.9548e-04 - acc: 1.0000 Epoch 46/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0015 - acc: 1.0000 Epoch 47/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0086 - acc: 0.9972 Epoch 48/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0213 - acc: 0.9944 Epoch 49/50 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0286 - acc: 0.9898 Epoch 50/50 1080/1080 [==============================] - 6s 5ms/step - loss: 0.0228 - acc: 0.9880 120/120 [==============================] - 4s 31ms/step Loss = 1.81704656283 Test Accuracy = 0.69166667064
epoch = 30
```python
model3 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(X_train, Y_train, epochs = 30, batch_size = 32)
model3.save('resnet50_30_epochs.h5')

preds = model3.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
Epoch 1/30 1080/1080 [==============================] - 13s 12ms/step - loss: 1.9815 - acc: 0.4713 Epoch 2/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.5419 - acc: 0.8120 Epoch 3/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4135 - acc: 0.8713 Epoch 4/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4284 - acc: 0.8713 Epoch 5/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4162 - acc: 0.8722 Epoch 6/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1329 - acc: 0.9546 Epoch 7/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1342 - acc: 0.9602 Epoch 8/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1336 - acc: 0.9630 Epoch 9/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1799 - acc: 0.9611 Epoch 10/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4936 - acc: 0.8731 Epoch 11/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.5063 - acc: 0.8398 Epoch 12/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1922 - acc: 0.9426 Epoch 13/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2066 - acc: 0.9435 Epoch 14/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2782 - acc: 0.9139 Epoch 15/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2053 - acc: 0.9324 Epoch 16/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1632 - acc: 0.9602 Epoch 17/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0824 - acc: 0.9787 Epoch 18/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0485 - acc: 0.9880 Epoch 19/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0145 - acc: 0.9981 Epoch 20/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0145 - acc: 0.9944 Epoch 21/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0119 - acc: 0.9963 Epoch 22/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0280 - acc: 0.9954 Epoch 23/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0187 - acc: 0.9917 Epoch 24/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0483 - acc: 0.9824 Epoch 25/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0939 - acc: 0.9685 Epoch 26/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0390 - acc: 0.9907 Epoch 27/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0197 - acc: 0.9917 Epoch 28/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0270 - acc: 0.9972 Epoch 29/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0021 - acc: 1.0000 Epoch 30/30 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0012 - acc: 1.0000 120/120 [==============================] - 1s 11ms/step Loss = 0.0633144895857 Test Accuracy = 0.983333333333
epoch = 44
```python
model4 = ResNet50(input_shape = (64, 64, 3), classes = 6)
model4.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model4.fit(X_train, Y_train, epochs = 44, batch_size = 32)
model4.save('resnet50_44_epochs.h5')

# evaluate the 44-epoch model
preds = model4.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
Epoch 1/44 1080/1080 [==============================] - 30s 28ms/step - loss: 2.1430 - acc: 0.3972 Epoch 2/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.9240 - acc: 0.6972 Epoch 3/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.3779 - acc: 0.8583 Epoch 4/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2152 - acc: 0.9352 Epoch 5/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1680 - acc: 0.9463 Epoch 6/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.3060 - acc: 0.9065 Epoch 7/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.3636 - acc: 0.8713 Epoch 8/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.4806 - acc: 0.8704 Epoch 9/44 1080/1080 [==============================] - 6s 5ms/step - loss: 0.5736 - acc: 0.8222 Epoch 10/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.9006 - acc: 0.8065 Epoch 11/44 1080/1080 [==============================] - 5s 5ms/step - loss: 1.2664 - acc: 0.7065 Epoch 12/44 1080/1080 [==============================] - 5s 5ms/step - loss: 1.0772 - acc: 0.7426 Epoch 13/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.6108 - acc: 0.8093 Epoch 14/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.5305 - acc: 0.8537 Epoch 15/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.3256 - acc: 0.9102 Epoch 16/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1295 - acc: 0.9491 Epoch 17/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0994 - acc: 0.9676 Epoch 18/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1509 - acc: 0.9639 Epoch 19/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2855 - acc: 0.8889 Epoch 20/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1235 - acc: 0.9620 Epoch 21/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0788 - acc: 0.9759 Epoch 22/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0639 - acc: 0.9769 Epoch 23/44 1080/1080 [==============================] - 6s 5ms/step - loss: 0.0678 - acc: 0.9824 Epoch 24/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0255 - acc: 0.9917 Epoch 25/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0320 - acc: 0.9917 Epoch 26/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0136 - acc: 0.9954 Epoch 27/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0370 - acc: 0.9963 Epoch 28/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.6671 - acc: 0.8111 Epoch 29/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.5925 - acc: 0.8611 Epoch 30/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.9083 - acc: 0.8028 Epoch 31/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.7969 - acc: 0.7306 Epoch 32/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.3669 - acc: 0.8676 Epoch 33/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.2057 - acc: 0.9352 Epoch 34/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1457 - acc: 0.9528 Epoch 35/44 1080/1080 [==============================] - 6s 5ms/step - loss: 0.1085 - acc: 0.9657 Epoch 36/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0829 - acc: 0.9694 Epoch 37/44 1080/1080 
[==============================] - 5s 5ms/step - loss: 0.1326 - acc: 0.9593 Epoch 38/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0626 - acc: 0.9750 Epoch 39/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0381 - acc: 0.9880 Epoch 40/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0153 - acc: 0.9963 Epoch 41/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1065 - acc: 0.9657 Epoch 42/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.1160 - acc: 0.9713 Epoch 43/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0581 - acc: 0.9880 Epoch 44/44 1080/1080 [==============================] - 5s 5ms/step - loss: 0.0293 - acc: 0.9926 120/120 [==============================] - 0s 1ms/step Loss = 0.0633144895857 Test Accuracy = 0.983333333333