Recognizing my own handwritten digits with a trained MNIST model
2017-10-18 21:09
Using the MNIST model from my last post (the one with a 1024-unit fully connected layer), trained for 20,000 iterations, I got 99.3% accuracy on the test set. This time I want to write a Python program that uses the trained model directly for image recognition. The first step is to build a test matrix by hand. Since this is my first attempt at something like this, I ran into quite a few problems. Here is the code; after a few tests it works fine.
```python
# -*- coding:gbk -*-
import tensorflow as tf
import numpy

# Placeholder for the input image
x = tf.placeholder("float", [1, 784])

# Weight-generating function
def weight_variable(shape):
    # tf.truncated_normal(shape, mean, stddev): shape is the output tensor shape,
    # mean is the mean and stddev the standard deviation of the truncated normal
    # distribution the values are drawn from.
    # Weights get a little noise at initialization to break symmetry and avoid zero gradients.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial, dtype=tf.float32)

# Bias-generating function
# Since ReLU neurons are used, it is good practice to initialize the biases with a
# small positive value to avoid dead neurons (outputs stuck at 0).
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial, dtype=tf.float32)

# Convolution: stride 1, zero padding; pooling uses a 2x2 window
def conv2d(x, W):
    # x: input tensor with shape [batch, in_height, in_width, in_channels]
    # W: filter with shape [filter_height, filter_width, in_channels, out_channels]
    # strides: per-dimension stride, a 1-D vector of length 4
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling, set up the same way as the convolution
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# The first convolution computes 32 features per 5x5 patch.
# Its weight tensor has shape [5, 5, 1, 32]: the first two dimensions are the patch size,
# then the number of input channels, then the number of output channels.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

# shape: [batch, in_height, in_width, in_channels]
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolution + bias, then ReLU, then max pooling
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28*28*32
h_pool1 = max_pool_2x2(h_conv1)                           # output size 14*14*32

# Second convolution + pooling
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14*14*64
h_pool2 = max_pool_2x2(h_conv2)                           # output size 7*7*64

# Fully connected layer 1
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Fully connected layer 2 (output layer)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)
prediction = tf.argmax(y_conv, 1)

# Initialize the session and restore the trained model
sess = tf.InteractiveSession()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
saver.restore(sess, "my_net/save_net.ckpt")

# Hand-built 28x28 test image, one image row per line
imgmatrix = numpy.array([
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,1.,1.,1.,1.,1.,1.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
    0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,1.,1.,1.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,0.,
]).reshape(1, 784)

result = sess.run(prediction, feed_dict={x: imgmatrix})
print("The result is", result)
```
The output is 7. Not bad.
Next I'll write an image-to-matrix conversion for automatic testing; to be updated.
Problems encountered:
1. Data generated with tf.zeros cannot simply be fed in as the input; wherever I put it, there were all kinds of problems.
2. After building the numpy array of 784 values, you still need .reshape(1, 784).
3. All training-stage code was removed to keep the script lean.
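To illustrate point 2, here is a minimal sketch (the zero-filled array is just a placeholder) of why the flat pixel array needs the extra batch dimension before it matches the [1, 784] placeholder:

```python
import numpy

# A flat array of 784 pixel values has shape (784,)
flat = numpy.zeros(784, dtype=numpy.float32)
print(flat.shape)            # (784,)

# The placeholder expects shape [1, 784], i.e. a batch of one image,
# so an explicit reshape is needed before feed_dict will accept it.
batched = flat.reshape(1, 784)
print(batched.shape)         # (1, 784)
```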
Update: automatic recognition of images in a folder. Please draw the 28*28-pixel grayscale TIF images yourself in Photoshop.
Code
```python
# -*- coding:gbk -*-
import tensorflow as tf
import numpy
from PIL import Image

# Placeholder for the input image
x = tf.placeholder("float", [1, 784])

# Weight-generating function
def weight_variable(shape):
    # Truncated normal initialization; a little noise breaks symmetry and avoids zero gradients.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial, dtype=tf.float32)

# Bias-generating function
# ReLU neurons get a small positive bias to avoid dead neurons (outputs stuck at 0).
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial, dtype=tf.float32)

# Convolution: stride 1, zero padding; pooling uses a 2x2 window
def conv2d(x, W):
    # x: [batch, in_height, in_width, in_channels]
    # W: [filter_height, filter_width, in_channels, out_channels]
    # strides: per-dimension stride, a 1-D vector of length 4
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling, set up the same way as the convolution
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# First convolution: 32 features per 5x5 patch, weight shape [5, 5, 1, 32]
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

# shape: [batch, in_height, in_width, in_channels]
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolution + bias, then ReLU, then max pooling
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28*28*32
h_pool1 = max_pool_2x2(h_conv1)                           # output size 14*14*32

# Second convolution + pooling
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14*14*64
h_pool2 = max_pool_2x2(h_conv2)                           # output size 7*7*64

# Fully connected layer 1
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Fully connected layer 2 (output layer)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)
prediction = tf.argmax(y_conv, 1)

# Initialize the session and restore the trained model
sess = tf.InteractiveSession()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
saver.restore(sess, "my_net/save_net.ckpt")

# Load one TIF image and flatten it into the (1, 784) input vector
img = Image.open('.//image//1.tif')
imgmatrix = numpy.array(img).reshape(1, 784)
result = sess.run(prediction, feed_dict={x: imgmatrix})
print("The result is", result)
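The script above assumes the TIF is already a 28×28 grayscale image drawn in Photoshop. If you start from an arbitrary image instead, a hedged sketch of the preprocessing (convert to grayscale, resize to 28×28, scale to [0, 1], and invert so the digit is white on black like MNIST) might look like the following; the helper name and the file digit.png are my own examples, not part of the original post:

```python
import numpy
from PIL import Image

def image_to_mnist_vector(path):
    """Load an arbitrary image and convert it to a (1, 784) float vector
    roughly matching the MNIST convention (white digit on black background,
    pixel values in [0, 1])."""
    img = Image.open(path).convert('L')        # force 8-bit grayscale
    img = img.resize((28, 28))                 # scale to 28x28 pixels
    arr = numpy.array(img, dtype=numpy.float32) / 255.0
    # Photoshop drawings are usually black on white; MNIST is white on black,
    # so invert if the background looks bright.
    if arr.mean() > 0.5:
        arr = 1.0 - arr
    return arr.reshape(1, 784)

# Usage sketch (hypothetical file name):
# imgmatrix = image_to_mnist_vector('./image/digit.png')
# result = sess.run(prediction, feed_dict={x: imgmatrix})
```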
Apart from failing to recognize 9, there were no other problems.
Another update!!! Batch recognition added, and the 9-recognition problem solved.
The previous recognizer couldn't recognize 9, so I searched online and learned how the MNIST training set is built: the digit images in the MNIST dataset are size-normalized and centered. So if the digit you draw sits too far to one side, recognition breaks, unless you normalize and center your own digits the same way. I don't yet know exactly how they do that preprocessing, so I'll leave it for now and update once I've worked it out. Here is the batch-processing program.
I trained three models: one from 2,000 iterations, one from 10,000 iterations, and one from 30,000 iterations. Testing confirms each is more accurate than the last, and the final one recognizes my handwritten digits almost without error.
```python
# -*- coding:gbk -*-
import tensorflow as tf
import numpy
from PIL import Image

# Placeholder for the input image
x = tf.placeholder("float", [1, 784])

# Weight-generating function
def weight_variable(shape):
    # Truncated normal initialization; a little noise breaks symmetry and avoids zero gradients.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial, dtype=tf.float32)

# Bias-generating function
# ReLU neurons get a small positive bias to avoid dead neurons (outputs stuck at 0).
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial, dtype=tf.float32)

# Convolution: stride 1, zero padding; pooling uses a 2x2 window
def conv2d(x, W):
    # x: [batch, in_height, in_width, in_channels]
    # W: [filter_height, filter_width, in_channels, out_channels]
    # strides: per-dimension stride, a 1-D vector of length 4
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling, set up the same way as the convolution
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# First convolution: 32 features per 5x5 patch, weight shape [5, 5, 1, 32]
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

# shape: [batch, in_height, in_width, in_channels]
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolution + bias, then ReLU, then max pooling
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28*28*32
h_pool1 = max_pool_2x2(h_conv1)                           # output size 14*14*32

# Second convolution + pooling
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14*14*64
h_pool2 = max_pool_2x2(h_conv2)                           # output size 7*7*64

# Fully connected layer 1
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Fully connected layer 2 (output layer)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)
prediction = tf.argmax(y_conv, 1)

# Initialize the session and restore the 30,000-iteration model
sess = tf.InteractiveSession()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
saver.restore(sess, "30000/save_net.ckpt")

# Load the ten test images (0.tif to 9.tif), then recognize them in a batch loop
imgmatrix = []
for i in range(10):
    img = Image.open('.//image//' + str(i) + '.tif')
    imgmatrix.append(numpy.array(img).reshape(1, 784))
for i in range(10):
    result = sess.run(prediction, feed_dict={x: imgmatrix[i]})
    print(i, "The result is", result)
```
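For reference, MNIST's documented preprocessing scales each digit to fit a 20×20 box while preserving its aspect ratio and then centers it by its center of mass inside the 28×28 field. Below is a rough sketch of a similar normalization step, my own approximation rather than the original NIST pipeline, that could be applied to an image before feeding it to the network:

```python
import numpy
from PIL import Image

def center_like_mnist(arr):
    """arr: 28x28 float array, white digit on black background, values in [0, 1].
    Returns a 28x28 array with the digit scaled into a 20x20 box and shifted so
    its center of mass sits near the image center."""
    # Crop to the bounding box of the bright pixels.
    rows = numpy.any(arr > 0.1, axis=1)
    cols = numpy.any(arr > 0.1, axis=0)
    if not rows.any():
        return numpy.zeros((28, 28), dtype=numpy.float32)
    top, bottom = numpy.where(rows)[0][[0, -1]]
    left, right = numpy.where(cols)[0][[0, -1]]
    digit = arr[top:bottom + 1, left:right + 1]

    # Scale the longest side to 20 pixels, preserving aspect ratio.
    h, w = digit.shape
    scale = 20.0 / max(h, w)
    new_h = max(1, int(round(h * scale)))
    new_w = max(1, int(round(w * scale)))
    resized = Image.fromarray((digit * 255).astype(numpy.uint8)).resize((new_w, new_h))
    digit = numpy.array(resized, dtype=numpy.float32) / 255.0

    # Paste into a 28x28 canvas, then shift so the center of mass is at (14, 14).
    canvas = numpy.zeros((28, 28), dtype=numpy.float32)
    y0, x0 = (28 - new_h) // 2, (28 - new_w) // 2
    canvas[y0:y0 + new_h, x0:x0 + new_w] = digit
    ys, xs = numpy.indices(canvas.shape)
    total = canvas.sum()
    cy = (canvas * ys).sum() / total
    cx = (canvas * xs).sum() / total
    canvas = numpy.roll(canvas, int(round(14 - cy)), axis=0)
    canvas = numpy.roll(canvas, int(round(14 - cx)), axis=1)
    return canvas

# Usage sketch: imgmatrix = center_like_mnist(raw_28x28_array).reshape(1, 784)
```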