TF/03_Linear_Regression/03_TensorFlow_Way_of_Linear_Regression
2017-05-26 09:37
03 Learning the TensorFlow Way of Regression
In this section we will implement linear regression as an iterative computational graph in TensorFlow. To make this more pertinent, instead of using generated data, we will use the Iris data set. Our x will be the Petal Width and our y will be the Sepal Length. Viewing the data in these two dimensions suggests a linear relationship.

Model

The output of our model is a 2D linear regression:

y = A * x + b
The x matrix input will be a 2D matrix, where its dimensions will be (batch size x 1). The y target output will have the same dimensions, (batch size x 1).
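A quick NumPy sketch of the shapes involved (the slope and intercept values here are hypothetical, not from the script): a (batch size x 1) input times a (1 x 1) slope, plus a broadcast intercept, yields a (batch size x 1) output.

```python
import numpy as np

batch_size = 25
x_batch = np.random.rand(batch_size, 1)   # (batch_size, 1) input column
A = np.array([[1.5]])                     # (1, 1) slope
b = np.array([[4.0]])                     # (1, 1) intercept, broadcast over rows
y_pred = x_batch @ A + b                  # (batch_size, 1) output
```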
The loss function we will use is the mean of the batch L2 loss:
loss = mean( (y_target - model_output)^2 )
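For example, with three hypothetical target/prediction pairs, the loss works out as follows:

```python
import numpy as np

y_target = np.array([[5.0], [4.5], [6.1]])
model_output = np.array([[4.8], [4.9], [5.9]])
# mean of squared residuals: (0.2^2 + 0.4^2 + 0.2^2) / 3
loss = np.mean((y_target - model_output) ** 2)   # ≈ 0.08
```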
We will then iterate through random batch size selections of the data.
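The batching works by drawing random indices and reshaping the 1-D slices into column vectors (the same np.transpose([...]) trick the script below uses); a minimal sketch with stand-in data:

```python
import numpy as np

x_vals = np.arange(10.0)                      # stand-in for the iris column
batch_size = 4
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = np.transpose([x_vals[rand_index]])   # (batch_size, 1), matches the placeholder shape
```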
Graph of Loss Function
Graph of Linear Fit
03_lin_reg_tensorflow_way.py
# Linear Regression: TensorFlow Way
#-----------------------------------
#
# This function shows how to use TensorFlow to
# solve linear regression.
# y = Ax + b
#
# We will use the iris data, specifically:
#  y = Sepal Length
#  x = Petal Width

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create graph
# Modified: allow soft device placement (see the error and fix below)
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
#sess = tf.Session()

# Load the data
# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([x[3] for x in iris.data])
y_vals = np.array([y[0] for y in iris.data])

# Declare batch size
batch_size = 25

# Initialize placeholders
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

# Create variables for linear regression
A = tf.Variable(tf.random_normal(shape=[1, 1]))
b = tf.Variable(tf.random_normal(shape=[1, 1]))

# Declare model operations
model_output = tf.add(tf.matmul(x_data, A), b)

# Declare loss function (L2 loss)
loss = tf.reduce_mean(tf.square(y_target - model_output))

# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Training loop
loss_vec = []
for i in range(100):
    rand_index = np.random.choice(len(x_vals), size=batch_size)
    rand_x = np.transpose([x_vals[rand_index]])
    rand_y = np.transpose([y_vals[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    loss_vec.append(temp_loss)
    if (i + 1) % 25 == 0:
        print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)) +
              ' b = ' + str(sess.run(b)))
        print('Loss = ' + str(temp_loss))

# Get the optimal coefficients
[slope] = sess.run(A)
[y_intercept] = sess.run(b)

# Get best fit line
best_fit = []
for i in x_vals:
    best_fit.append(slope * i + y_intercept)

# Plot the result
plt.plot(x_vals, y_vals, 'o', label='Data Points')
plt.plot(x_vals, best_fit, 'r-', label='Best fit line', linewidth=3)
plt.legend(loc='upper left')
plt.title('Sepal Length vs Petal Width')
plt.xlabel('Petal Width')
plt.ylabel('Sepal Length')
plt.show()

# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('L2 Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('L2 Loss')
plt.show()
Error
InternalError: Blas SGEMM launch failed : a.shape=(25, 1), b.shape=(1, 1), m=25, n=1, k=1 [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_Placeholder_0/_7, Variable/read)]]
Fix
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
Step #25 A = [[ 2.60703278]] b = [[ 2.23166347]]
Loss = 2.57998
Step #50 A = [[ 1.82236218]] b = [[ 3.23257184]]
Loss = 0.928969
Step #75 A = [[ 1.50014949]] b = [[ 3.94962335]]
Loss = 0.445195
Step #100 A = [[ 1.26219547]] b = [[ 4.26761293]]
Loss = 0.405525
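As a sanity check (not part of the original script), the iterative estimates can be compared against the closed-form least-squares solution. A minimal NumPy sketch with synthetic data; run on the iris x_vals/y_vals above, the closed-form fit should land near the converged A ≈ 1.26, b ≈ 4.27.

```python
import numpy as np

x = np.array([0.2, 0.4, 1.3, 1.5, 2.1])
y = 1.3 * x + 4.2 + np.array([0.01, -0.01, 0.01, -0.01, 0.0])  # near-linear data
X = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # solves min ||X @ coef - y||^2
slope, intercept = coef
```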