TF之LiR: Implementing the machine learning linear regression algorithm with TensorFlow



Output

The script prints a training log (reproduced after the code below) and renders two plots: the original data with the fitted line, and the testing data against the same line.

Code design

# -*- coding: utf-8 -*-

# TF之LiR: linear regression with TensorFlow (TF 1.x API)
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

rng = numpy.random

# Hyperparameters
learning_rate = 0.01
training_epochs = 10000
display_step = 50        # print a log line every 50 epochs
# Training data (values elided in the original post)
train_X = numpy.asarray([……])
train_Y = numpy.asarray([……])
n_samples = train_X.shape[0]
print("train_X:", train_X)
print("train_Y:", train_Y)

# Placeholders: the training data is fed in at sess.run time via feed_dict
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Model weight and bias; defined as tf.Variable because they are updated during training
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Linear regression model LiR: pred = W*x + b
pred = tf.add(tf.multiply(X, W), b)
# Cost: half mean squared error over the n_samples training points
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent; minimize() updates W and b automatically
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
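# Why the 1/2 factor: with J(W,b) = (1/(2n)) * sum_i (W*x_i + b - y_i)^2,
# the gradients carry no stray constants:
#   dJ/dW = (1/n) * sum_i (pred_i - y_i) * x_i
#   dJ/db = (1/n) * sum_i (pred_i - y_i)
# minimize() wires these gradients and the update
# W <- W - learning_rate * dJ/dW (and likewise for b) into a single op.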

init = tf.global_variables_initializer()  # op that initializes all variables once the session runs
# Start training
with tf.Session() as sess:
    sess.run(init)                        # run the variable-initializer op
    for epoch in range(training_epochs):  # one pass over the training set per epoch
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Log progress once every display_step epochs
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", "%04d" % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))
    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b))
    # Plot the training data and the fitted line
    plt.rcParams['font.sans-serif'] = ['SimHei']  # font that can render the Chinese characters in the titles
    plt.subplot(121)
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.title("TF之LiR: Original data")

    # Test samples
    test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
    print("Testing... (Mean square loss Comparison)")
    testing_cost = sess.run(tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
                            feed_dict={X: test_X, Y: test_Y})  # same half-MSE as the training cost above
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(training_cost - testing_cost))
    # Plot the testing data against the same fitted line
    plt.subplot(122)
    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.title("TF之LiR: Testing data")
    plt.show()
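
For a single feature, ordinary least squares also has a closed form, so the gradient-descent result can be sanity-checked without TensorFlow. This is a minimal illustrative sketch, not part of the original post; ols_fit is a hypothetical helper, applied here to the test arrays defined above.

import numpy

def ols_fit(x, y):
    # Closed-form least squares for y ≈ W*x + b:
    #   W = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))**2)
    #   b = mean(y) - W * mean(x)
    x_mean, y_mean = x.mean(), y.mean()
    W = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
    b = y_mean - W * x_mean
    return W, b

test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
W_cf, b_cf = ols_fit(test_X, test_Y)
print("closed-form W=", W_cf, "b=", b_cf)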
Training log from the script above (excerpt):
Epoch: 6300 cost= 0.076938324 W= 0.25199208 b= 0.8008495
……
Epoch: 10000 cost= 0.076990739 W= 0.24960834 b= 0.80136055
Optimization Finished!
Training cost= 0.07699074 W= 0.24960834 b= 0.80136055
Testing... (Mean square loss Comparison)
Testing cost= 0.07910849
Absolute mean square loss difference: 0.002117753
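
The script above uses TF 1.x APIs (tf.placeholder, tf.Session) that were removed in TensorFlow 2.x. Below is a minimal sketch of the same half-MSE fit in TF 2 eager style, using full-batch gradient descent rather than the per-sample updates above for brevity; the train_X/train_Y values are hypothetical stand-ins because the original training arrays are elided.

import numpy
import tensorflow as tf  # assumes TensorFlow 2.x

# Hypothetical toy data standing in for the elided training arrays
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168], dtype="float32")
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573], dtype="float32")
n_samples = train_X.shape[0]

W = tf.Variable(tf.random.normal([]), name="weight")
b = tf.Variable(tf.random.normal([]), name="bias")
opt = tf.keras.optimizers.SGD(learning_rate=0.01)

for epoch in range(1000):
    with tf.GradientTape() as tape:                # records ops for autodiff
        pred = W * train_X + b                     # full-batch prediction
        cost = tf.reduce_sum((pred - train_Y) ** 2) / (2 * n_samples)
    grads = tape.gradient(cost, [W, b])            # dJ/dW, dJ/db
    opt.apply_gradients(zip(grads, [W, b]))        # W <- W - lr * dJ/dW, etc.

print("W=", W.numpy(), "b=", b.numpy(), "cost=", cost.numpy())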