I am using TensorFlow's DNNRegressor to train a neural network with the gradient descent algorithm. The training uses a very low learning rate (learning_rate=0.001), but at the end of training I observed that the optimization is not converging: the value of the cost function oscillates.

The code is:

    import tensorflow as tf

    regressor = tf.contrib.learn.DNNRegressor(
        feature_columns=feature_columns,
        optimizer=tf.train.GradientDescentOptimizer(learning_rate=learning_rate),
        hidden_units=[10, 20, 10],
        activation_fn=tf.nn.relu,
        model_dir="/home/edwin/workspace/tensorFlowPy/eval")
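The training itself is invoked roughly like this (a sketch only; x_train, y_train, the step count, and the batch size are my placeholders, not taken from the code above):

    # Sketch of the training call; x_train/y_train are assumed NumPy arrays,
    # and steps/batch_size are illustrative values.
    regressor.fit(x=x_train, y=y_train, steps=50000, batch_size=32)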

The output is:

    INFO:tensorflow:Step 46701: loss = 0.00337208
    INFO:tensorflow:Step 46801: loss = 0.00339828
    INFO:tensorflow:Step 46901: loss = 0.000426068
    INFO:tensorflow:Step 47001: loss = 0.000130174
    INFO:tensorflow:Step 47101: loss = 2.59336e-05
    INFO:tensorflow:Step 47201: loss = 0.0038842
    INFO:tensorflow:Step 47301: loss = 0.000187496
    INFO:tensorflow:Step 47401: loss = 0.00407278
    INFO:tensorflow:Step 47501: loss = 0.00102294
    INFO:tensorflow:Step 47601: loss = 0.00379042
    INFO:tensorflow:Step 47701: loss = 0.000140399
    INFO:tensorflow:Step 47801: loss = 0.00254074
    INFO:tensorflow:Step 47901: loss = 0.00409194

With the gradient descent algorithm, I would expect that, in the worst case, the training would settle at the best value of the cost function found so far, not keep oscillating.
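To see whether the oscillation is just per-batch noise rather than a failure to converge, I tried smoothing the logged losses with a moving average. A minimal sketch using the values from the output above (the window size of 5 is an arbitrary choice on my part):

    import numpy as np

    # Loss values copied from the log above, in step order.
    losses = np.array([0.00337208, 0.00339828, 0.000426068, 0.000130174,
                       2.59336e-05, 0.0038842, 0.000187496, 0.00407278,
                       0.00102294, 0.00379042, 0.000140399, 0.00254074,
                       0.00409194])

    # Simple moving average over a window of 5 steps.
    window = 5
    smoothed = np.convolve(losses, np.ones(window) / window, mode="valid")
    print(smoothed)

The smoothed values stay small and roughly flat rather than growing, so the loss is not diverging, but I still don't understand why it doesn't settle.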

Thank you very much for your comments.
