
Regarding Setting The Global Step Information In Mini-batch Optimization

In the MNIST example, the optimizer is set up as follows:

```python
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
```
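For context, the lines that follow in that example wire `batch` into the learning-rate decay and the optimizer. This is a sketch from memory of the MNIST example; `BATCH_SIZE`, `train_size`, and `loss` are defined elsewhere in that file:

```python
# Decay the learning rate once per epoch, using an exponential schedule
# starting at 0.01.
learning_rate = tf.train.exponential_decay(
    0.01,                # base learning rate
    batch * BATCH_SIZE,  # current index into the dataset
    train_size,          # decay step
    0.95,                # decay rate
    staircase=True)
# Use simple momentum for the optimization. Passing `global_step=batch`
# is what makes the optimizer increment `batch` after each update.
optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(
    loss, global_step=batch)
```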

Solution 1:

From the code you have linked, `batch` is the global step: its value is updated by the optimizer, and the learning-rate node takes it as input.
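A minimal, self-contained sketch of that interaction (TF 1.x API, matching the example; the quadratic loss and the decay parameters here are placeholders, not the values from the MNIST code):

```python
import tensorflow as tf

# The step counter; `trainable=False` keeps it out of gradient updates.
batch = tf.Variable(0, trainable=False)

# The decayed learning rate reads the step counter as an input.
learning_rate = tf.train.exponential_decay(
    0.01,   # base learning rate
    batch,  # global step
    100,    # decay steps (placeholder)
    0.95,   # decay rate
    staircase=True)

# A toy loss to optimize.
x = tf.Variable(5.0)
loss = tf.square(x)

# minimize() applies the gradients AND increments `batch` by one.
train_op = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(
    loss, global_step=batch)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
        print(sess.run(batch))  # prints 1, 2, 3: the optimizer updates it
```

Note that nothing updates the schedule by hand: each time `learning_rate` is evaluated, it re-reads the current value of `batch`.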

The naming may be an issue: `batch` merely means the index of the current batch used for training (each of size `BATCH_SIZE`). A better name might have been `step` or even `global_step`.
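As a sketch, that rename is just a rename; the `loss` here is a hypothetical stand-in, and `trainable=False` is an addition that keeps the counter out of the gradient computation:

```python
import tensorflow as tf

# Hypothetical stand-in loss; only the step bookkeeping matters here.
weights = tf.Variable(1.0)
loss = tf.square(weights)

# The same counter the example calls `batch`, under a clearer name.
global_step = tf.Variable(0, trainable=False, name='global_step')
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
    loss, global_step=global_step)
```

Later TF 1.x releases also provide `tf.train.get_or_create_global_step()`, which creates and tracks exactly such a counter for you.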

Most of the global_step code seems to be in a single source file. It is quite short, and reading it is perhaps a good way to see how the pieces fit together.
