
Tensorflow Logits And Labels Must Be Same Size

I'm quite new to TensorFlow and Python, and I'm currently trying to modify the MNIST for Experts tutorial to work with 240x320x3 images. I have 2 .py scripts: tfrecord_reeader.py import tensorfl

Solution 1:

Most likely your labels are single integer values rather than one-hot vectors, so your labelBatch is a vector of size [50] containing single numbers like "1" or "4". When you then reshape it with train_labels = np.reshape(train_labels, (-1, NUM_CLASSES)), you change its shape to [10, 5], which no longer matches your logits, whose shape is [50, NUM_CLASSES] (one row per example), hence the size error.
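To see the shape change concretely, here is a minimal sketch, assuming a batch of 50 integer labels and NUM_CLASSES = 5:

import numpy as np

NUM_CLASSES = 5
train_labels = np.array([1, 4, 0, 3, 2] * 10)  # shape (50,): one class ID per example
reshaped = np.reshape(train_labels, (-1, NUM_CLASSES))
print(reshaped.shape)  # (10, 5): ten "rows" of five labels each, no longer one row per example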

The tf.nn.softmax_cross_entropy_with_logits function expects labels to be "one-hot" encodings (this means that a label of 3 translates into a vector of size 5 with a 1 in position 3 and zeros elsewhere). You can achieve this using the tf.one_hot function, but an easier way is to use the tf.nn.sparse_softmax_cross_entropy_with_logits function instead, which is designed to work with these single-valued integer labels. To do that, you'll need to change these lines:

y_ = tf.placeholder(tf.int32, [None])  # Desired output: integer class IDs (the sparse op requires int32/int64 labels, not float32)

cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))

And get rid of the train_labels = np.reshape(train_labels, (-1, NUM_CLASSES)) line.
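If you'd rather keep the dense tf.nn.softmax_cross_entropy_with_logits, the other option is to convert the integer labels to one-hot vectors inside the graph. A minimal sketch, assuming NUM_CLASSES = 5:

y_ = tf.placeholder(tf.int32, [None])          # integer class IDs, e.g. 3
y_one_hot = tf.one_hot(y_, depth=NUM_CLASSES)  # e.g. 3 -> [0., 0., 0., 1., 0.]
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_one_hot, logits=y_conv))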

(By the way, you don't actually need to use placeholders when reading data in this way: you can use the output tensors from the input pipeline directly.)
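For instance (a sketch only: build_network and imageBatch are hypothetical stand-ins for your model function and image batch tensor, while labelBatch is the label tensor mentioned above):

# Wire the queue outputs straight into the graph instead of feeding
# NumPy arrays through placeholders:
y_conv = build_network(imageBatch)  # hypothetical model function applied to the batch tensor
cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labelBatch, logits=y_conv))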
