
Python/tensorflow - Is It Normal To Have All The Accuracy Values Of "1" In This Case?

I have the following binary file which consists of labels, filenames, and data (i.e. pixels): [array([2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, …

Solution 1:

The value of batch_size in your code is 1, so each time you run

sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals})

you examine only one picture. Then you have the following two lines:

correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))

So basically, correct_pred is just a single number (0 or 1), because it is based on one picture: if tf.argmax(model_op, 1) equals tf.argmax(y, 1) then correct_pred = 1; otherwise it equals 0.

accuracy then just equals this number, so its value is always either 0 or 1.
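
To see why, here is a minimal, self-contained sketch in the same TensorFlow 1.x style as the question (written with tf.compat.v1 so it also runs under TF 2); the 3-class shapes and the sample logits/labels are just assumptions for illustration, not taken from the original code:

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

model_op = tf.placeholder(tf.float32, [None, 3])   # logits for a batch (3 classes assumed)
y = tf.placeholder(tf.float32, [None, 3])          # one-hot labels

correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    # With a batch of size 1, correct_pred has a single element,
    # so reduce_mean can only ever return 0.0 or 1.0.
    logits = np.array([[0.1, 2.0, 0.3]])                 # predicts class 1
    labels = np.array([[0.0, 1.0, 0.0]])                 # true class is 1
    print(sess.run(accuracy, {model_op: logits, y: labels}))        # 1.0

    labels_wrong = np.array([[1.0, 0.0, 0.0]])           # true class is 0
    print(sess.run(accuracy, {model_op: logits, y: labels_wrong}))  # 0.0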

Response to Edit 1: Numerically, the values make sense. Your batch_size is 15, so the accuracy should be some integer multiple of 1/15 ≈ 0.0667, which indeed seems to be the case for all the values in your table. The reason it should be an integer multiple of 1/15 comes from these two lines:

correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))

The array correct_pred is just a 0-1 vector (because it is the result of tf.equal). accuracy is then the sum of the values in correct_pred divided by the batch size, which is 15.
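
As a quick numeric check (plain NumPy, with a made-up correctness vector for one batch of 15 examples, standing in for correct_pred after the cast to float32):

import numpy as np

# Hypothetical 0/1 correctness vector for a batch of 15 examples.
correct_pred = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1], dtype=np.float32)

accuracy = correct_pred.mean()   # sum of correct predictions / 15
print(accuracy)                  # 0.7333... = 11/15
print(accuracy * 15)             # 11.0 -> always a whole number, so accuracy is a multiple of 1/15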

Regarding the optimal batch size, it depends on many factors; you can read more about that in the discussion linked here, for example.
