
CNN in PyTorch: "Expected 4-dimensional input for 4-dimensional weight [32, 1, 5, 5], but got 3-dimensional input of size [16, 64, 64] instead"

I am new to PyTorch. I am trying to use the Chinese MNIST dataset to train the neural network shown in the code below. Is this a problem with the neural network's input, or something else?

Solution 1:

Your training images are greyscale images. That is, they only have one channel (as opposed to the three RGB color channels in color images).
It seems like your Dataset (implicitly) "squeezes" this singleton channel dimension: instead of a batch of shape BxCxHxW = 16x1x64x64, you end up with a batch of shape 16x64x64.
Try:

# ...
batch_inputs = data[0].to(device).type(torch.float)
batch_labels = data[1].to(device)
batch_inputs = batch_inputs[:, None, ...]  # explicitly add the singleton channel dimension
# Run the forward pass
# ...
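To see why the fix works, here is a minimal, self-contained sketch that reproduces the mismatch and the repair. It assumes a `Conv2d` layer consistent with the weight shape `[32, 1, 5, 5]` from the error message (1 input channel, 32 output channels, 5x5 kernel); the exact network from the question is not shown, so this is an illustration, not the original model:

```python
import torch
import torch.nn as nn

# A conv layer whose weight has shape [32, 1, 5, 5], matching the error message
conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5)

# A batch of 16 greyscale 64x64 images, missing the channel dimension
batch = torch.rand(16, 64, 64)
# conv(batch) here would raise:
#   "Expected 4-dimensional input for 4-dimensional weight [32, 1, 5, 5],
#    but got 3-dimensional input of size [16, 64, 64] instead"

# Insert the singleton channel dimension -> shape (16, 1, 64, 64)
batch = batch[:, None, ...]  # batch.unsqueeze(1) is equivalent
out = conv(batch)
print(out.shape)  # torch.Size([16, 32, 60, 60]) with a 5x5 kernel and no padding
```

Either `batch[:, None, ...]` or `batch.unsqueeze(1)` works; both return a view without copying the data.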
