File "train.py", line 122, in main
model_update(model, step, G_loss, optG_fs2)
File "train.py", line 77, in model_update
loss = (loss / grad_acc_step).backward()
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd_init_.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Found dtype Long but expected Float
Hi @keonlee9420. This error occurs while the loss is being back-propagated. How can I solve it?
This is the dtype of the loss:
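For context, this error usually means the tensor the loss was computed from is an integer (Long) tensor, most often the target labels. Below is a minimal sketch of the usual cause and the usual fix; it is not the repository's code, and the tensor names are made up for illustration:

```python
import torch
import torch.nn as nn

# Regression-style losses (MSELoss, L1Loss, ...) expect floating-point inputs;
# integer (Long) targets are a common source of
# "Found dtype Long but expected Float" once backward() runs.
pred = torch.randn(4, requires_grad=True)   # torch.float32, part of the graph
target = torch.tensor([0, 1, 2, 3])         # torch.int64 (Long), e.g. raw labels

print(pred.dtype, target.dtype)             # torch.float32 torch.int64

# Casting the target to float keeps the loss itself in floating point,
# which is what autograd expects.
loss = nn.MSELoss()(pred, target.float())
print(loss.dtype)                           # torch.float32
loss.backward()
```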
Hi @keonlee9420, this problem has been solved!
When I debugged it, the parameters being passed in were already of type float, so I don't know why the error occurred.
I added loss = loss.type(torch.FloatTensor) before the backward call, and it now runs successfully.
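For anyone hitting the same thing, here is a minimal sketch of where such a cast could sit in a model_update-style helper. The function name comes from the traceback above, but the signature details, grad_acc_step handling, and optimizer step are assumptions for illustration, not the repository's actual code:

```python
def model_update(model, step, loss, optimizer, grad_acc_step=1):
    # backward() needs a floating-point loss; cast defensively if the loss
    # ended up as a Long tensor (e.g. built from integer targets).
    if not loss.is_floating_point():
        loss = loss.float()  # keeps the tensor on its current device

    (loss / grad_acc_step).backward()

    # Hypothetical gradient-accumulation step, shown only for completeness.
    if step % grad_acc_step == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Note that loss.type(torch.FloatTensor) also forces the tensor onto the CPU, whereas loss.float() keeps it on its current device; where possible, the cleaner fix is usually to cast the integer targets to float before the loss is computed, so the loss is floating point to begin with.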
File "train.py", line 122, in main
model_update(model, step, G_loss, optG_fs2)
File "train.py", line 77, in model_update
loss = (loss / grad_acc_step).backward()
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd_init_.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Found dtype Long but expected Float
Hi@keonlee9420. This problem occurs when the loss function is back-propagating, how can I solve it?
This is the dtype of loss
The text was updated successfully, but these errors were encountered: