
[BUG] Device check in DQN always takes first branch #1389

Closed
matteobettini opened this issue Jul 17, 2023 · 1 comment · Fixed by #1395
Labels
bug Something isn't working

Comments

@matteobettini (Contributor)

In the DQN `forward` we have this code to check the device:

  if self.device is not None:
      warnings.warn(
          "The use of a device for the objective function will soon be deprecated",
          category=DeprecationWarning,
      )
      device = self.device
  else:
      device = tensordict.device

The problem is that `loss.device` is defined as

  @property
  def device(self) -> torch.device:
      for p in self.parameters():
          return p.device
      return torch.device("cpu")

which will never return None, so the first branch is always taken and the warning always triggered.
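To see why the `else` branch is unreachable, here is a minimal, self-contained sketch of the pattern (the class name `Loss` is hypothetical; the property body mirrors the snippet above):

```python
import torch
from torch import nn


class Loss(nn.Module):
    """Minimal stand-in for the DQN loss module (hypothetical name)."""

    def __init__(self):
        super().__init__()
        # Any module with at least one parameter will do.
        self.net = nn.Linear(4, 2)

    @property
    def device(self) -> torch.device:
        # Returns the device of the first parameter, or CPU as a fallback.
        # Neither path ever returns None.
        for p in self.parameters():
            return p.device
        return torch.device("cpu")


loss = Loss()
print(loss.device is None)  # False: `if self.device is not None` always fires
```

Since the property either yields a parameter's device or falls back to `torch.device("cpu")`, the `device = tensordict.device` branch can never run.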

So this logic has never made any sense.

How should we sort this out?

@matteobettini matteobettini added the bug Something isn't working label Jul 17, 2023
@vmoens (Contributor)

vmoens commented Jul 18, 2023

> So this logic has never made any sense.

Not sure about the "never", but I agree that we should fix this.
We should simply not check the device at all.
If you give a tensor on the wrong device to torch.nn.Linear, Linear does not care; it is only when the op is executed that things break. We should just remove all device checks from the losses.
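A quick illustration of that point (a sketch using the `meta` device to stand in for a mismatched device, so it runs without a GPU): constructing the module and the input succeeds, and the mismatch only surfaces when the op actually executes.

```python
import torch
from torch import nn

lin = nn.Linear(3, 3)                   # parameters live on CPU
x = torch.randn(2, 3, device="meta")    # a "wrong" device, no GPU required

# No check happens at construction or assignment time.
try:
    lin(x)                              # the mismatch is only caught here
except Exception as exc:
    print(f"failed only at execution: {type(exc).__name__}")
```

This is why an up-front device check in the loss adds nothing: the underlying ops already raise on a genuine mismatch.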
