Unhelpful error message when setting invalid parameter with set_params #925

Closed
githubnemo opened this issue Dec 20, 2022 · 1 comment

Comments

@githubnemo
Collaborator

Minimal example:

net.set_params(this_does_not_exist=True)

Expected result:

Invalid parameter 'this_does_not_exist' for estimator ...
Valid parameters are: ['module', 'criterion', ...]

Actual result:

Invalid parameter 'this_does_not_exist' for estimator ...
Valid parameters are: <generator object NeuralNet._get_param_names.<locals>.<genexpr> at 0x7f1cf9d1da10>.

The exception-raising code in sklearn (version 1.2.0) looks like this:

    203             if key not in valid_params:
    204                 local_valid_params = self._get_param_names()
--> 205                 raise ValueError(
    206                     f"Invalid parameter {key!r} for estimator {self}. "
    207                     f"Valid parameters are: {local_valid_params!r}."

The fix is probably just to convert the generator expression in `_get_param_names` to a list comprehension.
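
A minimal sketch of that change (the method body here is illustrative and assumes `_get_param_names` collects the net's attribute names that don't end in an underscore; only the generator-to-list switch matters):

```python
class NeuralNet:
    # Sketch: only the relevant method is shown; the real attribute filtering
    # in skorch may differ, the point is just generator -> list.
    #
    # Before (roughly): a generator expression, whose repr leaks into
    # sklearn's error message:
    #
    #     return (k for k in self.__dict__ if not k.endswith('_'))
    #
    # After: return a list, so the message can show the actual parameter names.
    def _get_param_names(self):
        return [k for k in self.__dict__ if not k.endswith('_')]
```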

githubnemo pushed a commit to githubnemo/skorch that referenced this issue Dec 20, 2022
Changing the `_get_param_names` method to return a list instead of a
generator to fix the exception error message when passing unknown
parameters to `set_params`. Before, the error message just included
the generator `repr`-string as the list of possible parameters.
Now the string contains the possible parameter names instead.
BenjaminBossan pushed a commit that referenced this issue May 8, 2023
@BenjaminBossan
Collaborator

Closed via #926

BenjaminBossan added a commit that referenced this issue May 17, 2023
Preparation for release of version 0.13.0

Release text:

The new skorch release is here and it has some changes that will be exciting for some users.

- First of all, you may have heard of the [PyTorch 2.0
  release](https://pytorch.org/get-started/pytorch-2.0/), which includes the
  option to compile the PyTorch module for better runtime performance. This
  skorch release allows you to pass `compile=True` when initializing the net to
  enable compilation (see the sketch after this list).
- Support for training on multiple GPUs with the help of the
  [`accelerate`](https://huggingface.co/docs/accelerate/index) package has been
  improved by fixing some bugs and providing a dedicated [history
  class](https://skorch.readthedocs.io/en/latest/user/history.html#distributed-history).
  Our documentation contains more information on [what to consider when training
  on multiple
  GPUs](https://skorch.readthedocs.io/en/latest/user/huggingface.html#caution-when-using-a-multi-gpu-setup).
- If you have ever been frustrated with your neural net not training properly,
  you know how hard it can be to discover the underlying issue. Using the new
  [`SkorchDoctor`](https://skorch.readthedocs.io/en/latest/helper.html#skorch.helper.SkorchDoctor)
  class will simplify the diagnosis. Take a look at the accompanying
  [notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb).
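
For illustration, here is a minimal sketch of how compilation is enabled. The module and its dimensions are made up for the example; `compile=True` and the dunder-style `compile__dynamic=True` option are the pieces described in the changelog below, and PyTorch >= 2.0 is assumed to be installed.

```python
import torch.nn as nn
from skorch import NeuralNetClassifier

class MyModule(nn.Module):
    """Hypothetical classifier module, used only for illustration."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(20, 10),
            nn.ReLU(),
            nn.Linear(10, 2),
        )

    def forward(self, X):
        return self.layers(X)

net = NeuralNetClassifier(
    MyModule,
    criterion=nn.CrossEntropyLoss,
    compile=True,            # compile the module via torch.compile (requires PyTorch >= 2.0)
    compile__dynamic=True,   # further torch.compile arguments use the dunder notation
)
```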

Apart from that, a few bugs have been fixed and the included notebooks have been
updated to properly install requirements on Google Colab.

We are grateful for external contributors; many thanks to:

- Kshiteej K (kshitij12345)
- Muhammad Abdullah (abdulasiraj)
- Royi (RoyiAvital)
- Sawradip Saha (sawradip)
- y10ab1 (y10ab1)

Find below the list of all changes since v0.12.1:

### Added
- Add support for compiled PyTorch modules using the `torch.compile` function,
  introduced in the [PyTorch 2.0
  release](https://pytorch.org/get-started/pytorch-2.0/), which can greatly
  improve performance on new GPU architectures; to use it, initialize your net
  with the `compile=True` argument. Further compilation arguments can be
  specified using the dunder notation, e.g. `compile__dynamic=True`
- Add a class
  [`DistributedHistory`](https://skorch.readthedocs.io/en/latest/history.html#skorch.history.DistributedHistory)
  which should be used when training in a multi-GPU setting (#955)
- `SkorchDoctor`: A helper class that assists in understanding and debugging
  neural net training; see [this
  notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb)
  and the minimal usage sketch after this list (#912)
- When using `AccelerateMixin`, it is now possible to prevent unwrapping of the
  modules by setting `unwrap_after_train=False` (#963)
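
For illustration, a minimal sketch of how `SkorchDoctor` is meant to be used; the module and random data are made up just to keep the snippet self-contained, and inspecting the recorded diagnostics is covered in the linked notebook rather than shown here.

```python
import numpy as np
import torch.nn as nn
from skorch import NeuralNetClassifier
from skorch.helper import SkorchDoctor

class MyModule(nn.Module):
    """Hypothetical module, only to keep the example self-contained."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))

    def forward(self, X):
        return self.layers(X)

# Random toy data for the sketch.
X = np.random.rand(200, 20).astype(np.float32)
y = np.random.randint(0, 2, size=200).astype(np.int64)

net = NeuralNetClassifier(MyModule, criterion=nn.CrossEntropyLoss, max_epochs=3)

# Wrap the (not yet fitted) net; the doctor records diagnostic information
# while the net is being fit.
doctor = SkorchDoctor(net)

# Fit on a small amount of data -- the goal is diagnosis, not full training.
doctor.fit(X, y)
```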

### Fixed
- Fixed install command to work with recent changes in Google Colab (#928)
- Fixed a couple of bugs related to using non-default modules and criteria
  (#927)
- Fixed a bug when using `AccelerateMixin` in a multi-GPU setup (#947)
- `_get_param_names` returns a list instead of a generator so that subsequent
  error messages return useful information instead of a generator `repr` string
  (#925)
- Fixed a bug that caused modules to not be sufficiently unwrapped at the end of
  training when using `AccelerateMixin`, which could prevent them from being
  pickled (#963)