Model Saving & Reloading #84
Hi Marius. Yes, it's easy to do that as long as you know basic PyTorch usage. For saving and loading models, here's a useful link. In the context of neurodiffeq, it looks like this:
my_nets = [MyNetwork(...), MyNetwork(...), ...]
solver = Solver1D(
    ...,  # equations, conditions, domain, etc.
    nets=my_nets,
)
solver.fit(max_epochs=xxx, ...)

# Saving: collect the trained networks and store their state dicts
my_nets = solver.nets  # you can skip this step if you still have access to `my_nets` created earlier
torch.save({f'net_{i}': net.state_dict() for i, net in enumerate(my_nets)}, YOUR_MODEL_PATH)

# Reloading: rebuild networks with the same architecture, then load the saved weights
loaded_nets = [MyNetwork(...), MyNetwork(...), ...]
checkpoint = torch.load(YOUR_MODEL_PATH)
for i, net in enumerate(loaded_nets):
    net.load_state_dict(checkpoint[f'net_{i}'])
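To continue training from that non-random state, one option is to build a new solver around the reloaded networks and call fit again. This is only a minimal sketch: it assumes you pass the same problem definition (equations, conditions, domain) that you used originally, with ... standing in for those arguments.

# Resume training from the loaded weights instead of a random initialization
new_solver = Solver1D(
    ...,               # same equations, conditions, and domain as in the original run
    nets=loaded_nets,  # networks already carrying the loaded state dicts
)
new_solver.fit(max_epochs=xxx, ...)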
That sounds great! And it is possible to use both the Adam optimiser and L-BFGS/L-BFGS-B, right?
Most optimizers are currently supported, except LBFGS, which is a little tricky (see #83). Luckily, a solution seems to have been proposed just now, though we still need to run the tests. I'm not familiar with L-BFGS-B, but it appears that this optimizer has not been implemented in PyTorch (see here). So currently, you can't use L-BFGS-B without implementing it yourself.
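As a side note on switching optimizers, here is a rough sketch of training with Adam explicitly. It assumes Solver1D accepts a pre-built torch.optim optimizer via an optimizer keyword; please double-check the argument name against the neurodiffeq documentation for your version.

import itertools
import torch

my_nets = [MyNetwork(...), MyNetwork(...), ...]

# Build one Adam optimizer over the parameters of all networks
adam = torch.optim.Adam(
    itertools.chain.from_iterable(net.parameters() for net in my_nets),
    lr=1e-3,
)

solver = Solver1D(
    ...,             # equations, conditions, and domain as usual
    nets=my_nets,
    optimizer=adam,  # assumed keyword; verify against your neurodiffeq version
)
solver.fit(max_epochs=xxx, ...)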
Hi,
I am studying how transfer learning can enhance the training of physics-informed neural networks. NeuroDiffEq sparked my interest, and I was wondering whether it is possible to
1. save a trained model, i.e. the parameters of the network and its architecture, and
2. reload the saved model and continue training from that non-random state.