Update the example model for metatensor #315
Conversation
Force-pushed from b17e4f6 to 267fe17.
@ceriottm following metatensor/metatensor#539, should I also add the ability to specify dtype/device in the i-PI interface? If yes, what's the best way to do this? Adding a new command-line parameter?
Isn't this specified by the model? i-PI expects double precision, and I think there are so many overheads that it wouldn't be worth having the option to run in single precision. So the driver should convert the output to double precision before returning it; I think we're already doing that for PET and MACE. Did I miss something?
So if the device is not provided, it will be picked by the ASE calculator according to what the model prefers and what's available. The dtype will default to 64-bit floats if not specified, but that would break in some cases (e.g. when using the MPS device, only float32 is supported). Regardless of how the model is running, the data returned by the driver can always be converted back to 64-bit, CPU data.
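For illustration, here is a minimal sketch of that final conversion step, assuming the driver gets PyTorch tensors back from the calculator (the names `energy` and `forces` and the function itself are placeholders, not the actual i-PI driver code):

```python
import torch


def to_ipi_results(energy: torch.Tensor, forces: torch.Tensor):
    # Whatever dtype/device the model ran on (e.g. float32 on MPS),
    # convert back to float64 on the CPU before handing data to i-PI.
    energy = energy.detach().to(device="cpu", dtype=torch.float64)
    forces = forces.detach().to(device="cpu", dtype=torch.float64)
    return energy.item(), forces.numpy()
```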
Force-pushed from 267fe17 to 8b5314c.
@ceriottm this should be ready for review now!
Force-pushed from 8b5314c to 1eb3244.
OK, quickly tested, couldn't get it to run.
I think the README needs instructions to install the necessary dependencies; if they are there, I couldn't see them.
You need to install the version of metatensor-torch that is on PyPI, i.e. just `pip install metatensor-torch`.
Force-pushed from a1673fb to 9a4ad3b.
Force-pushed from 9a4ad3b to 4e77eda.
Should I look at this again? Does the pip version of metatensor-torch already include the faster NL?
No, the new NL is only on master. This can wait a bit more.
Force-pushed from 3f802c2 to 618bef0.
@Luthaf can this be updated to the new NL so I can merge it?
This should now be updated to use metatensor-torch v0.5, which has the improved NL!
All good. Still think 0.1s/step for 32 LJ atoms is not the best advertisement for metatensor, but we are going to do some repo madness and it's best to have this merged now.
Well, doing LJ with PyTorch is a bit silly anyway, but it is a simple test for the whole pipeline. We could do something nicer with a proper ML model later.
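For context, this is roughly the kind of toy Lennard-Jones energy one can write in PyTorch; a minimal all-pairs sketch, not the actual example model in this PR (parameter names `epsilon`/`sigma` are illustrative, and a real model would use a cutoff and a neighbor list, which is exactly why the faster NL matters):

```python
import torch


def lj_energy(positions: torch.Tensor, epsilon: float = 1.0, sigma: float = 1.0) -> torch.Tensor:
    # All-pairs Lennard-Jones energy for a (N, 3) tensor of positions,
    # without cutoff or periodic boundary conditions.
    n_atoms = positions.shape[0]
    distances = torch.cdist(positions, positions)
    i, j = torch.triu_indices(n_atoms, n_atoms, offset=1)
    r = distances[i, j]
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6).sum()
```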
This updates the example model with the latest changes in metatensor.
It also adds a print to the driver, making the output similar to