
Update the example model for metatensor #315

Merged: 2 commits into i-pi:master, May 22, 2024

Conversation

Luthaf (Contributor) commented Feb 27, 2024

This updates the example model with the latest changes in metatensor.

It also adds a print statement to the driver, so the output looks like:

$ i-pi-py_driver -a metatensor -u -m metatensor -o nickel.xyz,nickel-lj.pt
This is the Nickel Lennard-Jones model
======================================

A basic implementation of Lennard-Jones using metatensor, to be used for
testing the metatensor integration of various simulation engines.

Model references
----------------

Please cite the following references when using this model:
- about the implementation of this model:
  * https://github.com/lab-cosmo/metatensor

Luthaf marked this pull request as ready for review on March 4, 2024 14:30
Luthaf requested a review from ceriottm on March 4, 2024 14:30
Luthaf (Contributor, Author) commented Mar 5, 2024

@ceriottm following metatensor/metatensor#539, should I also add the ability to specify dtype/device in the i-PI interface? If yes, what's the best way to do this? Adding a new command line parameter?
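
For concreteness, one shape this could take — purely a hypothetical sketch, not the actual i-PI CLI — is to extend the existing comma-separated -o argument with optional key=value pairs and parse them in the driver:

```python
# Hypothetical sketch only: parse optional key=value settings appended to
# the existing comma-separated "-o" string, e.g.
#   -o nickel.xyz,nickel-lj.pt,device=cuda,dtype=float32

def parse_driver_options(options: str):
    template, model, *extra = options.split(",")
    settings = {"device": None, "dtype": None}
    for item in extra:
        key, _, value = item.partition("=")
        if key not in settings:
            raise ValueError(f"unknown driver option: {key!r}")
        settings[key] = value
    return template, model, settings
```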

ceriottm (Contributor) commented Mar 6, 2024

Isn't this specified by the model? i-PI expects double precision, and I think there are so many other overheads that it wouldn't be worth having the option to run in single precision. So the driver should convert the output to double precision before returning it - I think we're doing that already for PET and MACE. Did I miss something?
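
A minimal sketch of that conversion step, assuming the model outputs are torch tensors (the helper name and signature are illustrative, not the actual driver code):

```python
import torch

def to_ipi_precision(energy: torch.Tensor, forces: torch.Tensor):
    """Convert model outputs to the float64 CPU values i-PI expects.

    Illustrative helper, not the actual driver code: whatever dtype and
    device the model ran on, the returned data is always float64 on CPU.
    """
    energy = energy.detach().to(device="cpu", dtype=torch.float64)
    forces = forces.detach().to(device="cpu", dtype=torch.float64)
    return energy.item(), forces.numpy()
```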

Luthaf (Contributor, Author) commented Mar 6, 2024

So if the device is not provided, it will be picked by the ASE calculator according to what the model prefers and what's available.

The dtype will default to 64-bit floats if not specified, but that would break in some cases (e.g. when using the MPS device, only float32 is supported).


Regardless of how the model is running, the data returned by the driver can always be converted back to 64-bit, CPU data.
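
A sketch of the selection logic described above, under the assumption that the real behavior lives in the metatensor ASE calculator (the function name and fallback order here are illustrative, and the actual logic also consults the model's declared preferences):

```python
import torch

def pick_device_and_dtype(requested=None):
    """Illustrative sketch of device/dtype selection, not the actual
    calculator code: honor an explicit request, otherwise pick the best
    available device."""
    if requested is not None:
        device = torch.device(requested)
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # Apple's MPS backend only supports float32, so fall back from the
    # float64 default when running there.
    dtype = torch.float32 if device.type == "mps" else torch.float64
    return device, dtype
```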

Luthaf (Contributor, Author) commented Apr 15, 2024

@ceriottm this should be ready for review now!

ceriottm (Contributor) commented:

OK, I quickly tested this but couldn't get it to run:

ImportError: this code is only compatible with metatensor-torch v0.4.x, found version v0.5.0.dev2 at '/home/michele/local/lib/python3.10/site-packages/metatensor/torch/__init__.py'

I think the README needs instructions to install the necessary dependencies - if they are there, I couldn't see them.

Luthaf (Contributor, Author) commented Apr 17, 2024

> this code is only compatible with metatensor-torch v0.4.x, found version v0.5.0.dev2

You need to install the version of metatensor-torch that is on PyPI, i.e. just pip install metatensor[torch]. I'll add that to the README.
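
For the README, something like the following would do; the second line is just a sanity check and assumes metatensor-torch exposes __version__:

```sh
pip install "metatensor[torch]"
# sanity check (assumes the package exposes __version__):
python -c "import metatensor.torch; print(metatensor.torch.__version__)"
```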

ceriottm (Contributor) commented:

Should I look at this again? Does the pip version of metatensor-torch already include the faster neighbor list (NL)?

Luthaf (Contributor, Author) commented Apr 29, 2024

No, the new NL is only on master. This can wait a bit more.

ceriottm force-pushed the update-metatensor branch 2 times, most recently from 3f802c2 to 618bef0 on May 15, 2024 18:33
ceriottm (Contributor) commented:

@Luthaf can this be updated to the new NL so I can merge it?

Luthaf (Contributor, Author) commented May 16, 2024

This should now be updated to use metatensor-torch v0.5, which has the improved NL!

ceriottm (Contributor) left a review comment:


All good. Still think 0.1s/step for 32 LJ atoms is not the best advertisement for metatensor, but we are going to do some repo madness and it's best to have this merged now.

ceriottm merged commit 68fb0b3 into i-pi:master on May 22, 2024
4 checks passed
ceriottm deleted the update-metatensor branch on May 22, 2024 10:16
Luthaf (Contributor, Author) commented May 22, 2024

> Still think 0.1s/step for 32 LJ atoms is not the best advertisement for metatensor

Well, doing LJ with PyTorch is a bit silly anyway, but it is a simple test for the whole pipeline. We could do something nicer with a proper ML model later.
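
For reference, the pair term the example evaluates is just the standard 12-6 form; a minimal torch sketch, not the actual example model, which wraps this kind of computation in metatensor's neighbor-list and TensorMap machinery:

```python
import torch

def lj_pair_energy(r: torch.Tensor, sigma: float, epsilon: float) -> torch.Tensor:
    """Standard 12-6 Lennard-Jones energy summed over pair distances r:
    E = sum over pairs of 4*eps*((sigma/r)**12 - (sigma/r)**6).

    Minimal illustration only, not the metatensor example model.
    """
    sr6 = (sigma / r) ** 6
    return (4.0 * epsilon * (sr6 * sr6 - sr6)).sum()
```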
