Converting the model to AWS Inferentia hardware using optimum-cli #90
I'm trying to deploy the model on AWS Inferentia (an inf1 instance), but I can't seem to get the optimum-cli Neuron export tooling to work. Has anyone had a similar experience? These are the steps I took:
- Launched an inf1 instance
- Installed Python 3.9
- Installed optimum-cli
- Ran the export command:
optimum-cli export neuron --model /root/multilingual_debiased-0b549669.ckpt --task token-classification --batch_size 30 --sequence_length 512 --auto_cast matmul --auto_cast_type bf16 multilingual_debiased-0b549669
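
For what it's worth, my understanding is that optimum-cli's --model argument expects a Hugging Face model id or a save_pretrained() directory rather than a raw .ckpt file, so the checkpoint may need to be repacked first. Below is a minimal, unverified sketch of that repacking step; the base architecture (xlm-roberta-base), the Lightning state_dict nesting, the "model." key prefix, and the label count are all assumptions about this checkpoint, not confirmed details.

```python
# Minimal sketch, not a verified fix: repack a PyTorch Lightning-style
# checkpoint into a save_pretrained() directory that optimum-cli can load.
# The base model, state-dict layout, key prefix, and label count below are
# assumptions -- adjust them to match the actual checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "xlm-roberta-base"  # assumed base architecture
NUM_LABELS = 7             # assumed number of output labels

ckpt = torch.load("/root/multilingual_debiased-0b549669.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # Lightning usually nests weights here
# Strip a hypothetical "model." prefix added by a Lightning wrapper module.
state_dict = {k.removeprefix("model."): v for k, v in state_dict.items()}

model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=NUM_LABELS)
model.load_state_dict(state_dict, strict=False)  # strict=False: head names may differ

out_dir = "multilingual_debiased_hf"
model.save_pretrained(out_dir)  # writes config.json plus the weights
AutoTokenizer.from_pretrained(BASE).save_pretrained(out_dir)
```

With a directory like that in place, the export command above would point --model at multilingual_debiased_hf instead of the raw .ckpt path.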