We recommend using Conda to manage your environment. We use vLLM to accelerate inference. Run the following commands to set up your environment:
```bash
cd src
conda create -n mp python=3.9
conda activate mp
pip3 install torch==2.1.2 torchvision torchaudio
pip install -r requirements.txt
```
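As an optional sanity check (not part of the provided scripts), you can verify that PyTorch and vLLM import correctly and that CUDA is visible before running inference:

```bash
# Optional: confirm PyTorch, CUDA, and vLLM are available in the "mp" environment
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import vllm; print(vllm.__version__)"
```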
We provide a script for inference. Simply configure `MODEL_NAME_OR_PATH` and `DATA` in `./scripts/infer.sh` and run the following command:
```bash
bash ./scripts/infer.sh
```
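For reference, the relevant lines inside `./scripts/infer.sh` would typically look something like the following; the paths below are placeholders, so substitute your own checkpoint and dataset:

```bash
# Inside ./scripts/infer.sh -- example placeholder values only
MODEL_NAME_OR_PATH=/path/to/your/model   # local checkpoint or a Hugging Face model ID
DATA=/path/to/your/data                  # data the script should run inference on
```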