This is a utility package and CLI for building MLServer Docker images from MLflow models stored on a remote tracking server.
Currently, the S3 backend is the only officially supported backend, but any backend should work as long as you have the required dependencies installed. If you're interested in other backends, let me know.
I recommend using pipx, but you can use any method you like, such as poetry or plain pip.
pipx install mlflow-mlserver-docker
Configure access to the MLflow tracking server and the S3 backend using environment variables, following their respective documentation.
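In practice this usually means exporting MLFLOW_TRACKING_URI for the tracking server and the standard AWS credential variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, plus MLFLOW_S3_ENDPOINT_URL for non-AWS S3 stores) for the artifact backend; the exact set depends on your deployment. A quick, hedged way to sanity-check the configuration from Python before building:

```python
# Hedged sanity check, assuming MLFLOW_TRACKING_URI, AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY (plus MLFLOW_S3_ENDPOINT_URL for non-AWS S3 stores)
# are already exported in your shell.
import mlflow

client = mlflow.tracking.MlflowClient()  # picks up MLFLOW_TRACKING_URI from the environment
for experiment in client.search_experiments():  # MLflow 2.x; use list_experiments() on 1.x
    print(experiment.name)
```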
mlflow-mlserver-docker build runs:/fg8934ug54eg9hrdegu904/model --tag myimage:mytag
Any MLflow artifact URI should work, as long as the model uses the MLflow packaging format and contains a conda.yaml. I have only tested it with scikit-learn models logged with mlflow.autolog(), as sketched below.
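For context, here is a minimal sketch of how such a model might be produced; the dataset and estimator are just placeholders, and it assumes the tracking and S3 environment from above is configured:

```python
# Hedged sketch of producing the kind of model this tool has been tested with:
# a scikit-learn model logged via mlflow.autolog(). The dataset and estimator
# are placeholders; the tracking/S3 environment is assumed to be configured.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

mlflow.autolog()  # auto-logs params, metrics and the fitted model for scikit-learn

X, y = load_iris(return_X_y=True)
with mlflow.start_run() as run:
    RandomForestClassifier().fit(X, y)

# Autologged scikit-learn models land under the "model" artifact path by default,
# so this is the kind of URI you would pass to the build command above.
print(f"runs:/{run.info.run_id}/model")
```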
poetry install
Make sure docker is running.
docker run -it -p 8080:8080 myimage:mytag
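The container should serve MLServer's V2 inference API on port 8080. Below is a hedged example request; the model name ("model") and the input name, shape, and datatype are assumptions that must match the model actually baked into the image (check the container logs for the real name):

```python
# Hedged sketch of querying the served model over MLServer's V2 inference protocol.
# The model name ("model") and the input name/shape/datatype are assumptions and
# must match the model baked into the image.
import requests

BASE = "http://localhost:8080"

# Readiness probe defined by the V2 protocol
assert requests.get(f"{BASE}/v2/health/ready").status_code == 200

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP64",
            "data": [[5.1, 3.5, 1.4, 0.2]],
        }
    ]
}
response = requests.post(f"{BASE}/v2/models/model/infer", json=payload)
print(response.json())
```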