```bash
helm repo add qdrant https://qdrant.github.io/qdrant-helm
helm repo update
helm upgrade -i your-qdrant-installation-name qdrant/qdrant
```
This chart installs and bootstraps a Qdrant instance.
- Kubernetes v1.24+ (required for gRPC probes)
- Helm
- PV provisioner support in the underlying infrastructure
You can install the chart from source via:

```bash
helm upgrade -i your-qdrant-installation-name charts/qdrant
```
Uninstall via:

```bash
helm uninstall your-qdrant-installation-name
```
Delete the persistent volume claims with:

```bash
kubectl delete pvc -l app.kubernetes.io/instance=your-qdrant-installation-name
```
For documentation of the available settings, please refer to the Qdrant Configuration File documentation. All of these configuration options can be overridden under `config` in `values.yaml`; a modification example is provided there.

You can override any value in the Qdrant configuration by setting Helm values under the key `config`. Those settings are included verbatim in a file called `config/production.yml`, which is explained further in Qdrant Order and Priority, along with an example.
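For instance, a minimal sketch of overriding Qdrant settings through `values.yaml` (the specific options shown are illustrative Qdrant settings, not chart defaults):

```yaml
config:
  log_level: DEBUG          # illustrative Qdrant setting
  service:
    max_request_size_mb: 64 # illustrative Qdrant setting
```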
Running a distributed cluster requires only a few changes in your `values.yaml` file. Increase the number of replicas (`replicaCount`) to the desired number of nodes and set `config.cluster.enabled` to `true`. Depending on your environment or cloud provider, you might need to change the service settings in `values.yaml` as well. For example, on AWS EKS you would need to change `service.type` to `NodePort`.
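A minimal sketch of the corresponding `values.yaml` changes (the replica count of 3 and the `NodePort` override are examples):

```yaml
replicaCount: 3

config:
  cluster:
    enabled: true

# Only needed in some environments, e.g. AWS EKS:
service:
  type: NodePort
```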
This Helm chart uses a Kubernetes StatefulSet to manage your Qdrant cluster. StatefulSets have many fields that are immutable, meaning that you cannot change these fields without deleting and recreating the StatefulSet. If you try to change these fields, you will get an error like this:
```
Error: UPGRADE FAILED: cannot patch "qdrant" with kind StatefulSet: StatefulSet.apps "qdrant" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
```
If you need to change any immutable field, the process is described below, using the most common example of expanding a PVC volume.
1. Delete the StatefulSet while leaving the Pods running:

   ```bash
   kubectl delete statefulset --cascade=orphan qdrant
   ```
2. Manually edit all PersistentVolumeClaims to increase their sizes:

   ```bash
   # For each PersistentVolumeClaim:
   kubectl edit pvc qdrant-storage-qdrant-0
   ```
3. Update your Helm values to match the new PVC size.
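   A minimal sketch, assuming this chart's `persistence.size` value controls the volume size (the 20Gi figure is just an example):

   ```yaml
   persistence:
     size: 20Gi  # must match the new size you set on the PVCs
   ```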
4. Reinstall the Helm chart using your updated values:

   ```bash
   helm upgrade --install qdrant qdrant/qdrant -f my-values.yaml
   ```
Some storage providers allow resizing volumes in-place, but most require a pod restart before the new size will take effect:

```bash
kubectl rollout restart statefulset qdrant
```
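If you prefer a non-interactive alternative to `kubectl edit` in step 2, here is a sketch using `kubectl patch` (the PVC name and size are examples):

```bash
kubectl patch pvc qdrant-storage-qdrant-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```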
In addition to immutable fields on StatefulSets, Pods also have some immutable fields, which means the above method may not work for some changes, such as setting `snapshotPersistence.enabled: true`. In that case, after following the above method, you'll see an error like this when you run `kubectl describe` on your StatefulSet:
```
pod updates may not change fields other than `spec.containers[*].image`,
`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,
`spec.tolerations` (only additions to existing tolerations),
`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
```
To fix this, you must manually delete all of your Qdrant pods, starting with node-0. This will cause your cluster to go down, but will allow the StatefulSet to recreate your Pods with the correct configuration.
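A minimal sketch, assuming the default release name `qdrant` and a three-node cluster:

```bash
# Delete the pods one at a time, starting with node 0;
# the StatefulSet recreates each with the new configuration.
kubectl delete pod qdrant-0
kubectl delete pod qdrant-1
kubectl delete pod qdrant-2
```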
This Helm chart allows you to restore a snapshot into your Qdrant cluster from either an internal or an external PersistentVolumeClaim.

If you have set `snapshotPersistence.enabled: true` (recommended for production), this Helm chart will create a separate PersistentVolume for snapshots, and any snapshots you create will be stored in that PersistentVolume.
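A minimal sketch of enabling this in `values.yaml` (the `size` field is an assumption for illustration; check the chart's values for the exact snapshot-volume settings):

```yaml
snapshotPersistence:
  enabled: true
  size: 10Gi  # illustrative; size it for your snapshots
```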
To restore from one of these snapshots, set the following values:

```yaml
snapshotRestoration:
  enabled: true
  # Set blank to indicate we are not using an external PVC
  pvcName: ""
  snapshots:
    - /qdrant/snapshots/<collection_name>/<filename>/:<collection_name>
```
And run "helm upgrade". This will restart your cluster and restore the specified collection from the snapshot. Qdrant will refuse to overwrite an existing collection, so ensure the collection is deleted before restoring.
After the snapshot is restored, remove the above values and run "helm upgrade" again to trigger another rolling restart. Otherwise, the snapshot restore will be attempted again if your cluster ever restarts.
If you wish to restore from an externally-created snapshot, using the API is recommended: https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_uploaded_snapshot

If the file is too large to upload, you can separately create a PersistentVolumeClaim, store your snapshot data in it, and refer to this separate PersistentVolumeClaim in this Helm chart.
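A minimal sketch of such a PersistentVolumeClaim (the name, size, and access mode are placeholders for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qdrant-snapshot-import  # placeholder; reference it below as pvcName
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi  # illustrative; size it for your snapshot files
```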
Once you have created this PersistentVolumeClaim (must be in the same namespace as your Qdrant cluster), set the following values:
```yaml
snapshotRestoration:
  enabled: true
  pvcName: "<the name of your PVC>"
  snapshots:
    - /qdrant/snapshots/<collection_name>/<filename>/:<collection_name>
```
And run "helm upgrade". This will restart your cluster and restore the specified collection from the snapshot. Qdrant will refuse to overwrite an existing collection, so ensure the collection is deleted before restoring.
After the snapshot is restored, remove the above values and run "helm upgrade" again to trigger another rolling restart. Otherwise, the snapshot restore will be attempted again if your cluster ever restarts.
Metrics are available through the REST API (default port 6333) at `/metrics`. Refer to the Qdrant metrics documentation for more information.
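A quick sketch of checking the endpoint locally, assuming the default release name `qdrant` and the default port:

```bash
# Forward the service port to your machine, then scrape the endpoint
kubectl port-forward service/qdrant 6333:6333 &
curl http://localhost:6333/metrics
```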