
[Bug]: Milvus does not start if I set up MinIO separately in the same namespace #38637

mananpreetsingh opened this issue Dec 20, 2024 · 13 comments
Labels
kind/bug (Issues or changes related to a bug) · triage/accepted (Indicates an issue or PR is ready to be actively worked on)


@mananpreetsingh

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version:  v2.5.0-beta
- Deployment mode(standalone or cluster): standalone
- MQ type(rocksmq, pulsar or kafka):    rocksmq
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): Ubuntu
- CPU/Memory: 8CPU/32GB
- GPU: None
- Others:

Current Behavior

When I deploy MinIO separately using its own Helm chart and then install Milvus pointing at that MinIO, Milvus does not start and throws an error.

Expected Behavior

Milvus should recognize the external MinIO and start normally.

Steps To Reproduce

1. Deploy MinIO using its Helm chart.
2. Deploy Milvus using its Helm chart with the config below (the corresponding Helm commands are sketched after the config):

milvus:
  minio:
    enabled: false
    tls:
      enabled: false
  extraConfigFiles:
    user.yaml: |+      
      minio:
        address: minio
        port: 9000
        bucketName: milvus
        accessKeyID: xxxxx
        secretAccessKey: xxxx
        useSSL: false
        rootPath: /
        useIAM: false
        useVirtualHost: false
        cloudProvider: minio
        iamEndpoint: null
        region: null
        requestTimeoutMs: 10000
        listObjectsMaxKeys: 0
      
      log:
        level: debug
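
The Helm commands behind these steps look roughly like this (the repo URLs, release names, and namespace are assumptions; milvus-values.yaml carries the config above, un-nested from the milvus: key when the Milvus chart is installed directly):

    # add the (assumed) chart repositories
    helm repo add minio https://charts.min.io/
    helm repo add milvus https://zilliztech.github.io/milvus-helm/

    # deploy MinIO first, then Milvus with its bundled MinIO disabled via the values file
    helm install minio minio/minio -n my-namespace
    helm install milvus milvus/milvus -n my-namespace -f milvus-values.yaml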

Milvus Log

[2024/12/19 23:20:44.291 +00:00] [WARN] [client/client.go:100] ["RootCoordClient mess key not exist"] [key=rootcoord]
[2024/12/19 23:20:44.291 +00:00] [WARN] [grpcclient/client.go:255] ["failed to get client address"] [error="find no available rootcoord, check rootcoord state"]
[2024/12/19 23:20:44.291 +00:00] [WARN] [grpcclient/client.go:468] ["fail to get grpc client in the retry state"] [client_role=rootcoord] [error="find no available rootcoord, check rootcoord state"]
[2024/12/19 23:20:44.291 +00:00] [WARN] [retry/retry.go:130] ["retry func failed"] [retried=0] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:481\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:128\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:474\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:561\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:577\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:117\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:131\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:476\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:403\n | github.com/milvus-io/milvus/cmd/components.(*Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:60\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:129\n | runtime.goexit\n | \t/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.0.linux-amd64/src/runtime/asm_amd64.s:1695\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/12/19 23:20:44.365 +00:00] [DEBUG] [server/rocksmq_retention.go:80] ["Rocksmq retention goroutine start!"]
[2024/12/19 23:20:44.365 +00:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/12/19 23:20:44.366 +00:00] [DEBUG] [sessionutil/session_util.go:263] ["Session try to connect to etcd"]
[2024/12/19 23:20:44.366 +00:00] [DEBUG] [sessionutil/session_util.go:278] ["Session connect to etcd success"]
[2024/12/19 23:20:44.368 +00:00] [DEBUG] [sessionutil/session_util.go:322] [getServerID] [reuse=true]
[2024/12/19 23:20:44.370 +00:00] [DEBUG] [sessionutil/session_util.go:399] ["Session get serverID success"] [key=id] [ServerId=249]
[2024/12/19 23:20:44.370 +00:00] [INFO] [sessionutil/session_util.go:296] ["start server"] [name=indexcoord] [address=10.244.0.65:13333] [id=249] [server_labels={}]
[2024/12/19 23:20:44.370 +00:00] [DEBUG] [sessionutil/session_util.go:263] ["Session try to connect to etcd"]
[2024/12/19 23:20:44.370 +00:00] [DEBUG] [sessionutil/session_util.go:278] ["Session connect to etcd success"]
[2024/12/19 23:20:44.371 +00:00] [INFO] [etcd/etcd_util.go:52] ["create etcd client"] [useEmbedEtcd=false] [useSSL=false] [endpoints="[milvus-etcd-0.milvus-etcd-headless:2379]"] [minVersion=1.3]
[2024/12/19 23:20:44.372 +00:00] [DEBUG] [sessionutil/session_util.go:322] [getServerID] [reuse=true]
[2024/12/19 23:20:44.372 +00:00] [INFO] [sessionutil/session_util.go:296] ["start server"] [name=datacoord] [address=10.244.0.65:13333] [id=249] [server_labels={}]
[2024/12/19 23:20:44.372 +00:00] [DEBUG] [sessionutil/session_util.go:263] ["Session try to connect to etcd"]
[2024/12/19 23:20:44.372 +00:00] [DEBUG] [sessionutil/session_util.go:278] ["Session connect to etcd success"]
[2024/12/19 23:20:44.372 +00:00] [INFO] [datacoord/server.go:343] ["init rootcoord client done"]
[2024/12/19 23:20:44.372 +00:00] [ERROR] [datacoord/server.go:550] ["chunk manager init failed"] [error="Endpoint url cannot have fully qualified paths."] [stack="github.com/milvus-io/milvus/internal/datacoord.(*Server).newChunkManagerFactory\n\t/workspace/source/internal/datacoord/server.go:550\ngithub.com/milvus-io/milvus/internal/datacoord.(*Server).initDataCoord\n\t/workspace/source/internal/datacoord/server.go:348\ngithub.com/milvus-io/milvus/internal/datacoord.(*Server).Init\n\t/workspace/source/internal/datacoord/server.go:330\ngithub.com/milvus-io/milvus/internal/distributed/datacoord.(*Server).init\n\t/workspace/source/internal/distributed/datacoord/service.go:137\ngithub.com/milvus-io/milvus/internal/distributed/datacoord.(*Server).Run\n\t/workspace/source/internal/distributed/datacoord/service.go:276\ngithub.com/milvus-io/milvus/cmd/components.(*DataCoord).Run\n\t/workspace/source/cmd/components/data_coord.go:60\ngithub.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n\t/workspace/source/cmd/roles/roles.go:129"]
[2024/12/19 23:20:44.372 +00:00] [ERROR] [datacoord/service.go:138] ["dataCoord init error"] [error="Endpoint url cannot have fully qualified paths."] [stack="github.com/milvus-io/milvus/internal/distributed/datacoord.(*Server).init\n\t/workspace/source/internal/distributed/datacoord/service.go:138\ngithub.com/milvus-io/milvus/internal/distributed/datacoord.(*Server).Run\n\t/workspace/source/internal/distributed/datacoord/service.go:276\ngithub.com/milvus-io/milvus/cmd/components.(*DataCoord).Run\n\t/workspace/source/cmd/components/data_coord.go:60\ngithub.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n\t/workspace/source/cmd/roles/roles.go:129"]
[2024/12/19 23:20:44.372 +00:00] [ERROR] [components/data_coord.go:61] ["DataCoord starts error"] [error="Endpoint url cannot have fully qualified paths."] [stack="github.com/milvus-io/milvus/cmd/components.(*DataCoord).Run\n\t/workspace/source/cmd/components/data_coord.go:61\ngithub.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n\t/workspace/source/cmd/roles/roles.go:129"]

Anything else?

In the Milvus config for MinIO, for some reason, the Milvus pod only starts if I set the MinIO address as below; otherwise it just crashes and restarts.

  minio:
    address: minio:9000
    port: 9000

With the above config, the error changes to the following:

[2024/12/20 15:00:43.997 +00:00] [WARN] [client/client.go:100] ["RootCoordClient mess key not exist"] [key=rootcoord]
[2024/12/20 15:00:43.997 +00:00] [WARN] [grpcclient/client.go:255] ["failed to get client address"] [error="find no available rootcoord, check rootcoord state"]
[2024/12/20 15:00:43.997 +00:00] [WARN] [grpcclient/client.go:461] ["fail to get grpc client"] [client_role=rootcoord] [error="find no available rootcoord, check rootcoord state"]
[2024/12/20 15:00:43.997 +00:00] [WARN] [grpcclient/client.go:482] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:481\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:128\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:474\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:561\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:577\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:117\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:131\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:476\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:403\n | github.com/milvus-io/milvus/cmd/components.(*Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:60\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:129\n | runtime.goexit\n | \t/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.0.linux-amd64/src/runtime/asm_amd64.s:1695\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]

[2024/12/20 15:00:45.331 +00:00] [INFO] [storage/remote_chunk_manager.go:97] ["remote chunk manager init success."] [remote=minio] [bucketname=milvus] [root=.]

[2024/12/20 15:00:46.113 +00:00] [INFO] [compaction/load_stats.go:83] ["begin to init pk bloom filter"] [segmentID=454705838661601680] [statsBinLogsLen=1]
[2024/12/20 15:00:46.115 +00:00] [WARN] [storage/remote_chunk_manager.go:207] ["failed to read object"] [path=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373] [error="The specified key does not exist.: key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]"] [errorVerbose="The specified key does not exist.: key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/pkg/util/merr.WrapErrIoKeyNotFound\n | \t/workspace/source/pkg/util/merr/utils.go:887\n | github.com/milvus-io/milvus/internal/storage.checkObjectStorageError\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:424\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).Read.func1\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:205\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).Read\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:194\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).MultiRead\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:235\n | github.com/milvus-io/milvus/internal/datanode/compaction.LoadStats\n | \t/workspace/source/internal/datanode/compaction/load_stats.go:121\n | github.com/milvus-io/milvus/internal/flushcommon/pipeline.initMetaCache.func1.1\n | \t/workspace/source/internal/flushcommon/pipeline/data_sync_service.go:164\n | github.com/milvus-io/milvus/pkg/util/conc.(*Pool[...]).Submit.func1\n | \t/workspace/source/pkg/util/conc/pool.go:81\n | github.com/panjf2000/ants/v2.(*goWorker).run.func1\n | \t/go/pkg/mod/github.com/panjf2000/ants/v2@v2.7.2/worker.go:67\n | runtime.goexit\n | \t/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.0.linux-amd64/src/runtime/asm_amd64.s:1695\nWraps: (2) The specified key does not exist.\nWraps: (3) key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) merr.milvusError"]
[2024/12/20 15:00:46.115 +00:00] [WARN] [retry/retry.go:46] ["retry func failed"] [retried=0] [error="The specified key does not exist.: key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]"] [errorVerbose="The specified key does not exist.: key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/pkg/util/merr.WrapErrIoKeyNotFound\n | \t/workspace/source/pkg/util/merr/utils.go:887\n | github.com/milvus-io/milvus/internal/storage.checkObjectStorageError\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:424\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).Read.func1\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:205\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).Read\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:194\n | github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).MultiRead\n | \t/workspace/source/internal/storage/remote_chunk_manager.go:235\n | github.com/milvus-io/milvus/internal/datanode/compaction.LoadStats\n | \t/workspace/source/internal/datanode/compaction/load_stats.go:121\n | github.com/milvus-io/milvus/internal/flushcommon/pipeline.initMetaCache.func1.1\n | \t/workspace/source/internal/flushcommon/pipeline/data_sync_service.go:164\n | github.com/milvus-io/milvus/pkg/util/conc.(*Pool[...]).Submit.func1\n | \t/workspace/source/pkg/util/conc/pool.go:81\n | github.com/panjf2000/ants/v2.(*goWorker).run.func1\n | \t/go/pkg/mod/github.com/panjf2000/ants/v2@v2.7.2/worker.go:67\n | runtime.goexit\n | \t/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.0.linux-amd64/src/runtime/asm_amd64.s:1695\nWraps: (2) The specified key does not exist.\nWraps: (3) key not found[key=stats_log/454705838661045424/454705838661045425/454705838661601680/108/454705838661246373]\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) merr.milvusError"]`

mananpreetsingh added the kind/bug and needs-triage labels on Dec 20, 2024
@yanliang567 (Contributor)

/assign @LoveEachDay
please help to take a look

/unassign

yanliang567 added the triage/accepted label and removed the needs-triage label on Dec 21, 2024
@xiaofan-luan (Collaborator)

@mananpreetsingh
Endpoint url cannot have fully qualified paths.

Please check your URL, or share your config files, so we can point out the problem.

@mananpreetsingh (Author) commented Dec 26, 2024

@mananpreetsingh Endpoint url cannot have fully qualified paths.

Please check your URL, or share your config files, so we can point out the problem.

@xiaofan-luan I did mention the config above; it is the only custom config I have added on top of the default Helm config. I have not specified any endpoint config anywhere except the MinIO address mentioned above.

A simple way to reproduce it:

  • Deploy MinIO in a namespace
  • Deploy Milvus in the same namespace, pointing it at the existing MinIO

@xiaofan-luan (Collaborator)

@mananpreetsingh Endpoint url cannot have fully qualified paths.
Please check your URL, or share your config files, so we can point out the problem.

@xiaofan-luan I did mention the config above; it is the only custom config I have added on top of the default Helm config. I have not specified any endpoint config anywhere except the MinIO address mentioned above.

A simple way to reproduce it:

  • Deploy MinIO in a namespace
  • Deploy Milvus in the same namespace, pointing it at the existing MinIO

The reason is that Milvus sees some metadata in etcd, but the corresponding file is not in MinIO.

We don't know the details, but some reasonable guesses:

  1. Two Milvus instances share the same bucket and delete each other's data.
  2. There is some garbage left in etcd from your last cluster.

Your MinIO initialized successfully; the problem now is that, according to the metadata, Milvus thinks some data should be there, but MinIO doesn't have it.
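
If stale metadata is the suspect, one way to peek at what Milvus keeps in etcd is roughly the following (the pod name, endpoint, and default by-dev rootPath are assumptions; this only reads keys, it deletes nothing):

    # list a handful of Milvus metadata keys under the default rootPath (etcdctl API v3 assumed)
    kubectl exec milvus-etcd-0 -- \
      etcdctl --endpoints=localhost:2379 get --prefix by-dev --keys-only | head -n 20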

@mananpreetsingh (Author)

@xiaofan-luan Thanks for the explanation.

  1. There is only one Milvus in the namespace.
  2. I agree that MinIO creates the bucket and initializes successfully; it is probably looking for some metadata.

Is it even possible for Milvus to use a MinIO that was not created by Milvus?

  • My requirement is that my application needs MinIO, and I am happy to use the same MinIO (with a different bucket) for Milvus in the same namespace.
  • I have no issue having two MinIOs in the same namespace (one created by me for the application and the other by Milvus), but the problem is that with two MinIOs, Milvus gets confused by the environment variables for these MinIOs and is not able to start at all.
  • I also tried standalone Milvus (a single pod, with no separate etcd or MinIO pods); the problem with this approach is that when Milvus restarts for some reason, its data gets deleted on restart, even though the volume is persistent.

I need help making Milvus work in the same namespace where MinIO is.

@mananpreetsingh (Author)

@LoveEachDay / @xiaofan-luan Any idea on how to achieve this? I am sure running MinIO and Milvus (with its own MinIO) in the same namespace is a very common use case.

@xiaofan-luan (Collaborator)

Milvus instances are usually separated by bucket name.
You can set the minio.bucketName config to let each Milvus use a different bucket name.

You will also need to change etcd.rootPath and msgChannel.chanNamePrefix.cluster as well.
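
For illustration, a user.yaml that keeps a second Milvus instance fully separate might look roughly like this (the milvus-second values are placeholders, not recommendations):

    minio:
      bucketName: milvus-second        # a dedicated bucket per Milvus instance
    etcd:
      rootPath: milvus-second          # keeps etcd metadata apart (the default is by-dev)
    msgChannel:
      chanNamePrefix:
        cluster: milvus-second         # keeps message channels apart as well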

@mananpreetsingh (Author) commented Jan 7, 2025

@xiaofan-luan In MinIO, the bucket for Milvus already exists, and I already have the minio.bucketName config set up.

I did not know about the etcd.rootPath and msgChannel.chanNamePrefix.cluster configs, or that I would need to change them to use an external MinIO; I wonder what difference that would make. FYI, I am using the default settings for etcd, so it is standalone Milvus but etcd runs in a different pod. I did not see these configs in the values file, so should I set custom values for them?

Also, does this need to go into the Milvus ConfigMap, or can it be specified directly in the values file?

@xiaofan-luan (Collaborator)

I actually don't understand your requirement.
Using an existing MinIO simply means you need to point Milvus at the right MinIO endpoint.
If you want two Milvus instances to share the same etcd, Pulsar, and MinIO, then you need to worry about the configs I mentioned.
In MinIO there is no concept of a namespace; users can be isolated by bucket name.
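
For example, carving out dedicated buckets with the MinIO client looks roughly like this (the alias, credentials, and bucket names are placeholders):

    # register the shared MinIO endpoint under an alias
    mc alias set shared http://minio:9000 ACCESS_KEY SECRET_KEY
    # one bucket for Milvus, one for the application
    mc mb shared/milvus
    mc mb shared/my-app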

@mananpreetsingh (Author) commented Jan 7, 2025

Requirement: My application Helm chart has two dependencies, MinIO and Milvus, and I need both in the same namespace. So a single namespace contains my-app, MinIO, and Milvus. Since Milvus also requires MinIO, there would be two MinIOs: one created by the MinIO chart and the other by the Milvus chart.
I was wondering whether Milvus can leverage the existing MinIO rather than creating a second one, or whether these two MinIOs can coexist in the same namespace.

I tried to use the existing MinIO (with a dedicated bucket for Milvus) together with Milvus (with its bundled MinIO disabled) and got the error above. I hope this clarifies the issue.

@xiaofan-luan (Collaborator)

Requirement: My application Helm chart has two dependencies, MinIO and Milvus, and I need both in the same namespace. So a single namespace contains my-app, MinIO, and Milvus. Since Milvus also requires MinIO, there would be two MinIOs: one created by the MinIO chart and the other by the Milvus chart. I was wondering whether Milvus can leverage the existing MinIO rather than creating a second one, or whether these two MinIOs can coexist in the same namespace.

I tried to use the existing MinIO (with a dedicated bucket for Milvus) together with Milvus (with its bundled MinIO disabled) and got the error above. I hope this clarifies the issue.

You can definitely share one MinIO between your application and Milvus.
@LoveEachDay, could you give some instructions on how this can be done?

@LoveEachDay (Contributor)

@mananpreetsingh If you deploy a MinIO cluster in the same namespace as Milvus, the service environment variables are injected into the Milvus pods, and they take precedence over the config file.

You'd better rename the MinIO cluster to anything other than minio instead.
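
For illustration, you can confirm the injected variables and sidestep the clash roughly like this (the namespace, pod, chart, and release names are assumptions):

    # variables such as MINIO_PORT and MINIO_SERVICE_HOST are what Kubernetes injects for a Service named "minio"
    kubectl -n my-namespace exec my-milvus-standalone-0 -- env | grep -i '^MINIO'

    # installing your own MinIO under a different name avoids the MINIO_* injection entirely
    # (fullnameOverride is assumed to be supported by your MinIO chart)
    helm install object-store minio/minio -n my-namespace --set fullnameOverride=object-store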

@LoveEachDay (Contributor)

Or you can separate Milvus and MinIO into different namespaces.
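
If you go that route, the user.yaml from the issue can point at the MinIO service by its fully qualified DNS name, roughly like this (the object-store namespace name is an assumption):

    minio:
      address: minio.object-store.svc.cluster.local   # <service>.<namespace>.svc.cluster.local
      port: 9000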
