Migrate Kubelet --provider-id to kubelet.config.k8s.io or remove the flag #61658
Comments
[MILESTONENOTIFIER] Milestone Issue Needs Attention
@cheftako @mtaufen @wlan0 @kubernetes/sig-node-misc
Action required: During code slush, issues in the milestone should be in progress.

/milestone clear
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
/lifecycle frozen
Could be tracked by #86843.
SIG Node Bug Scrub has decided to close the 23 individual flag tracking issues in favour of using the unified tracker in #86843. We will list each issue here that we're closing and the associated flag.

/close
@ehashman: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Flag name: provider-id
Help text: Unique identifier for identifying the node in a machine database, i.e. cloudprovider
This is part of migrating the Kubelet command-line to a Kubernetes-style API. The --provider-id flag should either be migrated to the Kubelet's kubelet.config.k8s.io API group, or simply removed from the Kubelet.
If this could be considered an instance-specific flag, or a descriptor of local topology
managed by the Kubelet, see: #61647.
If this flag is only registered in os-specific builds, see: #61649.
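For illustration, a minimal sketch of the two forms under discussion. The flag form is how --provider-id is passed today; the config-file form assumes a providerID field in the kubelet.config.k8s.io API group, which is exactly what this issue would decide, so the field name and placement below are illustrative rather than settled:

```yaml
# Today: the value is passed on the Kubelet command line, e.g. on AWS:
#   kubelet --provider-id=aws:///us-west-2a/i-0abcd1234efgh5678
#
# If migrated: the same value would live in the KubeletConfiguration
# file instead. The providerID field name here is illustrative,
# pending the outcome of this issue.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
providerID: aws:///us-west-2a/i-0abcd1234efgh5678
```

Because the value is unique per node, it sits in the same instance-specific bucket discussed in #61647, which is why both removal and a per-instance config story are on the table.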
@sig-node-pr-reviews @sig-node-api-reviews
/assign @mtaufen
/sig node
/kind feature
/priority important-soon
/milestone v1.11
/status approved-for-milestone