move node selection from --limit to --extra-vars "node=<nodename>" #2948
Conversation
Added some basic documentation.
docs/getting-started.md (Outdated)
@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
delete it.
What, the complete chapter?
Because we delete the node-mode change, we need to delete 54 lines.
docs/getting-started.md (Outdated)
@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Use `--extra-vars "node=<nodenname>"` to select the node you want to delete
node=node1,node2,node3,... Is it like this when I delete nodes?
I'm not sure if this will work in the pre/post tasks. For hosts: it is OK, but here:
- "{{ node }}"
I am not sure if Ansible will make an array from a comma-separated list. But I will test it.
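For what it's worth, the split behaviour is easy to check in isolation. A minimal standalone sketch (not part of the PR; `node` is the extra-var discussed here):

```yaml
# Illustrative test play: an extra-var passed as --extra-vars "node=node1,node2"
# arrives as a plain string, so it has to be split explicitly before it can
# drive a loop. Run with:
#   ansible-playbook test-split.yml --extra-vars "node=node1,node2"
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Show each node name from the comma-separated extra-var
      debug:
        msg: "would act on {{ item }}"
      with_items:
        - "{{ node.split(',') }}"
```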
OK, added it.
@@ -9,7 +9,7 @@
       --timeout {{ drain_timeout }}
       --delete-local-data {{ item }}
   with_items:
-    - "{{ groups['kube-node'] }}"
+    - "{{ kube-node }}"
Maybe `{{ node }}`?
fixed
Force-pushed from ea80e02 to 16a46b3.
docs/getting-started.md (Outdated)
@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete
Add a "." at the end of the sentence.
Force-pushed from 1163c0e to 18e3245.
remove-node.yml (Outdated)
@@ -5,7 +5,7 @@
     ansible_ssh_pipelining: true
   gather_facts: true

-- hosts: etcd:k8s-cluster:vault:calico-rr
+- hosts: "{{ node }}"
I am not opposed here. Let's default it though. Please change the line to:
hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"
I can change this, but may I ask why? Do you want to delete all nodes, or do you want to use `--limit` without pre/post tasks?
Because you should not break previous behavior; there are other users.
OK, the change will come.
remove-node.yml (Outdated)
@@ -22,7 +22,7 @@
   roles:
     - { role: remove-node/pre-remove, tags: pre-remove }

-- hosts: kube-node
+- hosts: "{{ node }}"
same as above
@@ -3,7 +3,7 @@
 - name: Delete node
   command: kubectl delete node {{ item }}
   with_items:
-    - "{{ groups['kube-node'] }}"
+    - "{{ node.split(',') }}"
this should default to members of kube-node group
@@ -9,7 +9,7 @@
       --timeout {{ drain_timeout }}
       --delete-local-data {{ item }}
   with_items:
-    - "{{ groups['kube-node'] }}"
+    - "{{ node.split(',') }}"
same
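One way to express that fallback in both tasks (a sketch only, not necessarily the wording of the final commit; it reuses the `node` extra-var and the kube-node group from the diffs above):

```yaml
# Sketch: use the comma-separated "node" extra-var when it is supplied,
# otherwise fall back to every member of the kube-node group.
- name: Delete node
  command: kubectl delete node {{ item }}
  with_items:
    - "{{ node.split(',') if node is defined else groups['kube-node'] }}"
```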
Why?
@ant31 the … So as long as the node is defined in …
@@ -51,11 +51,12 @@ Remove nodes

You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
The description of these two kinds of deletions should be added. There is only one description at the moment.
- Add worker nodes to the list under kube-node if you want to delete them (or utilize a dynamic inventory).

`ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v --private-key=~/.ssh/private_key`

We still need this description. Please add. Thanks.
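Combining that existing example with the new selector from this PR would give an invocation along these lines (the node names after `node=` are placeholders):

```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
  --private-key=~/.ssh/private_key \
  --extra-vars "node=nodename,nodename2"
```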
Force-pushed from d0f2ec4 to a1c19fd.
LGTM
Hello,

while restricting remove-node.yml with `--limit`, the pre and post tasks will not be executed! We don't have any valid host in the group kube_master when using `--limit nodename`. So on execution we get:

So I propose we use `--extra-vars` for the selection and the full inventory. I changed the `hosts:` line in the node task and the selector in the pre/post role tasks to `"{{ node }}"`, so we are able to call:

`ansible-playbook -i <inventory> remove-node.yml --extra-vars "node=<nodename>"`

Additionally it will not break reset.yml; here we can use it with `--limit`, since there are no tasks which need a different group/host.

Thanks in advance,
mark
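Putting the pieces of this thread together, the end result would look roughly like this (a sketch; the default patterns come from the review comments above, the rest is illustrative and may differ from the merged commit):

```yaml
# remove-node.yml (sketch): node selection via the "node" extra-var, with
# defaults that preserve the previous whole-inventory behaviour.
- hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"
  gather_facts: true

- hosts: "{{ node | default('kube-node') }}"
  roles:
    - { role: remove-node/pre-remove, tags: pre-remove }
```

It can then be invoked as before, or with `--extra-vars "node=<nodename>"` to remove only the selected node(s).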