
move node selection from --limit to --extra-vars "node=<nodename>" #2948

Merged
2 commits merged into kubernetes-sigs:master on Jul 3, 2018

Conversation

qeqar
Contributor

@qeqar qeqar commented Jun 29, 2018

Hello,

While restricting remove-node.yml with --limit, the pre and post tasks will not be executed!

  roles:
    - { role: remove-node/pre-remove, tags: pre-remove }

With --limit <nodename> we don't have any valid host left in the kube_master group, so on execution we get:

skipping: no hosts matched

So I propose we use --extra-vars for selection and run against the full inventory.

I changed the hosts: line in the node play and the selector in the pre/post role tasks to "{{ node }}",

so we are able to call ansible-playbook -i <inventory> remove-node.yml --extra-vars "node=<nodename>".
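
A minimal sketch of the proposed selection (only the changed play header is shown; the rest of the play and the file layout are assumptions):

    # remove-node.yml (proposed): target the hosts passed in via the extra-var
    # instead of narrowing the whole run with --limit
    - hosts: "{{ node }}"
      gather_facts: true

    # run as:
    #   ansible-playbook -i <inventory> remove-node.yml --extra-vars "node=<nodename>"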

Additionally, this will not break reset.yml; there we can keep using --limit, since there are no tasks that need a different group/host.

Thanks in advance,
Mark

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jun 29, 2018
@qeqar
Contributor Author

qeqar commented Jun 29, 2018

Added some basic documentation.

@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
Contributor

delete it.

Contributor Author

What, the complete chapter?

Contributor

Because we changed the node selection mode, we need to delete line 54.

@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Use `--extra-vars "node=<nodenname>"` to select the node you want to delete
Contributor

node=node1,node2,node3,... — is it like this when I delete multiple nodes?

Contributor Author

I'm not sure if this will work in the pre/post tasks.

For hosts: it is OK, but here

-    - "{{ node }}"

I am not sure if Ansible will make an array from a comma-separated list.
But I will test it.
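
For reference, a comma-separated extra-var arrives as a plain string, so splitting it explicitly is one way to loop over several nodes (a sketch, not the final diff):

    # called with: --extra-vars "node=node1,node2"
    - name: Delete node
      command: kubectl delete node {{ item }}
      with_items:
        - "{{ node.split(',') }}"   # "node1,node2" becomes ['node1', 'node2']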

Contributor Author

OK, added it.

@@ -9,7 +9,7 @@
--timeout {{ drain_timeout }}
--delete-local-data {{ item }}
with_items:
- "{{ groups['kube-node'] }}"
- "{{ kube-node }}"
Contributor

maybe {{ node }} ?

Contributor Author

fixed

@qeqar qeqar force-pushed the remove-node-limit branch 5 times, most recently from ea80e02 to 16a46b3 Compare June 29, 2018 19:46
@@ -52,10 +52,12 @@ Remove nodes
You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete
Contributor

Add a "." at the end of the sentence.

@qeqar qeqar force-pushed the remove-node-limit branch 2 times, most recently from 1163c0e to 18e3245 Compare June 30, 2018 11:50
@riverzhang
Contributor

cc @mattymo @ant31

remove-node.yml Outdated
@@ -5,7 +5,7 @@
ansible_ssh_pipelining: true
gather_facts: true

- hosts: etcd:k8s-cluster:vault:calico-rr
- hosts: "{{ node }}"
Contributor

I am not opposed here. Let's default it though. Please change the line to:

hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"

Contributor Author

I can change this, but may I ask why? Do you want to delete all nodes, or do you want to use --limit without the pre/post tasks?

Contributor

Because you should not break previous behavior; there are other users.
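
In other words, with the suggested default both invocations keep working (a sketch; the inventory path is a placeholder):

    # play header with the default applied:
    - hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"
      gather_facts: true

    # remove a single node:
    #   ansible-playbook -i <inventory> remove-node.yml --extra-vars "node=<nodename>"
    # previous behaviour, full inventory (no extra-var needed):
    #   ansible-playbook -i <inventory> remove-node.yml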

Contributor Author

OK, change will come.

remove-node.yml Outdated
@@ -22,7 +22,7 @@
roles:
- { role: remove-node/pre-remove, tags: pre-remove }

- hosts: kube-node
- hosts: "{{ node }}"
Contributor

same as above

@@ -3,7 +3,7 @@
- name: Delete node
command: kubectl delete node {{ item }}
with_items:
- "{{ groups['kube-node'] }}"
- "{{ node.split(',') }}"
Contributor

This should default to the members of the kube-node group.
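
One way to combine the split with that fallback (a sketch, not necessarily the exact expression that was merged):

    - name: Delete node
      command: kubectl delete node {{ item }}
      with_items:
        - "{{ node.split(',') if node is defined else groups['kube-node'] }}"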

@@ -9,7 +9,7 @@
--timeout {{ drain_timeout }}
--delete-local-data {{ item }}
with_items:
- "{{ groups['kube-node'] }}"
- "{{ node.split(',') }}"
Contributor

same

@ant31
Contributor

ant31 commented Jul 2, 2018

We don't have any valid host in group kube_master

Why?

@qeqar
Contributor Author

qeqar commented Jul 2, 2018

@ant31 the --limit affects every hosts: definition. As we normally don't have any worker nodes (or, more to the point, the ones to be deleted) in the kube_master group, those plays will be skipped.

So as long as the node given in --limit is not part of the kube_master group, they will never be executed.
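
For illustration, a minimal reproduction of the skip (group and host names are placeholders):

    # inventory:
    #   [kube-master]
    #   master1
    #   [kube-node]
    #   worker1

    # a play in remove-node.yml that has to run on a master:
    - hosts: kube-master
      gather_facts: false
      tasks:
        - name: pre-remove step that needs kubectl on a master
          debug:
            msg: "only runs if --limit still matches a kube-master host"

    # ansible-playbook -i inventory remove-node.yml --limit worker1
    #   => this play reports "skipping: no hosts matched", so the step never runs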

@qeqar qeqar force-pushed the remove-node-limit branch from 18e3245 to 8242216 Compare July 2, 2018 18:05
@@ -51,11 +51,12 @@ Remove nodes

You may want to remove **worker** nodes to your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then stop some kubernetes services and delete some certificates, and finally execute the kubectl command to delete these nodes. This can be combined with the add node function, This is generally helpful when doing something like autoscaling your clusters. Of course if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
Contributor

The description of both kinds of deletion should be added; there is only one description at the moment.

Contributor

  • Add worker nodes to the list under kube-node if you want to delete them (or utilize a dynamic inventory).
    ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v
    --private-key=~/.ssh/private_key
    We still need this description. Please add. Thanks.

@qeqar qeqar force-pushed the remove-node-limit branch 2 times, most recently from d0f2ec4 to a1c19fd Compare July 3, 2018 08:37
@qeqar qeqar force-pushed the remove-node-limit branch from a1c19fd to af635ff Compare July 3, 2018 08:38
Contributor

@riverzhang riverzhang left a comment

LGTM

@riverzhang riverzhang merged commit 32a6ca4 into kubernetes-sigs:master Jul 3, 2018
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files.