Update status to show failing services. #49296
Conversation
Hi @ktsakalozos. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/assign @Cynerva
/retest
daemon = 'snap.{}.daemon'.format(service)
if not _systemctl_is_active(daemon):
    hookenv.log("Service {} is down. Starting it.".format(daemon))
    sleep(10)
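For context, `_systemctl_is_active` is a helper defined elsewhere in the charm and is not shown in this hunk. A minimal sketch of what such a check might look like, assuming it simply shells out to `systemctl is-active` (an illustration, not the charm's actual implementation):

from subprocess import call, DEVNULL

def _systemctl_is_active(daemon):
    """Return True if systemd reports the unit as active (sketch only).

    `systemctl is-active <unit>` exits 0 when the unit is active and
    non-zero otherwise, so the exit code is all we need here.
    """
    return call(['systemctl', 'is-active', daemon],
                stdout=DEVNULL, stderr=DEVNULL) == 0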
Missing the actual service restart here
Is the sleep here necessary? If so, does it need to be this long?
Your comment prompted me to look more into this. In short: yes, a waiting period is needed, but we should not try to restart the services like this here.
In more detail, the sleep creates a window in which we wait for a potential failure. To see why it is needed, remove it completely, introduce a wrong variable in /var/snap//current/args, stop the service with systemctl, and then trigger a hooks/update-status. The update-status hook will see that the service is down and will restart it. However, the service will not fail immediately, so the status message will report that everything is OK. On the next update-status cycle the same thing happens: the service restarts, does not fail in time, and we again report that everything is OK. So the sleep creates a window in which we expect the service to fail. This is problematic because there is no proper way to estimate how long that window should be. Ten seconds is enough for a wrong argument to be detected on an AWS instance, but overall there is no safe guess.
My suggestion is to trust systemd to restart failing services. If a service is down, that means systemd is failing to restart it, in which case the admin should take a look. We could assist the admin by adding some actions to inspect the services and restart them.
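To illustrate the approach suggested above (report failing services in the charm status and leave restarting to systemd), here is a minimal sketch using the charmhelpers API; the service names and status messages are assumptions for illustration, not the code that was merged:

from charmhelpers.core import hookenv, host

# Hypothetical list of snap daemons managed by this charm.
MASTER_SERVICES = [
    'snap.kube-apiserver.daemon',
    'snap.kube-controller-manager.daemon',
    'snap.kube-scheduler.daemon',
]

def report_failing_services():
    # Only report; systemd owns restarts. A service that is down here means
    # systemd has already given up on it, so surface that to the admin.
    failing = [s for s in MASTER_SERVICES if not host.service_running(s)]
    if failing:
        hookenv.status_set('blocked',
                           'Stopped services: {}'.format(', '.join(failing)))
        return False
    hookenv.status_set('active', 'Kubernetes master running.')
    return True

A restart-services action could then wrap host.service_start for the cases where the admin decides a manual restart is worth trying.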
if not host.service_running(daemon):
    hookenv.log("Service {} was down. Starting it.".format(daemon))
    host.service_start(daemon)
    sleep(10)
Is this sleep necessary? If so, does it need to be this long?
Thanks @ktsakalozos, looks good 👍
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Cynerva, ktsakalozos
Associated issue requirement bypassed by: Cynerva
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing /approve in a comment.
/test all [submit-queue is verifying that this PR is safe to merge]
@ktsakalozos: The following test failed, say /retest to rerun it:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
What this PR does / why we need it: Report on the charm status any services that are not running.
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/341
Special notes for your reviewer:
Release note: