v1.10 known issues / FAQ accumulator #59764

Please link to this issue, or comment on it, with known errata for the 1.10.x release. This follows what we have done for prior releases. We will populate the "Known Issues" section of the 1.10.0 and later release notes based on this issue.

cc @jdumars @calebamiles @jberkus
cc @kubernetes/kubernetes-release-managers

Comments
/sig release
Is this part of the release process documented somewhere?
Adding kind: /kind design
Adjusting labels to keep this tracker in the milestone. /kind cleanup
(Seems this issue is not that useful and isn't documented as part of the release process, so I'll just close it; feel free to reopen if necessary.)
#60764
Clarification, please? Do y'all want a relnote for #60933?
Add #60764 plus related doc kubernetes/website#7731
Mount propagation manual steps: #61126
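For context on the comment above: mount propagation was a beta feature in 1.10, controlled per volumeMount. A minimal sketch, not the text of #61126 — the pod name and paths below are hypothetical:

```sh
# Hypothetical pod showing the mountPropagation field (beta in 1.10).
# Bidirectional propagation requires a privileged container.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt/host
      mountPropagation: Bidirectional
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt
EOF
```

See #61126 for the actual manual steps the release notes need to cover.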
My opinion: yes.
ACK. In progress
From release burndown as of March 19: Flaky timeouts while waiting for RC pods to be running in density test
@nickchase, in case it's helpful here: I think you don't need to worry about #61126 because the relnote content is in the PR (such as it is...). But it might not be in the right place in the generated relnotes. It's an action-required item.
We need to document downgrading and PVC protection. TL;DR: if you have PVCs and need to downgrade, downgrade to 1.9.6, which will be released next Wednesday, not to an earlier 1.9 version.
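A quick way to check whether a cluster is affected before downgrading (a sketch; assumes kubectl access to the cluster):

```sh
# If this returns any PVCs, downgrade to 1.9.6 or later,
# not to an earlier 1.9.x release.
kubectl get pvc --all-namespaces
```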
Yup. @nickchase, see also the comment from liggitt in kubernetes/website#7731.
@nickchase, let's coordinate with @msau42 about relnotes/docs. There is also a Slack discussion.
We also need to add those two known issues to the 1.9, 1.8, and 1.7 patch releases.
Flaky timeouts while waiting for RC pods to be running in density test -- appears to be fixed?
Added: "In large clusters (~2K nodes), scheduling logs can explode and crash the master node." (#60933)
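As a hedged mitigation sketch for operators hitting this (the pod name below is hypothetical; static-pod names vary by setup):

```sh
# Gauge how fast the scheduler log is growing; if it is exploding,
# consider lowering the kube-scheduler log verbosity (--v flag).
kubectl -n kube-system logs kube-scheduler-master-0 --tail=1000 | wc -c
```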
Yes, on the controller-manager. Suggested text: "Some users, especially those with very large clusters, may see higher memory usage by the kube-controller-manager in 1.10."
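One way to watch for this after upgrading (a sketch: `kubectl top` needs a metrics pipeline such as Heapster in the 1.10 era, and the label selector shown is the one kubeadm applies, so other setups may differ):

```sh
# Observe kube-controller-manager memory usage before and after the upgrade.
kubectl -n kube-system top pod -l component=kube-controller-manager
```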
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close |