Developer News
The Kubernetes release schedule is changing to 3 releases a year, starting this year. Per the KEP
and the SIG-Release meeting, we will release new versions of Kubernetes on the following tentative schedule:
- 1.22: August 2, 2021
- 1.23: December 14, 2021
- 1.24: April 12, 2022
- 1.25: August 22, 2022
- 1.26: December 6, 2022
Moving to 3 releases a year is expected to make things easier on the SIGs and the release teams. If it doesn’t, we’ll revert to quarterly releases in 2023.
Rogerio Bastos & Ari Lima discovered CVE-2021-25735, a security hole that allows bypassing some Admission Webhooks, now fixed in the latest update releases.
The Slack team has deployed an “inclusive language” Slack Bot, which is just there to remind you not to use exclusive language like “guys”. This is a bit of an experiment, so we’ll see how it goes.
SIG-CLI plans to overhaul kubectl exit codes and would like your feedback. Will this break your scripts, or fix them?
Release Schedule
Next Deadline: 1.22 Release Cycle Begins this week
The 1.22 Release Team has been chosen, and work on the release will start this week. Expect calls for enhancements soon.
1.20.6, 1.19.10, and 1.18.18 are available. Among other things, these fix CVE-2021-25735, so install real soon if you use Admission Webhooks.
Featured PRs
#101155: allow multiple of --service-account-issuer
As part of the continued push towards a customizable service account JWT system, there is now a migration path for changing the issuer field. While only the first configured issuer will be used for creating new JWTs, any of them will be accepted as valid when checking existing tokens. This allows for a smooth migration onto a non-default issuer string without downtime for pod tokens. If you haven’t checked out the token volume system, or have been putting off playing with it due to the complexities of the rollout, this may help, unlocking a powerful set of tools for pod identity validation.
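As a hedged sketch of what that migration could look like (the issuer URLs and file paths here are made-up placeholders, not from the PR), the flag can simply be repeated on the kube-apiserver command line, with the first value used to sign new tokens:

```shell
# Illustrative only: repeat --service-account-issuer during a migration.
# The first issuer signs newly minted tokens; all listed issuers are
# accepted when validating existing tokens.
kube-apiserver \
  --service-account-issuer=https://new-issuer.example.com \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub
```

Once all outstanding tokens have been reissued against the new issuer, the old issuer can be dropped from the list.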
#99237: Use the audit ID of a request for better correlation
Distributed tracing fans rejoice! Kube-APIServer will now make better use of the existing “audit ID” concept to act more like a span ID for tracing purposes. This allows for better correlation between tracing tools, error/access logs, and aggregated API requests. If you have existing log parsing for error analysis, consider adding this field once it’s available to you.
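To make the correlation idea concrete, here is a minimal sketch of grouping audit log entries by their audit ID. The JSON lines below are hypothetical examples merely shaped like kube-apiserver audit events, and the helper name is our own:

```python
import json
from collections import defaultdict

# Hypothetical audit log lines, one JSON event per line, as a
# kube-apiserver audit backend might emit them.
log_lines = [
    '{"auditID": "abc-123", "stage": "RequestReceived", "verb": "get"}',
    '{"auditID": "abc-123", "stage": "ResponseComplete", "verb": "get"}',
    '{"auditID": "def-456", "stage": "ResponseComplete", "verb": "list"}',
]

def correlate_by_audit_id(lines):
    """Group audit event stages by auditID so every stage of one
    request lines up under a single correlation key."""
    grouped = defaultdict(list)
    for line in lines:
        event = json.loads(line)
        grouped[event["auditID"]].append(event["stage"])
    return dict(grouped)

print(correlate_by_audit_id(log_lines))
```

The same key could then be joined against tracing spans or access-log entries that carry the ID.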
#101151: Add “node-high” priority-level
This new `node-high` API priority level has been added to the standard configuration to ensure that even during an overload situation, kubelet status updates and heartbeats will (probably) get processed. This avoids a terrible priority-inversion situation in which an overload from too many pods starting up never ends, because the pods keep getting rescheduled off “failed” nodes. If you have a customized API fairness configuration, check out this new addition and consider adding something equivalent to your infrastructure.
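For reference, a custom priority level is defined with a PriorityLevelConfiguration object; the sketch below is illustrative only (the name and numbers are our own, not the values from the PR):

```yaml
# Illustrative custom priority level, modeled loosely on the idea behind
# node-high; tune shares and queue sizes to your own cluster.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: PriorityLevelConfiguration
metadata:
  name: infra-high
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 40
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        handSize: 6
        queueLengthLimit: 50
```

A matching FlowSchema would then route your critical infrastructure traffic to this level.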
#101048: Revert Revert
Promotion of the MemoryBackedVolumeSizing feature to beta (for setting quotas on EmptyDir) was reverted and taken out of 1.21. The feature promotion has been added back to 1.22; hopefully it passes tests this time.
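The feature concerns memory-backed EmptyDir volumes like the illustrative one below (the pod and size here are our own example, not from the PR):

```yaml
# Illustrative pod spec: a memory-backed emptyDir with a size limit,
# the kind of volume the sizing feature applies accounting to.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi
```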
Other Merges
- Isolate logging resources into a `logging` namespace
- Server Side Apply will treat all label selectors as atomic (also)
- `kubectl drain --chunk-size` lets you drain nodes without overwhelming the client with huge resource lists
- Created `policy/v1` API for Eviction
- Keep cloud provider credential enablement checks from taking forever
- Stop asking for the AppArmor parser; we don’t need it
- Give the Kube Controller-Manager client a timeout of 70s
- Now you can add custom HTTP behavior to your delegated auth clients
- Plug a memory leak in port-forwarded connections
- Make sure the job controller removes all pods on completion
- Backport etcd lease churn fixes to all supported versions
- Round volume sizes properly when requesting storage, including adding new functions and parameters in the rounding helper
- PATCH operations return HTTP 201
- New kubemark test parameters: `--max-pods` and `--extended-resources`, and it will log flags for hollow nodes before each run
- `rest_client_rate_limiter_duration_seconds` metric now actually records data
- If NodePort creation fails, send a warning event
- Scheduler framework embeds access to the kubernetes config
- We have to allow any user to check `/readyz` and `/livez` on the APIserver to keep the kubelet from restarting unnecessarily; further work TBD
- `service.kubernetes.io/topology-aware-hints` gets an `auto` option
- kube-proxy measures latency better
- `kubeadm config user` expires certificates
- Structured logging migration: linux volumes
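One of the merges above fixes rounding of volume sizes when requesting storage. The underlying idea is ceiling rounding to an allocation unit; a minimal sketch, assuming a 1 GiB unit (the function name and unit are illustrative, not the actual helper):

```python
# Sketch of round-up-to-allocation-unit logic, as used when a storage
# backend only allocates in whole units. The 1 GiB unit is an assumption.
GIB = 1024 ** 3

def round_up_to_gib(size_bytes: int) -> int:
    """Round a requested size in bytes up to the next whole GiB."""
    return -(-size_bytes // GIB) * GIB  # ceiling division via negated floor

print(round_up_to_gib(1))    # one byte still costs a full GiB
print(round_up_to_gib(GIB))  # exact multiples are unchanged
```

Rounding up (rather than truncating) matters because a provisioned volume must never be smaller than what the user requested.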
Promotions
- PodDeletionCost to beta, pushing the smarter scale down Enhancement forwards
Deprecated
- Deprecated APIserver flag `--kubelet-https` is deleted
Version Updates
- go to 1.15.11 in 1.19 and 1.20
- go to 1.16.3 in 1.21 and 1.22
- Cluster Autoscaler to v1.20.0 in Kubernetes v1.20.1
- Structured-merge-diff to v4.1.1
- CRI-tools to 1.21.0
- Debian to 1.6.0 in testing images
- Built-in Kustomize to 4.1.2