Releases

The Kubernetes project maintains release branches for the most recent three minor releases (1.25, 1.24, 1.23). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.

Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology.

For more information, see the version skew policy document.

Release History

1.25

Latest Release: 1.25.4
Patch Releases: 1.25.1, 1.25.2, 1.25.3, 1.25.4

Complete 1.25 Schedule and Changelog

1.24

Latest Release: 1.24.8
Patch Releases: 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.5, 1.24.6, 1.24.7, 1.24.8

Complete 1.24 Schedule and Changelog

1.23

Latest Release: 1.23.14

Complete 1.23 Schedule and Changelog

1.22

Latest Release: 1.22.15

Complete 1.22 Schedule and Changelog

Upcoming Release

Check out the schedule for the upcoming 1.26 Kubernetes release!

Helpful Resources

1 - Download Kubernetes

Kubernetes ships binaries for each component as well as a standard set of client applications to bootstrap or interact with a cluster. Components like the API server are capable of running within container images inside a cluster. Those components are also shipped in container images as part of the official release process. All binaries and container images are available for multiple operating systems and hardware architectures.

Container Images

All Kubernetes container images are deployed to the registry.k8s.io container image registry.

FEATURE STATE: Kubernetes v1.24 [alpha]

For Kubernetes v1.25, the following container images are signed using cosign signatures:

Container Image Supported Architectures
registry.k8s.io/kube-apiserver:v1.25.0 amd64, arm, arm64, ppc64le, s390x
registry.k8s.io/kube-controller-manager:v1.25.0 amd64, arm, arm64, ppc64le, s390x
registry.k8s.io/kube-proxy:v1.25.0 amd64, arm, arm64, ppc64le, s390x
registry.k8s.io/kube-scheduler:v1.25.0 amd64, arm, arm64, ppc64le, s390x
registry.k8s.io/conformance:v1.25.0 amd64, arm, arm64, ppc64le, s390x

All container images are available for multiple architectures, and the container runtime should choose the correct one based on the underlying platform. It is also possible to pull an image for a specific architecture by suffixing the container image name, for example registry.k8s.io/kube-apiserver-arm64:v1.25.0. All those derived images are signed in the same way as the multi-architecture manifest lists.
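
For example, with Docker as the client:

# Pull the multi-architecture manifest list; the runtime selects the matching image
docker pull registry.k8s.io/kube-apiserver:v1.25.0
# Pull the image built for one specific architecture by suffixing the image name
docker pull registry.k8s.io/kube-apiserver-arm64:v1.25.0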

The Kubernetes project publishes a list of signed Kubernetes container images in SPDX 2.2 format. You can fetch that list using:

curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/latest.txt)/release"  | awk '/Package: registry.k8s.io\// {print $3}'

For Kubernetes v1.25, the only kind of code artifact that you can verify integrity for is a container image, using the experimental signing support.

To manually verify signed container images of Kubernetes core components, refer to Verify Signed Container Images.
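
A minimal sketch of that verification, assuming cosign v1.x, where keyless verification still required the experimental flag:

COSIGN_EXPERIMENTAL=1 cosign verify registry.k8s.io/kube-apiserver:v1.25.0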

Binaries

Find links to download Kubernetes components (and their checksums) in the CHANGELOG files.

Alternatively, use downloadkubernetes.com to filter by version and architecture.
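
As a sketch, downloading a single binary and verifying its checksum follows this pattern (version and architecture here are examples):

# Download kubectl for Linux amd64, plus its published SHA-256 checksum
curl -LO "https://dl.k8s.io/release/v1.25.4/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.25.4/bin/linux/amd64/kubectl.sha256"
# Verify the download (sha256sum's check format requires the two spaces)
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check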

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For more information including a complete list of kubectl operations, see the kubectl reference documentation.
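
For instance, assuming a configured kubeconfig (resource names here are placeholders):

kubectl get nodes                # inspect cluster resources
kubectl describe pod my-pod      # "my-pod" is a placeholder name
kubectl logs my-pod --tail=20    # view the most recent log lines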

kubectl is installable on a variety of Linux platforms, macOS and Windows. Find your preferred operating system below.

2 - Kubernetes Release Cycle

Targeting Enhancements, Issues, and PRs to Release Milestones

This document is focused on Kubernetes developers and contributors who need to create an enhancement, issue, or pull request which targets a specific release milestone.

The process for shepherding enhancements, issues, and pull requests into a Kubernetes release spans multiple stakeholders:

  • the enhancement, issue, and pull request owner(s)
  • SIG leadership
  • the Release Team

Information on workflows and interactions is described below.

As the owner of an enhancement, issue, or pull request (PR), it is your responsibility to ensure release milestone requirements are met. Automation and the Release Team will be in contact with you if updates are required, but inaction can result in your work being removed from the milestone. Additional requirements exist when the target milestone is a prior release (see cherry pick process for more information).

TL;DR

If you want your PR to get merged, it needs the following required labels and milestones, represented here by the Prow /commands it would take to add them:

Normal Dev (Weeks 1-11)

  • /sig {name}
  • /kind {type}
  • /lgtm
  • /approved

Code Freeze (Weeks 12-14)

  • /milestone {v1.y}
  • /sig {name}
  • /kind {bug, failing-test}
  • /lgtm
  • /approved
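
For example, a fix for a broken test during Code Freeze might carry this hypothetical set of commands in a PR comment (the milestone can only be set by members of the milestone-maintainers team, and /lgtm and /approve are added by reviewers and approvers):

/milestone v1.26
/sig node
/kind failing-test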

Post-Release (Weeks 14+)

Return to 'Normal Dev' phase requirements:

  • /sig {name}
  • /kind {type}
  • /lgtm
  • /approved

Merges into the 1.y branch are now via cherry picks, approved by Release Managers.

In the past, milestone-targeted pull requests were required to have an associated GitHub issue opened, but this is no longer the case. Features or enhancements are effectively GitHub issues or KEPs which lead to subsequent PRs.

The general labeling process should be consistent across artifact types.

Definitions

  • issue owners: Creator, assignees, and the user who moved the issue into a release milestone

  • Release Team: Each Kubernetes release has a team doing project management tasks described here.

    The contact info for the team associated with any given release can be found here.

  • Y days: Refers to business days

  • enhancement: see "Is My Thing an Enhancement?"

  • Enhancements Freeze: the deadline by which KEPs have to be completed in order for enhancements to be part of the current release

  • Exception Request: The process of requesting an extension on the deadline for a particular Enhancement

  • Code Freeze: The period of ~4 weeks before the final release date, during which only critical bug fixes are merged into the release.

  • Pruning: The process of removing an Enhancement from a release milestone if it is not fully implemented or is otherwise considered not stable.

  • release milestone: semantic version string or GitHub milestone referring to a release MAJOR.MINOR vX.Y version.

    See also release versioning.

  • release branch: Git branch release-X.Y created for the vX.Y milestone.

    Created at the time of the vX.Y-rc.0 release and maintained after the release for approximately 12 months with vX.Y.Z patch releases.

    Note: releases 1.19 and newer receive 1 year of patch release support, and releases 1.18 and earlier received 9 months of patch release support.

The Release Cycle

Image of one Kubernetes release cycle

Kubernetes releases currently happen approximately three times per year.

The release process can be thought of as having three main phases:

  • Enhancement Definition
  • Implementation
  • Stabilization

But in reality, this is an open source and agile project, with feature planning and implementation happening at all times. Given the project's scale and globally distributed developer base, it is critical to project velocity not to rely on a trailing stabilization phase; instead, continuous integration testing ensures the project is always stable, so that individual commits can be flagged as having broken something.

With ongoing feature definition through the year, some set of items will bubble up as targeting a given release. Enhancements Freeze starts ~4 weeks into release cycle. By this point all intended feature work for the given release has been defined in suitable planning artifacts in conjunction with the Release Team's Enhancements Lead.

After Enhancements Freeze, tracking milestones on PRs and issues is important. Items within the milestone are used as a punchdown list to complete the release. On issues, milestones must be applied correctly, via triage by the SIG, so that Release Team can track bugs and enhancements (any enhancement-related issue needs a milestone).

There is some automation in place to help automatically assign milestones to PRs.

This automation currently applies to the following repos:

  • kubernetes/enhancements
  • kubernetes/kubernetes
  • kubernetes/release
  • kubernetes/sig-release
  • kubernetes/test-infra

At creation time, PRs against the master branch need humans to hint at which milestone they might want the PR to target. Once merged, PRs against the master branch have milestones auto-applied so from that time onward human management of that PR's milestone is less necessary. On PRs against release branches, milestones are auto-applied when the PR is created so no human management of the milestone is ever necessary.

Any other effort that should be tracked by the Release Team and that doesn't fall under that automation umbrella should have a milestone applied.

Implementation and bug fixing is ongoing across the cycle, but culminates in a code freeze period.

Code Freeze starts in week ~12 and continues for ~2 weeks. Only critical bug fixes are accepted into the release codebase during this time.

There are approximately two weeks following Code Freeze, and preceding release, during which all remaining critical issues must be resolved before release. This also gives time for documentation finalization.

When the code base is sufficiently stable, the master branch re-opens for general development and work begins there for the next release milestone. Any remaining modifications for the current release are cherry picked from master back to the release branch. The release is built from the release branch.

Each release is part of a broader Kubernetes lifecycle:

Image of Kubernetes release lifecycle spanning three releases

Removal Of Items From The Milestone

Before getting too far into the process for adding an item to the milestone, please note:

Members of the Release Team may remove issues from the milestone if they or the responsible SIG determine that the issue is not actually blocking the release and is unlikely to be resolved in a timely fashion.

Members of the Release Team may remove PRs from the milestone for any of the following, or similar, reasons:

  • PR is potentially de-stabilizing and is not needed to resolve a blocking issue
  • PR is a new, late feature PR and has not gone through the enhancements process or the exception process
  • There is no responsible SIG willing to take ownership of the PR and resolve any follow-up issues with it
  • PR is not correctly labelled
  • Work has visibly halted on the PR and delivery dates are uncertain or late

While members of the Release Team will help with labelling and contacting SIG(s), it is the responsibility of the submitter to categorize PRs, and to secure support from the relevant SIG to guarantee that any breakage caused by the PR will be rapidly resolved.

Where additional action is required, the Release Team will attempt human-to-human escalation through the following channels:

  • Comment in GitHub mentioning the SIG team and SIG members as appropriate for the issue type
  • Emailing the SIG mailing list
    • bootstrapped with group email addresses from the community sig list
    • optionally also directly addressing SIG leadership or other SIG members
  • Messaging the SIG's Slack channel
    • bootstrapped with the Slack channel and SIG leadership from the community sig list
    • optionally directly "@" mentioning SIG leadership or others by handle

Adding An Item To The Milestone

Milestone Maintainers

The members of the milestone-maintainers GitHub team are entrusted with the responsibility of specifying the release milestone on GitHub artifacts.

This group is maintained by SIG Release and has representation from the various SIGs' leadership.

Feature Additions

Feature planning and definition takes many forms today, but a typical example might be a large piece of work described in a KEP, with associated task issues in GitHub. When the plan has reached an implementable state and work is underway, the enhancement or parts thereof are targeted for an upcoming milestone by creating GitHub issues and marking them with the Prow "/milestone" command.

For the first ~4 weeks into the release cycle, the Release Team's Enhancements Lead will interact with SIGs and feature owners via GitHub, Slack, and SIG meetings to capture all required planning artifacts.

If you have an enhancement to target for an upcoming release milestone, begin a conversation with your SIG leadership and with that release's Enhancements Lead.

Issue Additions

Issues are marked as targeting a milestone via the Prow "/milestone" command.

The Release Team's Bug Triage Lead and overall community watch incoming issues and triage them, as described in the contributor guide section on issue triage.

Marking issues with the milestone provides the community better visibility regarding when an issue was observed and by when the community feels it must be resolved. During Code Freeze, a milestone must be set to merge a PR.

An open issue is no longer required for a PR, but open issues and associated PRs should have synchronized labels. For example, a high-priority bug issue might not have its associated PR merged if the PR is only marked as lower priority.

PR Additions

PRs are marked as targeting a milestone via the Prow "/milestone" command.

This is a blocking requirement during Code Freeze as described above.

Other Required Labels

Here is the list of labels and their use and purpose.

SIG Owner Label

The SIG owner label defines the SIG to which we escalate if a milestone issue is languishing or needs additional attention. If there are no updates after escalation, the issue may be automatically removed from the milestone.

These are added with the Prow "/sig" command. For example to add the label indicating SIG Storage is responsible, comment with /sig storage.

Priority Label

Priority labels are used to determine an escalation path before moving issues out of the release milestone. They are also used to determine whether or not a release should be blocked on the resolution of the issue.

  • priority/critical-urgent: Never automatically move out of a release milestone; continually escalate to contributor and SIG through all available channels.
    • considered a release blocking issue
    • requires daily updates from issue owners during Code Freeze
    • would require a patch release if left undiscovered until after the minor release
  • priority/important-soon: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
    • not considered a release blocking issue
    • would not require a patch release
    • will automatically be moved out of the release milestone at Code Freeze after a 4-day grace period
  • priority/important-longterm: Escalate to the issue owners; move out of the milestone after 1 attempt.
    • even less urgent / critical than priority/important-soon
    • moved out of milestone more aggressively than priority/important-soon
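
These labels are applied with the Prow /priority command; for example, commenting:

/priority critical-urgent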

Issue/PR Kind Label

The issue kind is used to help identify the types of changes going into the release over time. This may allow the Release Team to develop a better understanding of what sorts of issues we would miss with a faster release cadence.

For release targeted issues, including pull requests, one of the following issue kind labels must be set:

  • kind/api-change: Adds, removes, or changes an API
  • kind/bug: Fixes a newly discovered bug
  • kind/cleanup: Adds tests, refactors code, fixes old bugs
  • kind/design: Related to design
  • kind/documentation: Adds documentation
  • kind/failing-test: CI test case is failing consistently
  • kind/feature: Adds new functionality
  • kind/flake: CI test case is showing intermittent failures

3 - Patch Releases

Schedule and team contact information for Kubernetes patch releases.

For general information about Kubernetes release cycle, see the release process description.

Cadence

Our typical patch release cadence is monthly. It is commonly a bit faster (1 to 2 weeks) for the earliest patch releases after a 1.X minor release. Critical bug fixes may cause a more immediate release outside of the normal cadence. We also aim to not make releases during major holiday periods.

Contact

See the Release Managers page for full contact details on the Patch Release Team.

Please give us a business day to respond - we may be in a different timezone!

Between releases, the team reviews incoming cherry pick requests on a weekly basis. The team will get in touch with submitters via GitHub PR, SIG channels in Slack, and direct messages in Slack and email if there are questions on the PR.

Cherry picks

Please follow the cherry pick process.
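
As a sketch of that process: from a kubernetes/kubernetes checkout, a contributor proposes a cherry pick of a merged PR (the number here is hypothetical) with the repository's hack/cherry_pick_pull.sh script:

# Propose merged master PR #98765 (hypothetical) onto the release-1.25 branch
export GITHUB_USER=<your-github-handle>   # used by the script to push to your fork
hack/cherry_pick_pull.sh upstream/release-1.25 98765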

Cherry picks must be merge-ready in GitHub, with proper labels (e.g., approved, lgtm, release-note) and passing CI tests, ahead of the cherry pick deadline. This is typically two days before the target release, but may be earlier. Getting your PRs ready sooner is better, as we need time to gather CI signal after merging your cherry picks and ahead of the actual release.

Cherry pick PRs which miss merge criteria will be carried over and tracked for the next patch release.

Support Period

In accordance with the yearly support KEP, the Kubernetes Community will support active patch release series for a period of roughly fourteen (14) months.

The first twelve months of this timeframe will be considered the standard period.

Towards the end of the twelve-month standard period, the following will happen:

  • Release Managers will cut a release
  • The patch release series will enter maintenance mode

During the two-month maintenance mode period, Release Managers may cut additional maintenance releases to resolve:

  • CVEs (under the advisement of the Security Response Committee)
  • dependency issues (including base image updates)
  • critical core component issues

At the end of the two-month maintenance mode period, the patch release series will be considered EOL (end of life) and cherry picks to the associated branch are to be closed soon afterwards.

Note that the 28th of the month was chosen for maintenance mode and EOL target dates for simplicity (every month has a 28th).

Upcoming Monthly Releases

Timelines may vary with the severity of bug fixes, but for easier planning we will target the following monthly release points. Unplanned, critical releases may also occur in between these.

Monthly Patch Release Cherry Pick Deadline Target Date
December 2022 2022-12-02 2022-12-07
January 2023 2023-01-13 2023-01-18
February 2023 2023-02-10 2023-02-15

Detailed Release History for Active Branches

1.25

Next patch release is 1.25.5.


Patch Release Cherry Pick Deadline Target Date Note
1.25.4 2022-11-04 2022-11-09
1.25.3 2022-10-07 2022-10-12
1.25.2 2022-09-20 2022-09-21 Out-of-Band release to fix the regression introduced in 1.25.1
1.25.1 2022-09-09 2022-09-14 Regression

1.24

Next patch release is 1.24.9.


Patch Release Cherry Pick Deadline Target Date Note
1.24.8 2022-11-04 2022-11-09
1.24.7 2022-10-07 2022-10-12
1.24.6 2022-09-20 2022-09-21 Out-of-Band release to fix the regression introduced in 1.24.5
1.24.5 2022-09-09 2022-09-14 Regression
1.24.4 2022-08-12 2022-08-17
1.24.3 2022-07-08 2022-07-13
1.24.2 2022-06-10 2022-06-15
1.24.1 2022-05-20 2022-05-24

1.23

Next patch release is 1.23.15.


Patch Release Cherry Pick Deadline Target Date Note
1.23.14 2022-11-04 2022-11-09
1.23.13 2022-10-07 2022-10-12
1.23.12 2022-09-20 2022-09-21 Out-of-Band release to fix the regression introduced in 1.23.11
1.23.11 2022-09-09 2022-09-14 Regression
1.23.10 2022-08-12 2022-08-17
1.23.9 2022-07-08 2022-07-13
1.23.8 2022-06-10 2022-06-15
1.23.7 2022-05-20 2022-05-24
1.23.6 2022-04-08 2022-04-13
1.23.5 2022-03-11 2022-03-16
1.23.4 2022-02-11 2022-02-16
1.23.3 2022-01-24 2022-01-25 Out-of-Band Release
1.23.2 2022-01-14 2022-01-19
1.23.1 2021-12-14 2021-12-16

1.22

Next patch release is 1.22.16.

The 1.22 release is in maintenance mode. As per the support policy, 1.22.16 will be released only if there are critical and/or security issues.


Patch Release Cherry Pick Deadline Target Date Note
1.22.15 2022-09-20 2022-09-21 Out-of-Band release to fix the regression introduced in 1.22.14
1.22.14 2022-09-09 2022-09-14 Regression
1.22.13 2022-08-12 2022-08-17
1.22.12 2022-07-08 2022-07-13
1.22.11 2022-06-10 2022-06-15
1.22.10 2022-05-20 2022-05-24
1.22.9 2022-04-08 2022-04-13
1.22.8 2022-03-11 2022-03-16
1.22.7 2022-02-11 2022-02-16
1.22.6 2022-01-14 2022-01-19
1.22.5 2021-12-10 2021-12-15
1.22.4 2021-11-12 2021-11-17
1.22.3 2021-10-22 2021-10-27
1.22.2 2021-09-10 2021-09-15
1.22.1 2021-08-16 2021-08-19

Non-Active Branch History

These releases are no longer supported.

Minor Version Final Patch Release End Of Life Date Note
1.21 1.21.14 2022-06-28
1.20 1.20.15 2022-02-28
1.19 1.19.16 2021-10-28
1.18 1.18.20 2021-06-18 Created to solve regression introduced in 1.18.19
1.18 1.18.19 2021-05-12 Regression
1.17 1.17.17 2021-01-13
1.16 1.16.15 2020-09-02
1.15 1.15.12 2020-05-06
1.14 1.14.10 2019-12-11
1.13 1.13.12 2019-10-15
1.12 1.12.10 2019-07-08
1.11 1.11.10 2019-05-01
1.10 1.10.13 2019-02-13
1.9 1.9.11 2018-09-29
1.8 1.8.15 2018-07-12
1.7 1.7.16 2018-04-04
1.6 1.6.13 2017-11-23
1.5 1.5.8 2017-10-01
1.4 1.4.12 2017-04-21
1.3 1.3.10 2016-11-01
1.2 1.2.7 2016-10-23

4 - Release Managers

"Release Managers" is an umbrella term that encompasses the set of Kubernetes contributors responsible for maintaining release branches and creating releases by using the tools SIG Release provides.

The responsibilities of each role are described below.

Contact

  • release-managers@kubernetes.io (Public): public discussion for Release Managers. Slack: #release-management (channel) / @release-managers (user group). Membership: all Release Managers, including Associates, Build Admins, and SIG Chairs.
  • release-managers-private@kubernetes.io (Private): private discussion for privileged Release Managers. Slack: N/A. Membership: Release Managers and SIG Release leadership.
  • security-release-team@kubernetes.io (Private): security release coordination with the Security Response Committee. Slack: #security-release-team (channel) / @security-rel-team (user group). Membership: security-discuss-private@kubernetes.io, release-managers-private@kubernetes.io.

Security Embargo Policy

Some information about releases is subject to embargo and we have defined policy about how those embargoes are set. Please refer to the Security Embargo Policy for more information.

Handbooks

NOTE: The Patch Release Team and Branch Manager handbooks will be de-duplicated at a later date.

Release Managers

Note: The documentation might refer to the Patch Release Team and the Branch Management role. Those two roles were consolidated into the Release Managers role.

Minimum requirements for Release Managers and Release Manager Associates are:

  • Familiarity with basic Unix commands and able to debug shell scripts.
  • Familiarity with branched source code workflows via git and associated git command line invocations.
  • General knowledge of Google Cloud (Cloud Build and Cloud Storage).
  • Open to seeking help and communicating clearly.
  • Kubernetes Community membership

Release Managers are responsible for:

  • Coordinating and cutting Kubernetes releases
  • Maintaining the release branches:
    • Reviewing cherry picks
    • Ensuring the release branch stays healthy and that no unintended patch gets merged
  • Mentoring the Release Manager Associates group
  • Actively developing features and maintaining the code in k/release
  • Supporting Release Manager Associates and contributors through actively participating in the Buddy program
    • Check in monthly with Associates and delegate tasks, empower them to cut releases, and mentor
    • Being available to support Associates in onboarding new contributors e.g., answering questions and suggesting appropriate work for them to do

This team at times works in close conjunction with the Security Response Committee and therefore should abide by the guidelines set forth in the Security Release Process.

GitHub Access Controls: @kubernetes/release-managers

GitHub Mentions: @kubernetes/release-engineering

Becoming a Release Manager

To become a Release Manager, one must first serve as a Release Manager Associate. Associates graduate to Release Manager by actively working on releases over several cycles and:

  • demonstrating the willingness to lead
  • tag-teaming with Release Managers on patches, to eventually cut a release independently
    • because releases have a limiting function, we also consider substantial contributions to image promotion and other core Release Engineering tasks
  • questioning how Associates work, suggesting improvements, gathering feedback, and driving change
  • being reliable and responsive
  • leaning into advanced work that requires Release Manager-level access and privileges to complete

Release Manager Associates

Release Manager Associates are apprentices to the Release Managers, formerly referred to as Release Manager shadows. They are responsible for:

  • Patch release work, cherry pick review
  • Contributing to k/release: updating dependencies and getting used to the source codebase
  • Contributing to the documentation: maintaining the handbooks, ensuring that release processes are documented
  • With help from a release manager: working with the Release Team during the release cycle and cutting Kubernetes releases
  • Seeking opportunities to help with prioritization and communication
    • Sending out pre-announcements and updates about patch releases
    • Updating the calendar, helping with the release dates and milestones from the release cycle timeline
  • Through the Buddy program, onboarding new contributors and pairing up with them on tasks

GitHub Mentions: @kubernetes/release-engineering

Becoming a Release Manager Associate

Contributors can become Associates by demonstrating the following:

  • consistent participation, including 6-12 months of active release engineering-related work
  • experience fulfilling a technical lead role on the Release Team during a release cycle
    • this experience provides a solid baseline for understanding how SIG Release works overall—including our expectations regarding technical skills, communications/responsiveness, and reliability
  • working on k/release items that improve our interactions with Testgrid, cleaning up libraries, etc.
    • these efforts require interacting and pairing with Release Managers and Associates

Build Admins

Build Admins are (currently) Google employees with the requisite access to Google build systems/tooling to publish deb/rpm packages on behalf of the Kubernetes project. They are responsible for:

  • Building, signing, and publishing the deb/rpm packages
  • Being the interlock with Release Managers (and Associates) on the final steps of each minor (1.Y) and patch (1.Y.Z) release

GitHub team: @kubernetes/build-admins

SIG Release Leads

SIG Release Chairs and Technical Leads are responsible for:

  • The governance of SIG Release
  • Leading knowledge exchange sessions for Release Managers and Associates
  • Coaching on leadership and prioritization

They are mentioned explicitly here as they are owners of the various communications channels and permissions groups (GitHub teams, GCP access) for each role. As such, they are highly privileged community members and privy to some private communications, which can at times relate to Kubernetes security disclosures.

GitHub team: @kubernetes/sig-release-leads


Past Branch Managers can be found in the releases directory of the kubernetes/sig-release repository, within release-x.y/release_team.md.

Example: 1.15 Release Team

5 - Notes

Kubernetes release notes.

Release notes can be found by reading the Changelog that matches your Kubernetes version. View the changelog for 1.25 on GitHub.

Alternatively, release notes can be searched and filtered online at relnotes.k8s.io. View filtered release notes for 1.25 on relnotes.k8s.io.

6 - Version Skew Policy

The maximum version skew supported between various Kubernetes components.

This document describes the maximum version skew supported between various Kubernetes components. Specific cluster deployment tools may place additional restrictions on version skew.

Supported versions

Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. For more information, see Kubernetes Release Versioning.

The Kubernetes project maintains release branches for the most recent three minor releases (1.25, 1.24, 1.23). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.

Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. Patch releases are cut from those branches at a regular cadence, plus additional urgent releases, when required.

The Release Managers group owns this decision.

For more information, see the Kubernetes patch releases page.

Supported version skew

kube-apiserver

In highly-available (HA) clusters, the newest and oldest kube-apiserver instances must be within one minor version.

Example:

  • newest kube-apiserver is at 1.25
  • other kube-apiserver instances are supported at 1.25 and 1.24

kubelet

kubelet must not be newer than kube-apiserver, and may be up to two minor versions older.

Example:

  • kube-apiserver is at 1.25
  • kubelet is supported at 1.25, 1.24, and 1.23

Example:

  • kube-apiserver instances are at 1.25 and 1.24
  • kubelet is supported at 1.24 and 1.23 (1.25 is not supported because that would be newer than the kube-apiserver instance at version 1.24)

kube-controller-manager, kube-scheduler, and cloud-controller-manager

kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades).

Example:

  • kube-apiserver is at 1.25
  • kube-controller-manager, kube-scheduler, and cloud-controller-manager are supported at 1.25 and 1.24

Example:

  • kube-apiserver instances are at 1.25 and 1.24
  • kube-controller-manager, kube-scheduler, and cloud-controller-manager communicate with a load balancer that can route to any kube-apiserver instance
  • kube-controller-manager, kube-scheduler, and cloud-controller-manager are supported at 1.24 (1.25 is not supported because that would be newer than the kube-apiserver instance at version 1.24)

kubectl

kubectl is supported within one minor version (older or newer) of kube-apiserver.

Example:

  • kube-apiserver is at 1.25
  • kubectl is supported at 1.26, 1.25, and 1.24

Example:

  • kube-apiserver instances are at 1.25 and 1.24
  • kubectl is supported at 1.25 and 1.24 (other versions would be more than one minor version skewed from one of the kube-apiserver components)

Supported component upgrade order

The supported version skew between components has implications on the order in which components must be upgraded. This section describes the order in which components must be upgraded to transition an existing cluster from version 1.24 to version 1.25.

Optionally, when preparing to upgrade, the Kubernetes project recommends that you do the following to benefit from as many regression and bug fixes as possible during your upgrade:

  • Ensure that components are on the most recent patch version of your current minor version.
  • Upgrade components to the most recent patch version of the target minor version.

For example, if you're running version 1.24, ensure that you're on the most recent patch version. Then, upgrade to the most recent patch version of 1.25.
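
Before starting, a quick survey of the versions currently in use might look like this (a sketch, assuming kubectl access to the cluster):

# Client and server versions (--short still works in the kubectl 1.25 series)
kubectl version --short
# kubelet version reported by every node
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion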

kube-apiserver

Pre-requisites:

  • In a single-instance cluster, the existing kube-apiserver instance is 1.24
  • In an HA cluster, all kube-apiserver instances are at 1.24 or 1.25 (this ensures maximum skew of 1 minor version between the oldest and newest kube-apiserver instance)
  • The kube-controller-manager, kube-scheduler, and cloud-controller-manager instances that communicate with this server are at version 1.24 (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version)
  • kubelet instances on all nodes are at version 1.24 or 1.23 (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version)
  • Registered admission webhooks are able to handle the data the new kube-apiserver instance will send them (one way to audit the configured matchPolicy is sketched after this list):
    • ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects are updated to include any new versions of REST resources added in 1.25 (or use the matchPolicy: Equivalent option available in v1.15+)
    • The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in 1.25
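
A quick way to audit the matchPolicy of registered webhooks before the upgrade (a sketch, assuming kubectl access):

# List admission webhooks together with the matchPolicy of each webhook entry
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,MATCH:.webhooks[*].matchPolicy'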

Upgrade kube-apiserver to 1.25

kube-controller-manager, kube-scheduler, and cloud-controller-manager

Pre-requisites:

  • The kube-apiserver instances these components communicate with are at 1.25 (in HA clusters in which these control plane components can communicate with any kube-apiserver instance in the cluster, all kube-apiserver instances must be upgraded before upgrading these components)

Upgrade kube-controller-manager, kube-scheduler, and cloud-controller-manager to 1.25. There is no required upgrade order between kube-controller-manager, kube-scheduler, and cloud-controller-manager. You can upgrade these components in any order, or even simultaneously.

kubelet

Pre-requisites:

  • The kube-apiserver instances the kubelet communicates with are at 1.25

Optionally upgrade kubelet instances to 1.25 (or they can be left at 1.24 or 1.23)

kube-proxy

  • kube-proxy must be the same minor version as kubelet on the node.
  • kube-proxy must not be newer than kube-apiserver.
  • kube-proxy must be at most two minor versions older than kube-apiserver.

Example:

If kube-proxy version is 1.23:

  • kubelet version must be at the same minor version as 1.23.
  • kube-apiserver version must be between 1.23 and 1.25, inclusive.
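
To check which kube-proxy version is actually running, here is a sketch assuming a kubeadm-style cluster, where kube-proxy is deployed as a DaemonSet named kube-proxy in the kube-system namespace:

# Print the kube-proxy container image; its tag carries the version
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'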