
core: fix Ceph monitor deployment in separate zones #14636

Merged on Aug 23, 2024 (1 commit)

Conversation

@BenoitKnecht (Contributor) commented on Aug 23, 2024:

Previously, Rook would fail to create monitors for a `CephCluster` spec that required them to be located in distinct failure domains:

```yaml
spec:
  mon:
    count: 3
    failureDomainLabel: topology.kubernetes.io/zone
    zones:
    - eu-central-1a
    - eu-central-1b
    - eu-central-1c
```

This was because Rook mistakenly read the `failureDomainLabel` from `spec.mon.stretchCluster.failureDomainLabel`, which is only populated when a stretch cluster is configured.

This commit ensures `spec.mon.failureDomainLabel` is used instead, which fixes the cluster deployment for the spec above.

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request
  • Pending release notes updated with breaking and/or notable changes for the next minor release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

@BenoitKnecht force-pushed the fix-ceph-mon-failuredomainlabel branch from 626a4e1 to 1c45677 on August 23, 2024 at 07:10
@BenoitKnecht changed the title from "core: Fix Ceph monitor deployment in separate zones" to "core: fix Ceph monitor deployment in separate zones" on Aug 23, 2024
@travisn (Member) left a comment:

The fix looks good, thanks for submitting. I see this was introduced with #12384.
Note that there is another way to spread the mons across zones as well, by specifying node affinity, anti-affinity, etc. in `placement.mon`.
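
For reference, a minimal sketch of that `placement.mon` alternative, assuming the standard `app: rook-ceph-mon` label that Rook sets on mon pods and the same zone label as the spec above:

```yaml
spec:
  placement:
    mon:
      # Require each mon pod to land in a different zone: no two pods
      # carrying the rook-ceph-mon app label may share a topology zone.
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: rook-ceph-mon
          topologyKey: topology.kubernetes.io/zone
```

Unlike `mon.zones`, this leaves zone selection to the Kubernetes scheduler rather than pinning each mon to a named zone.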

@BenoitKnecht (Contributor, Author) replied:

> The fix looks good, thanks for submitting. I see this was introduced with #12384. Note that there is another way to spread the mons across zones as well, by specifying node affinity, anti-affinity, etc. in `placement.mon`.

Thanks for merging, and thanks for the suggestion; it's actually a much better way to achieve what I was trying to do. 🙂

mergify bot added a commit that referenced this pull request on Aug 27, 2024: core: fix Ceph monitor deployment in separate zones (backport #14636)
mergify bot added a commit that referenced this pull request on Aug 27, 2024: core: fix Ceph monitor deployment in separate zones (backport #14636)