
core: fix host cleanup jobs to read flags correctly. #14631

Merged
merged 1 commit into rook:master from the fix-cleanup-jobs branch on Aug 22, 2024

Conversation

@sp98 (Contributor) commented Aug 22, 2024

The cleanup job CLI was not reading its flags correctly. As a result, dataDirHostPath was not getting cleaned up.

Resolves #14626
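
The diff is not shown in this conversation, but purely as an illustration of the kind of flag-handling bug described above, here is a minimal cobra sketch with hypothetical structure (this is not the Rook implementation): a variable bound to a flag is only populated after the command line has been parsed, so reading it too early yields an empty value and the cleanup path silently does nothing.

// Hypothetical sketch, not the actual Rook code: with cobra, a flag bound via
// StringVar only receives its value once Execute() has parsed the command
// line. Reading the bound variable before that point returns the zero value,
// i.e. an empty dataDirHostPath with nothing to clean up.
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

var dataDirHostPath string

func main() {
	cleanCmd := &cobra.Command{
		Use: "clean",
		Run: func(cmd *cobra.Command, args []string) {
			// Correct place to read the flag: after parsing has happened.
			fmt.Printf("cleaning up dataDirHostPath %q\n", dataDirHostPath)
		},
	}
	cleanCmd.Flags().StringVar(&dataDirHostPath, "data-dir-host-path", "", "host path used by the cluster")

	// Buggy pattern: capturing dataDirHostPath here (before Execute parses
	// the flags) would capture "", so the cleanup would silently skip it.
	if err := cleanCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}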

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request
  • Pending release notes updated with breaking and/or notable changes for the next minor release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

@sp98 (Contributor, Author) commented Aug 22, 2024

Cleanup job logs after the fix. The flag values line shows --data-dir-host-path and the other cleanup flags being read correctly; the job then removes the namespace and mon directories under the dataDirHostPath and sanitizes the OSD disks (the --sanitize-* flag values are reflected in the shred commands below):

oc logs cluster-cleanup-job-minikube-dhjd9  -n rook-ceph
2024/08/22 14:14:00 maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
2024-08-22 14:14:00.061038 I | rookcmd: starting Rook v1.15.0-alpha.0.94.ga6a80ee91 with arguments '/usr/local/bin/rook ceph clean host'
2024-08-22 14:14:00.061089 I | rookcmd: flag values: --cluster-fsid=820821e2-46d5-4858-8e80-9d4ae7e0d5a2, --data-dir-host-path=/var/lib/rook, --help=false, --log-level=DEBUG, --mon-secret=*****, --namespace-dir=rook-ceph, --sanitize-data-source=zero, --sanitize-iteration=1, --sanitize-method=quick
2024-08-22 14:14:00.061097 I | cephcmd: starting cluster clean up
2024-08-22 14:14:00.063659 I | cleanup: successfully cleaned up "/var/lib/rook/rook-ceph" directory
2024-08-22 14:14:00.064486 I | cleanup: successfully cleaned up the mon directory "/var/lib/rook/mon-a" on the dataDirHostPath "/var/lib/rook"
2024-08-22 14:14:00.065378 I | cleanup: successfully cleaned up the mon directory "/var/lib/rook/mon-b" on the dataDirHostPath "/var/lib/rook"
2024-08-22 14:14:00.067609 I | cleanup: successfully cleaned up the mon directory "/var/lib/rook/mon-c" on the dataDirHostPath "/var/lib/rook"
2024-08-22 14:14:00.068508 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2024-08-22 14:14:00.448881 D | cephosd: {}
2024-08-22 14:14:00.448902 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2024-08-22 14:14:00.448921 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2024-08-22 14:14:00.969791 D | cephosd: {
    "10b0fb57-4964-45dc-83e7-70aae19ec995": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vdb",
        "osd_id": 4,
        "osd_uuid": "10b0fb57-4964-45dc-83e7-70aae19ec995",
        "type": "bluestore"
    },
    "18ef8cd4-46b7-4fd1-b963-54821554cd8b": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vdg",
        "osd_id": 3,
        "osd_uuid": "18ef8cd4-46b7-4fd1-b963-54821554cd8b",
        "type": "bluestore"
    },
    "26c3081d-e0bb-409c-bd19-1d6859173275": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vdd",
        "osd_id": 0,
        "osd_uuid": "26c3081d-e0bb-409c-bd19-1d6859173275",
        "type": "bluestore"
    },
    "3449cf75-a770-4d05-8f21-355d7e90b7e6": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vdc",
        "osd_id": 5,
        "osd_uuid": "3449cf75-a770-4d05-8f21-355d7e90b7e6",
        "type": "bluestore"
    },
    "a7724888-95c9-4068-95b5-90d387ef2605": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vde",
        "osd_id": 1,
        "osd_uuid": "a7724888-95c9-4068-95b5-90d387ef2605",
        "type": "bluestore"
    },
    "d4773a9f-2b55-459b-9af2-33dfce297d15": {
        "ceph_fsid": "820821e2-46d5-4858-8e80-9d4ae7e0d5a2",
        "device": "/dev/vdf",
        "osd_id": 2,
        "osd_uuid": "d4773a9f-2b55-459b-9af2-33dfce297d15",
        "type": "bluestore"
    }
}
2024-08-22 14:14:00.970028 I | cephosd: 6 ceph-volume raw osd devices configured on this node
2024-08-22 14:14:00.970097 I | cleanup: sanitizing osd 4 disk "/dev/vdb"
2024-08-22 14:14:00.970164 I | cleanup: sanitizing osd 3 disk "/dev/vdg"
2024-08-22 14:14:00.970292 I | cleanup: sanitizing osd 0 disk "/dev/vdd"
2024-08-22 14:14:00.970417 I | cleanup: sanitizing osd 5 disk "/dev/vdc"
2024-08-22 14:14:00.970450 I | cleanup: sanitizing osd 1 disk "/dev/vde"
2024-08-22 14:14:00.970524 I | cleanup: sanitizing osd 2 disk "/dev/vdf"
2024-08-22 14:14:00.970630 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vdb
2024-08-22 14:14:00.970916 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vdf
2024-08-22 14:14:00.972789 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vdg
2024-08-22 14:14:00.974432 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vdd
2024-08-22 14:14:00.975001 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vdc
2024-08-22 14:14:00.976996 D | exec: Running command: shred --size=10M --random-source=/dev/zero --force --verbose --iterations=1 /dev/vde
2024-08-22 14:14:01.010906 I | cleanup: shred: /dev/vdb: pass 1/1 (random)...
2024-08-22 14:14:01.011104 I | cleanup: successfully sanitized osd disk "/dev/vdb"
2024-08-22 14:14:01.011915 I | cleanup: shred: /dev/vdc: pass 1/1 (random)...
2024-08-22 14:14:01.011932 I | cleanup: successfully sanitized osd disk "/dev/vdc"
2024-08-22 14:14:01.013172 I | cleanup: shred: /dev/vdf: pass 1/1 (random)...
2024-08-22 14:14:01.013397 I | cleanup: successfully sanitized osd disk "/dev/vdf"
2024-08-22 14:14:01.016225 I | cleanup: shred: /dev/vdd: pass 1/1 (random)...
2024-08-22 14:14:01.016243 I | cleanup: successfully sanitized osd disk "/dev/vdd"
2024-08-22 14:14:01.016447 I | cleanup: shred: /dev/vde: pass 1/1 (random)...
2024-08-22 14:14:01.016459 I | cleanup: successfully sanitized osd disk "/dev/vde"
2024-08-22 14:14:01.020038 I | cleanup: shred: /dev/vdg: pass 1/1 (random)...
2024-08-22 14:14:01.020099 I | cleanup: successfully sanitized osd disk "/dev/vdg"

Cleanup job cli was not reading the flags correctly.
As a result, dataDirHostPath was never cleaned up.

Signed-off-by: sp98 <sapillai@redhat.com>
@sp98 sp98 force-pushed the fix-cleanup-jobs branch from a6a80ee to a44a222 Compare August 22, 2024 14:17
@sp98 sp98 changed the title core: fix reading flags in cleanup job cmd. core: fix host cleanup jobs to read flags correctly. Aug 22, 2024
@sp98 sp98 requested a review from travisn August 22, 2024 15:35
@BlaineEXE (Member) commented:

Backports?

@travisn travisn merged commit bc78f1d into rook:master Aug 22, 2024
54 checks passed
BlaineEXE added a commit that referenced this pull request Aug 26, 2024
core: fix host cleanup jobs to read flags correctly. (backport #14631)
BlaineEXE added a commit that referenced this pull request Aug 27, 2024
core: fix host cleanup jobs to read flags correctly. (backport #14631)
Successfully merging this pull request may close these issues:

dataDirHostPath is not cleaned up automatically (#14626)