Version in base suite: 16.2.11+ds-2
Base version: ceph_16.2.11+ds-2
Target version: ceph_16.2.15+ds-0+deb12u1
Base file: /srv/ftp-master.debian.org/ftp/pool/main/c/ceph/ceph_16.2.11+ds-2.dsc
Target file: /srv/ftp-master.debian.org/policy/pool/main/c/ceph/ceph_16.2.15+ds-0+deb12u1.dsc

/srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/doc/cephfs/cephfs-top.png |binary
ceph-16.2.15+ds/CMakeLists.txt | 2
ceph-16.2.15+ds/PendingReleaseNotes | 109
ceph-16.2.15+ds/admin/doc-requirements.txt | 2
ceph-16.2.15+ds/ceph.spec | 10
ceph-16.2.15+ds/ceph.spec.in | 4
ceph-16.2.15+ds/cmake/modules/BuildRocksDB.cmake | 5
ceph-16.2.15+ds/debian/changelog | 15
ceph-16.2.15+ds/debian/control | 1
ceph-16.2.15+ds/debian/patches/32bit-fixes.patch | 16
ceph-16.2.15+ds/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch | 65
ceph-16.2.15+ds/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch | 26
ceph-16.2.15+ds/debian/patches/CVE-2024-48916.patch | 28
ceph-16.2.15+ds/debian/patches/bug1917414.patch | 143
ceph-16.2.15+ds/debian/patches/series | 4
ceph-16.2.15+ds/debian/watch | 2
ceph-16.2.15+ds/doc/architecture.rst | 2
ceph-16.2.15+ds/doc/ceph-volume/lvm/activate.rst | 6
ceph-16.2.15+ds/doc/ceph-volume/lvm/encryption.rst | 70
ceph-16.2.15+ds/doc/cephadm/compatibility.rst | 48
ceph-16.2.15+ds/doc/cephadm/host-management.rst | 7
ceph-16.2.15+ds/doc/cephadm/install.rst | 28
ceph-16.2.15+ds/doc/cephadm/operations.rst | 35
ceph-16.2.15+ds/doc/cephadm/services/index.rst | 87
ceph-16.2.15+ds/doc/cephadm/services/monitoring.rst | 16
ceph-16.2.15+ds/doc/cephadm/services/osd.rst | 2
ceph-16.2.15+ds/doc/cephadm/services/rgw.rst | 9
ceph-16.2.15+ds/doc/cephadm/troubleshooting.rst | 105
ceph-16.2.15+ds/doc/cephfs/administration.rst | 61
ceph-16.2.15+ds/doc/cephfs/cephfs-mirroring.rst | 175
ceph-16.2.15+ds/doc/cephfs/cephfs-shell.rst | 2
ceph-16.2.15+ds/doc/cephfs/cephfs-top.rst | 25
ceph-16.2.15+ds/doc/cephfs/client-auth.rst | 10
ceph-16.2.15+ds/doc/cephfs/disaster-recovery-experts.rst | 64
ceph-16.2.15+ds/doc/cephfs/fs-volumes.rst | 688
ceph-16.2.15+ds/doc/cephfs/health-messages.rst | 16
ceph-16.2.15+ds/doc/cephfs/mds-config-ref.rst | 19
ceph-16.2.15+ds/doc/cephfs/mount-using-fuse.rst | 3
ceph-16.2.15+ds/doc/cephfs/mount-using-kernel-driver.rst | 22
ceph-16.2.15+ds/doc/cephfs/nfs.rst | 12
ceph-16.2.15+ds/doc/cephfs/quota.rst | 62
ceph-16.2.15+ds/doc/cephfs/scrub.rst | 23
ceph-16.2.15+ds/doc/cephfs/snap-schedule.rst | 25
ceph-16.2.15+ds/doc/cephfs/troubleshooting.rst | 92
ceph-16.2.15+ds/doc/dev/ceph_krb_auth.rst | 10
ceph-16.2.15+ds/doc/dev/cephadm/developing-cephadm.rst | 2
ceph-16.2.15+ds/doc/dev/cephfs-mirroring.rst | 36
ceph-16.2.15+ds/doc/dev/cephfs-snapshots.rst | 5
ceph-16.2.15+ds/doc/dev/developer_guide/basic-workflow.rst | 56
ceph-16.2.15+ds/doc/dev/developer_guide/essentials.rst | 5
ceph-16.2.15+ds/doc/dev/developer_guide/tests-integration-tests.rst | 80
ceph-16.2.15+ds/doc/dev/network-encoding.rst | 3
ceph-16.2.15+ds/doc/dev/osd_internals/erasure_coding/jerasure.rst | 4
ceph-16.2.15+ds/doc/dev/osd_internals/past_intervals.rst | 93
ceph-16.2.15+ds/doc/glossary.rst | 109
ceph-16.2.15+ds/doc/images/zone-sync.svg |16425 ++++++----
ceph-16.2.15+ds/doc/index.rst | 9
ceph-16.2.15+ds/doc/install/index.rst | 43
ceph-16.2.15+ds/doc/man/8/ceph-objectstore-tool.rst | 6
ceph-16.2.15+ds/doc/man/8/ceph.rst | 2
ceph-16.2.15+ds/doc/man/8/cephfs-top.rst | 16
ceph-16.2.15+ds/doc/man/8/mount.ceph.rst | 10
ceph-16.2.15+ds/doc/man/8/rados.rst | 4
ceph-16.2.15+ds/doc/mgr/modules.rst | 1 ceph-16.2.15+ds/doc/mgr/nfs.rst | 28 ceph-16.2.15+ds/doc/mgr/prometheus.rst | 85 ceph-16.2.15+ds/doc/mgr/telemetry.rst | 21 ceph-16.2.15+ds/doc/rados/api/libcephsqlite.rst | 16 ceph-16.2.15+ds/doc/rados/configuration/auth-config-ref.rst | 325 ceph-16.2.15+ds/doc/rados/configuration/bluestore-config-ref.rst | 621 ceph-16.2.15+ds/doc/rados/configuration/ceph-conf.rst | 44 ceph-16.2.15+ds/doc/rados/configuration/common.rst | 205 ceph-16.2.15+ds/doc/rados/configuration/filestore-config-ref.rst | 163 ceph-16.2.15+ds/doc/rados/configuration/mon-config-ref.rst | 45 ceph-16.2.15+ds/doc/rados/configuration/mon-lookup-dns.rst | 18 ceph-16.2.15+ds/doc/rados/configuration/ms-ref.rst | 13 ceph-16.2.15+ds/doc/rados/configuration/msgr2.rst | 3 ceph-16.2.15+ds/doc/rados/configuration/osd-config-ref.rst | 2 ceph-16.2.15+ds/doc/rados/configuration/pool-pg-config-ref.rst | 42 ceph-16.2.15+ds/doc/rados/configuration/storage-devices.rst | 1 ceph-16.2.15+ds/doc/rados/operations/balancer.rst | 171 ceph-16.2.15+ds/doc/rados/operations/bluestore-migration.rst | 299 ceph-16.2.15+ds/doc/rados/operations/cache-tiering.rst | 4 ceph-16.2.15+ds/doc/rados/operations/control.rst | 6 ceph-16.2.15+ds/doc/rados/operations/crush-map.rst | 2 ceph-16.2.15+ds/doc/rados/operations/data-placement.rst | 72 ceph-16.2.15+ds/doc/rados/operations/devices.rst | 163 ceph-16.2.15+ds/doc/rados/operations/erasure-code-jerasure.rst | 8 ceph-16.2.15+ds/doc/rados/operations/erasure-code.rst | 168 ceph-16.2.15+ds/doc/rados/operations/health-checks.rst | 1360 ceph-16.2.15+ds/doc/rados/operations/monitoring-osd-pg.rst | 599 ceph-16.2.15+ds/doc/rados/operations/monitoring.rst | 451 ceph-16.2.15+ds/doc/rados/operations/operating.rst | 76 ceph-16.2.15+ds/doc/rados/operations/pg-concepts.rst | 2 ceph-16.2.15+ds/doc/rados/operations/pg-repair.rst | 103 ceph-16.2.15+ds/doc/rados/operations/stretch-mode.rst | 395 ceph-16.2.15+ds/doc/rados/operations/upmap.rst | 106 ceph-16.2.15+ds/doc/rados/operations/user-management.rst | 659 ceph-16.2.15+ds/doc/radosgw/dynamicresharding.rst | 5 ceph-16.2.15+ds/doc/radosgw/frontends.rst | 8 ceph-16.2.15+ds/doc/radosgw/keycloak.rst | 85 ceph-16.2.15+ds/doc/radosgw/multisite-sync-policy.rst | 2 ceph-16.2.15+ds/doc/radosgw/multisite.rst | 1173 ceph-16.2.15+ds/doc/radosgw/notifications.rst | 3 ceph-16.2.15+ds/doc/radosgw/placement.rst | 12 ceph-16.2.15+ds/doc/radosgw/s3.rst | 2 ceph-16.2.15+ds/doc/radosgw/s3select.rst | 67 ceph-16.2.15+ds/doc/radosgw/session-tags.rst | 11 ceph-16.2.15+ds/doc/rbd/iscsi-initiator-linux.rst | 87 ceph-16.2.15+ds/doc/rbd/rbd-exclusive-locks.rst | 90 ceph-16.2.15+ds/doc/start/documenting-ceph.rst | 354 ceph-16.2.15+ds/doc/start/get-involved.rst | 5 ceph-16.2.15+ds/doc/start/intro.rst | 26 ceph-16.2.15+ds/doc/start/os-recommendations.rst | 15 ceph-16.2.15+ds/install-deps.sh | 106 ceph-16.2.15+ds/make-dist | 5 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards/host.libsonnet | 27 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards/osd.libsonnet | 27 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards/rbd.libsonnet | 2 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards/rgw.libsonnet | 4 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/ceph-cluster.json | 363 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/host-details.json | 87 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/osd-device-details.json | 12 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/osds-overview.json | 85 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/radosgw-detail.json | 6 
ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/radosgw-overview.json | 20 ceph-16.2.15+ds/monitoring/ceph-mixin/dashboards_out/rbd-overview.json | 6 ceph-16.2.15+ds/monitoring/ceph-mixin/prometheus_alerts.libsonnet | 11 ceph-16.2.15+ds/monitoring/ceph-mixin/prometheus_alerts.yml | 12 ceph-16.2.15+ds/monitoring/ceph-mixin/tests_alerts/test_alerts.yml | 58 ceph-16.2.15+ds/qa/cephfs/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/cephfs/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/cephfs/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/cephfs/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/distros/all/rhel_8.5.yaml | 6 ceph-16.2.15+ds/qa/distros/all/rhel_8.6.yaml | 6 ceph-16.2.15+ds/qa/distros/all/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/distros/container-hosts/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/distros/container-hosts/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/distros/container-hosts/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/distros/container-hosts/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/distros/container-hosts/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/distros/podman/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/distros/podman/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/distros/podman/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/distros/podman/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/distros/podman/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/distros/supported-all-distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/distros/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/distros/supported/rhel_latest.yaml | 2 ceph-16.2.15+ds/qa/rgw/ignore-pg-availability.yaml | 2 ceph-16.2.15+ds/qa/standalone/ceph-helpers.sh | 23 ceph-16.2.15+ds/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh | 148 ceph-16.2.15+ds/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh | 145 ceph-16.2.15+ds/qa/suites/buildpackages/any/distros/rhel_8.5.yaml | 6 ceph-16.2.15+ds/qa/suites/buildpackages/any/distros/rhel_8.6.yaml | 6 ceph-16.2.15+ds/qa/suites/buildpackages/any/distros/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/buildpackages/tests/distros/rhel_8.5.yaml | 6 ceph-16.2.15+ds/qa/suites/buildpackages/tests/distros/rhel_8.6.yaml | 6 ceph-16.2.15+ds/qa/suites/buildpackages/tests/distros/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/crimson-rados/thrash/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/fs/32bits/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/32bits/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/32bits/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/32bits/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/bugs/client_trim_caps/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/bugs/client_trim_caps/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/bugs/client_trim_caps/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/full/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/full/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/full/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/full/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/functional/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/functional/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/functional/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 
ceph-16.2.15+ds/qa/suites/fs/functional/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/functional/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/functional/tasks/alternate-pool.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/functional/tasks/client-recovery.yaml | 3 ceph-16.2.15+ds/qa/suites/fs/functional/tasks/damage.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/functional/tasks/forward-scrub.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/functional/tasks/snap_schedule_snapdir.yaml | 30 ceph-16.2.15+ds/qa/suites/fs/libcephfs/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/libcephfs/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/libcephfs/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/libcephfs/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/mirror-ha/cephfs-mirror/1-volume-create-rm.yaml | 14 ceph-16.2.15+ds/qa/suites/fs/mirror-ha/cephfs-mirror/2-three-per-cluster.yaml | 12 ceph-16.2.15+ds/qa/suites/fs/mirror-ha/cephfs-mirror/three-per-cluster.yaml | 12 ceph-16.2.15+ds/qa/suites/fs/mirror-ha/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/mirror-ha/workloads/cephfs-mirror-ha-workunit.yaml | 4 ceph-16.2.15+ds/qa/suites/fs/mirror/supported-random-distros$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/mixed-clients/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/mixed-clients/kclient-overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/mixed-clients/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/mixed-clients/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/multiclient/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/multiclient/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/multiclient/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/multiclient/tasks/cephfs_misc_tests.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/multifs/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/multifs/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/multifs/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/multifs/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/multifs/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/permission/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/permission/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/permission/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/permission/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/shell/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/shell/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/shell/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/shell/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/snaps/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/snaps/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/snaps/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/snaps/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/snaps/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/overrides/client-shutdown.yaml | 6 
ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/thrash/multifs/overrides/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/overrides/client-shutdown.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/overrides/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/fs/thrash/workloads/tasks/1-thrash/osd.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/top/cluster/1-node.yaml | 4 ceph-16.2.15+ds/qa/suites/fs/top/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/top/supported-random-distros$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/traceless/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/traceless/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/traceless/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/traceless/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/old_client/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/old_client/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/old_client/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/upgraded_client/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/mds_upgrade_sequence/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/upgrade/mds_upgrade_sequence/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/nofs/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/nofs/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/nofs/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/upgraded_client/from_nautilus/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/upgrade/volumes/import-legacy/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/upgrade/volumes/import-legacy/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/upgrade/volumes/import-legacy/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/verify/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/verify/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/verify/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/verify/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/volumes/conf/mds.yaml | 1 
ceph-16.2.15+ds/qa/suites/fs/volumes/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/volumes/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/volumes/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/volumes/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/volumes/tasks/volumes/test/basic.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/volumes/tasks/volumes/test/finisher_per_module.yaml | 13 ceph-16.2.15+ds/qa/suites/fs/workload/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/fs/workload/distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/workload/mount/kclient/overrides/distro/stock/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/fs/workload/ms_mode/crc.yaml | 3 ceph-16.2.15+ds/qa/suites/fs/workload/ms_mode/legacy.yaml | 3 ceph-16.2.15+ds/qa/suites/fs/workload/ms_mode/secure.yaml | 3 ceph-16.2.15+ds/qa/suites/fs/workload/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/workload/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/fs/workload/subvolume/with-namespace-isolated-and-quota.yaml | 11 ceph-16.2.15+ds/qa/suites/fs/workload/subvolume/with-namespace-isolated.yaml | 11 ceph-16.2.15+ds/qa/suites/fs/workload/subvolume/with-no-extra-options.yaml | 10 ceph-16.2.15+ds/qa/suites/fs/workload/subvolume/with-quota.yaml | 11 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/bluestore-bitmap.yaml | 43 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/conf.yaml | 7 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/crc-rxbounce.yaml | 5 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/crc.yaml | 5 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/legacy-rxbounce.yaml | 5 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/legacy.yaml | 5 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/secure.yaml | 5 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/msgr-failures/few.yaml | 8 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/msgr-failures/many.yaml | 8 ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/tasks/rbd_xfstests.yaml | 38 ceph-16.2.15+ds/qa/suites/krbd/singleton/conf.yaml | 1 ceph-16.2.15+ds/qa/suites/krbd/singleton/msgr-failures/few.yaml | 8 ceph-16.2.15+ds/qa/suites/krbd/singleton/msgr-failures/many.yaml | 8 ceph-16.2.15+ds/qa/suites/krbd/singleton/tasks/krbd_watch_errors.yaml | 19 ceph-16.2.15+ds/qa/suites/krbd/singleton/tasks/rbd_xfstests.yaml | 38 ceph-16.2.15+ds/qa/suites/krbd/thrash/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/krbd/thrash/workloads/krbd_diff_continuous.yaml | 12 ceph-16.2.15+ds/qa/suites/netsplit/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/dashboard/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/dashboard/0-distro/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/orch/cephadm/dashboard/task/test_e2e.yaml | 25 ceph-16.2.15+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/orch/cephadm/mgr-nfs-upgrade/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/mgr-nfs-upgrade/1-start.yaml | 15 
ceph-16.2.15+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/2-ops/repave-all.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/2-ops/rm-zap-add.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/2-ops/rm-zap-flag.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/2-ops/rm-zap-wait.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/osds/2-ops/rmdir-reactivate.yaml | 8 ceph-16.2.15+ds/qa/suites/orch/cephadm/rbd_iscsi/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/smoke/start.yaml | 8 ceph-16.2.15+ds/qa/suites/orch/cephadm/thrash-old-clients/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/orch/cephadm/thrash/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/thrash/2-thrash.yaml | 17 ceph-16.2.15+ds/qa/suites/orch/cephadm/upgrade/4-wait.yaml | 11 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 
ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/tasks/rados_api_tests.yaml | 9 ceph-16.2.15+ds/qa/suites/orch/cephadm/with-work/tasks/rados_python.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_iscsi_container/centos_8.stream_container_tools.yaml | 14 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_iscsi_container/test_iscsi_container.yaml | 21 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_nfs.yaml | 6 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_orch_cli.yaml | 7 ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_orch_cli_mon.yaml | 7 ceph-16.2.15+ds/qa/suites/powercycle/osd/supported-all-distro/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/powercycle/osd/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/basic/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/basic/tasks/rados_api_tests.yaml | 9 ceph-16.2.15+ds/qa/suites/rados/basic/tasks/rados_cls_all.yaml | 1 ceph-16.2.15+ds/qa/suites/rados/basic/tasks/rados_python.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/dashboard/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/dashboard/0-distro/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/rados/cephadm/dashboard/task/test_e2e.yaml | 25 ceph-16.2.15+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/conf/mds.yaml | 1 ceph-16.2.15+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/overrides/ignorelist_health.yaml | 10 ceph-16.2.15+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/overrides/ignorelist_wrongly_marked_down.yaml | 6 ceph-16.2.15+ds/qa/suites/rados/cephadm/mgr-nfs-upgrade/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/mgr-nfs-upgrade/1-start.yaml | 15 ceph-16.2.15+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/2-ops/repave-all.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/2-ops/rm-zap-add.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/2-ops/rm-zap-flag.yaml | 7 
ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/2-ops/rm-zap-wait.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/osds/2-ops/rmdir-reactivate.yaml | 8 ceph-16.2.15+ds/qa/suites/rados/cephadm/rbd_iscsi/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/smoke/start.yaml | 8 ceph-16.2.15+ds/qa/suites/rados/cephadm/thrash-old-clients/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/cephadm/thrash/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/thrash/2-thrash.yaml | 17 ceph-16.2.15+ds/qa/suites/rados/cephadm/upgrade/4-wait.yaml | 11 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/0-distro/rhel_8.4_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/0-distro/rhel_8.6_container_tools_3.0.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/tasks/rados_api_tests.yaml | 9 ceph-16.2.15+ds/qa/suites/rados/cephadm/with-work/tasks/rados_python.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/0-distro/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_iscsi_container/centos_8.stream_container_tools.yaml | 14 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_iscsi_container/test_iscsi_container.yaml | 21 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_nfs.yaml | 6 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_orch_cli.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_orch_cli_mon.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/dashboard/centos_8.stream_container_tools.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/mgr/supported-random-distro$/rhel_8.yaml | 2 
ceph-16.2.15+ds/qa/suites/rados/mgr/tasks/per_module_finisher_stats.yaml | 43 ceph-16.2.15+ds/qa/suites/rados/mgr/tasks/workunits.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/monthrash/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/multimon/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/objectstore/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/rest/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/0-distro/ubuntu_18.04.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/0-distro/ubuntu_20.04.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/0-kubeadm.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/1-rook.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/2-workload/radosbench.yaml | 5 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/3-final.yaml | 8 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/cluster/1-node.yaml | 3 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/cluster/3-node.yaml | 7 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/k8s/1.21.yaml | 3 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/net/calico.yaml | 3 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/rook/1.6.2.yaml | 4 ceph-16.2.15+ds/qa/suites/rados/rook/smoke/rook/master.yaml | 3 ceph-16.2.15+ds/qa/suites/rados/singleton-bluestore/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/singleton-nomsgr/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/singleton/all/thrash-backfill-full.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/singleton/all/thrash-eio.yaml | 4 ceph-16.2.15+ds/qa/suites/rados/singleton/all/thrash-rados/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/singleton/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/standalone/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/standalone/workloads/mon-stretch.yaml | 18 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-big/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-big/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-isa/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-isa/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-overwrites/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-overwrites/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-shec/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code-shec/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash-erasure-code/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/thrash/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rados/thrash/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/upgrade/nautilus-x-singleton/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/verify/d-thrash/default/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rados/verify/tasks/rados_api_tests.yaml | 4 ceph-16.2.15+ds/qa/suites/rados/verify/tasks/rados_cls_all.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/basic/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/cli/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/cli/workloads/rbd_support_module_recovery.yaml | 13 
ceph-16.2.15+ds/qa/suites/rbd/cli_v1/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/encryption/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/immutable-object-cache/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/iscsi/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/librbd/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/maintenance/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/migration/4-supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/mirror-thrash/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/mirror/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/nbd/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/nbd/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rbd/nbd/workloads/rbd_nbd_diff_continuous.yaml | 14 ceph-16.2.15+ds/qa/suites/rbd/pwl-cache/home/3-supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/pwl-cache/tmpfs/3-supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/qemu/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/singleton-bluestore/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/singleton/all/qemu-iotests-no-cache.yaml | 1 ceph-16.2.15+ds/qa/suites/rbd/singleton/all/qemu-iotests-writearound.yaml | 1 ceph-16.2.15+ds/qa/suites/rbd/singleton/all/qemu-iotests-writeback.yaml | 1 ceph-16.2.15+ds/qa/suites/rbd/singleton/all/qemu-iotests-writethrough.yaml | 1 ceph-16.2.15+ds/qa/suites/rbd/singleton/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/thrash/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rbd/thrash/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rgw/crypt/2-kms/barbican.yaml | 6 ceph-16.2.15+ds/qa/suites/rgw/crypt/ignore-pg-availability.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multifs/ignore-pg-availability.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multifs/tasks/rgw_bucket_quota.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multifs/tasks/rgw_user_quota.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multifs/ubuntu_latest.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multisite/ignore-pg-availability.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multisite/realms/three-zone-plus-pubsub.yaml | 23 ceph-16.2.15+ds/qa/suites/rgw/multisite/realms/three-zones.yaml | 22 ceph-16.2.15+ds/qa/suites/rgw/multisite/supported-random-distro$/centos_8.yaml | 6 ceph-16.2.15+ds/qa/suites/rgw/multisite/supported-random-distro$/rhel_8.yaml | 6 ceph-16.2.15+ds/qa/suites/rgw/multisite/supported-random-distro$/ubuntu_latest.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/multisite/tasks/test_multi.yaml | 9 ceph-16.2.15+ds/qa/suites/rgw/multisite/valgrind.yaml | 20 ceph-16.2.15+ds/qa/suites/rgw/multisite/valgrind.yaml.disabled | 20 ceph-16.2.15+ds/qa/suites/rgw/singleton/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/sts/ignore-pg-availability.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/sts/supported-random-distro$/rhel_8.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/tempest/tasks/rgw_tempest.yaml | 13 ceph-16.2.15+ds/qa/suites/rgw/thrash/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/suites/rgw/thrash/ubuntu_latest.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/thrash/workload/rgw_bucket_quota.yaml | 2 ceph-16.2.15+ds/qa/suites/rgw/thrash/workload/rgw_multipart_upload.yaml | 2 
ceph-16.2.15+ds/qa/suites/rgw/thrash/workload/rgw_user_quota.yaml | 2
ceph-16.2.15+ds/qa/suites/rgw/verify/ignore-pg-availability.yaml | 2
ceph-16.2.15+ds/qa/suites/rgw/verify/tasks/bucket-check.yaml | 5
ceph-16.2.15+ds/qa/suites/rgw/verify/tasks/versioning.yaml | 5
ceph-16.2.15+ds/qa/suites/smoke/basic/supported-random-distro$/rhel_8.yaml | 2
ceph-16.2.15+ds/qa/suites/teuthology/buildpackages/supported-all-distro/rhel_8.yaml | 2
ceph-16.2.15+ds/qa/suites/teuthology/ceph/distros/rhel_latest.yaml | 2
ceph-16.2.15+ds/qa/suites/teuthology/rgw/distros/rhel_latest.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/0-cluster/openstack.yaml | 4
ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/0-cluster/start.yaml | 19
ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/1-install/pacific-client-x.yaml | 11
ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/2-workload/rbd_notification_tests.yaml | 34
ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/supported/ubuntu_20.04.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/centos_8.stream_container_tools.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/nofs/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/nofs/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/nofs/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/conf/mds.yaml | 1
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/overrides/ignorelist_health.yaml | 10
ceph-16.2.15+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/overrides/ignorelist_wrongly_marked_down.yaml | 6
ceph-16.2.15+ds/qa/suites/upgrade/nautilus-x-singleton/thrashosds-health.yaml | 12
ceph-16.2.15+ds/qa/suites/upgrade/nautilus-x/parallel/2-workload/rgw_ragweed_prepare.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/nautilus-x/parallel/5-final-workload/rgw_ragweed_check.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/thrashosds-health.yaml | 12
ceph-16.2.15+ds/qa/suites/upgrade/nautilus-x/stress-split/thrashosds-health.yaml | 12
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/5-final-workload/rgw_ragweed_check.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/centos_8.stream_container_tools.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/rhel_8.4_container_tools_3.0.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/rhel_8.6_container_tools_3.0.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/parallel/workload/rados_api.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/thrashosds-health.yaml | 12
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/4-workload/rbd-cls.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/thrashosds-health.yaml | 12
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/centos_8.stream_container_tools.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/rhel_8.4_container_tools_3.0.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/rhel_8.4_container_tools_rhel8.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/rhel_8.6_container_tools_3.0.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/rhel_8.6_container_tools_rhel8.yaml | 13
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/2-first-half-tasks/rbd-cls.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/octopus-x/stress-split/3-stress-tasks/rbd-cls.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/pacific-p2p/pacific-p2p-parallel/point-to-point-upgrade.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/pacific-p2p/pacific-p2p-stress-split/4-workload/rbd-cls.yaml | 2
ceph-16.2.15+ds/qa/suites/upgrade/pacific-p2p/pacific-p2p-stress-split/6-final-workload/rbd-python.yaml | 2
ceph-16.2.15+ds/qa/tasks/barbican.py | 15
ceph-16.2.15+ds/qa/tasks/ceph.py | 14
ceph-16.2.15+ds/qa/tasks/ceph_deploy.py | 6
ceph-16.2.15+ds/qa/tasks/ceph_fuse.py | 16
ceph-16.2.15+ds/qa/tasks/ceph_manager.py | 53
ceph-16.2.15+ds/qa/tasks/ceph_test_case.py | 36
ceph-16.2.15+ds/qa/tasks/cephadm.conf | 2
ceph-16.2.15+ds/qa/tasks/cephadm.py | 20
ceph-16.2.15+ds/qa/tasks/cephfs/cephfs_test_case.py | 19
ceph-16.2.15+ds/qa/tasks/cephfs/filesystem.py | 101
ceph-16.2.15+ds/qa/tasks/cephfs/fuse_mount.py | 11
ceph-16.2.15+ds/qa/tasks/cephfs/kernel_mount.py | 13
ceph-16.2.15+ds/qa/tasks/cephfs/mount.py | 134
ceph-16.2.15+ds/qa/tasks/cephfs/test_cephfs_shell.py | 6
ceph-16.2.15+ds/qa/tasks/cephfs/test_client_limits.py | 99
ceph-16.2.15+ds/qa/tasks/cephfs/test_client_recovery.py | 137
ceph-16.2.15+ds/qa/tasks/cephfs/test_damage.py | 99
ceph-16.2.15+ds/qa/tasks/cephfs/test_data_scan.py | 85
ceph-16.2.15+ds/qa/tasks/cephfs/test_exports.py | 63
ceph-16.2.15+ds/qa/tasks/cephfs/test_failover.py | 81
ceph-16.2.15+ds/qa/tasks/cephfs/test_forward_scrub.py | 13
ceph-16.2.15+ds/qa/tasks/cephfs/test_fragment.py | 40
ceph-16.2.15+ds/qa/tasks/cephfs/test_fstop.py | 101
ceph-16.2.15+ds/qa/tasks/cephfs/test_mirroring.py | 35
ceph-16.2.15+ds/qa/tasks/cephfs/test_misc.py | 43
ceph-16.2.15+ds/qa/tasks/cephfs/test_nfs.py | 184
ceph-16.2.15+ds/qa/tasks/cephfs/test_recovery_pool.py | 130
ceph-16.2.15+ds/qa/tasks/cephfs/test_scrub.py | 9
ceph-16.2.15+ds/qa/tasks/cephfs/test_scrub_checks.py | 51
ceph-16.2.15+ds/qa/tasks/cephfs/test_snap_schedules.py | 126 ceph-16.2.15+ds/qa/tasks/cephfs/test_snapshots.py | 87 ceph-16.2.15+ds/qa/tasks/cephfs/test_subvolume.py | 170 ceph-16.2.15+ds/qa/tasks/cephfs/test_volumes.py | 62 ceph-16.2.15+ds/qa/tasks/check_counter.py | 60 ceph-16.2.15+ds/qa/tasks/mgr/dashboard/test_rbd.py | 32 ceph-16.2.15+ds/qa/tasks/mgr/mgr_test_case.py | 6 ceph-16.2.15+ds/qa/tasks/mon_thrash.py | 40 ceph-16.2.15+ds/qa/tasks/qemu.py | 15 ceph-16.2.15+ds/qa/tasks/rgw_multi/tests_ps.py | 4958 --- ceph-16.2.15+ds/qa/tasks/rgw_multi/zone_ps.py | 428 ceph-16.2.15+ds/qa/tasks/rgw_multisite.py | 10 ceph-16.2.15+ds/qa/tasks/rgw_multisite_tests.py | 5 ceph-16.2.15+ds/qa/tasks/rook.py | 6 ceph-16.2.15+ds/qa/tasks/thrashosds-health.yaml | 12 ceph-16.2.15+ds/qa/tasks/vstart_runner.py | 25 ceph-16.2.15+ds/qa/valgrind.supp | 16 ceph-16.2.15+ds/qa/workunits/cephadm/test_cephadm.sh | 4 ceph-16.2.15+ds/qa/workunits/cephadm/test_iscsi_etc_hosts.sh | 21 ceph-16.2.15+ds/qa/workunits/cephadm/test_iscsi_pids_limit.sh | 24 ceph-16.2.15+ds/qa/workunits/cephtool/test.sh | 14 ceph-16.2.15+ds/qa/workunits/cls/test_cls_cmpomap.sh | 3 ceph-16.2.15+ds/qa/workunits/fs/misc/subvolume.sh | 63 ceph-16.2.15+ds/qa/workunits/kernel_untar_build.sh | 4 ceph-16.2.15+ds/qa/workunits/libcephfs/test.sh | 1 ceph-16.2.15+ds/qa/workunits/mgr/test_per_module_finisher.sh | 37 ceph-16.2.15+ds/qa/workunits/mon/pg_autoscaler.sh | 10 ceph-16.2.15+ds/qa/workunits/mon/rbd_snaps_ops.sh | 3 ceph-16.2.15+ds/qa/workunits/rados/test_crash.sh | 5 ceph-16.2.15+ds/qa/workunits/rbd/cli_generic.sh | 174 ceph-16.2.15+ds/qa/workunits/rbd/diff_continuous.sh | 138 ceph-16.2.15+ds/qa/workunits/rbd/krbd_watch_errors.sh | 53 ceph-16.2.15+ds/qa/workunits/rbd/rbd-nbd.sh | 55 ceph-16.2.15+ds/qa/workunits/rbd/rbd_mirror_bootstrap.sh | 13 ceph-16.2.15+ds/qa/workunits/rbd/rbd_mirror_helpers.sh | 10 ceph-16.2.15+ds/qa/workunits/rbd/rbd_mirror_journal.sh | 24 ceph-16.2.15+ds/qa/workunits/rbd/rbd_mirror_snapshot.sh | 27 ceph-16.2.15+ds/qa/workunits/rbd/rbd_support_module_recovery.sh | 77 ceph-16.2.15+ds/qa/workunits/rgw/common.py | 103 ceph-16.2.15+ds/qa/workunits/rgw/run-bucket-check.sh | 19 ceph-16.2.15+ds/qa/workunits/rgw/run-versioning.sh | 19 ceph-16.2.15+ds/qa/workunits/rgw/test_rgw_bucket_check.py | 194 ceph-16.2.15+ds/qa/workunits/rgw/test_rgw_reshard.py | 85 ceph-16.2.15+ds/qa/workunits/rgw/test_rgw_versioning.py | 110 ceph-16.2.15+ds/src/.git_version | 4 ceph-16.2.15+ds/src/CMakeLists.txt | 4 ceph-16.2.15+ds/src/blk/kernel/KernelDevice.cc | 27 ceph-16.2.15+ds/src/ceph-crash.in | 38 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/api/lvm.py | 2 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/lvm/batch.py | 25 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/lvm/deactivate.py | 2 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/lvm/migrate.py | 44 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/lvm/prepare.py | 14 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/lvm/zap.py | 17 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/raw/common.py | 6 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/raw/list.py | 45 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/devices/raw/prepare.py | 17 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/drive_group/main.py | 2 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/conftest.py | 14 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/devices/lvm/test_batch.py | 65 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/devices/lvm/test_deactivate.py | 2 
ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/devices/lvm/test_migrate.py | 450 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/functional/group_vars/bluestore_lvm_dmcrypt | 2 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/functional/lvm/centos8/bluestore/dmcrypt/group_vars/all | 2 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/functional/playbooks/deploy.yml | 11 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/util/test_arg_validators.py | 29 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/util/test_device.py | 30 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/tests/util/test_disk.py | 21 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/util/arg_validators.py | 12 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/util/device.py | 19 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/util/disk.py | 151 ceph-16.2.15+ds/src/ceph-volume/ceph_volume/util/encryption.py | 31 ceph-16.2.15+ds/src/ceph-volume/tox.ini | 2 ceph-16.2.15+ds/src/ceph_fuse.cc | 23 ceph-16.2.15+ds/src/cephadm/cephadm | 421 ceph-16.2.15+ds/src/cephadm/tests/test_cephadm.py | 81 ceph-16.2.15+ds/src/client/Client.cc | 421 ceph-16.2.15+ds/src/client/Client.h | 13 ceph-16.2.15+ds/src/client/Dentry.h | 1 ceph-16.2.15+ds/src/client/Inode.cc | 18 ceph-16.2.15+ds/src/client/MetaRequest.cc | 4 ceph-16.2.15+ds/src/client/MetaRequest.h | 6 ceph-16.2.15+ds/src/client/MetaSession.h | 11 ceph-16.2.15+ds/src/cls/cephfs/cls_cephfs.h | 5 ceph-16.2.15+ds/src/cls/cephfs/cls_cephfs_client.cc | 35 ceph-16.2.15+ds/src/cls/cephfs/cls_cephfs_client.h | 1 ceph-16.2.15+ds/src/cls/fifo/cls_fifo.cc | 66 ceph-16.2.15+ds/src/cls/fifo/cls_fifo_ops.h | 15 ceph-16.2.15+ds/src/cls/fifo/cls_fifo_types.h | 174 ceph-16.2.15+ds/src/cls/queue/cls_queue_src.cc | 12 ceph-16.2.15+ds/src/cls/rbd/cls_rbd.cc | 18 ceph-16.2.15+ds/src/cls/rgw/cls_rgw.cc | 415 ceph-16.2.15+ds/src/cls/rgw/cls_rgw_types.cc | 51 ceph-16.2.15+ds/src/common/OutputDataSocket.cc | 2 ceph-16.2.15+ds/src/common/TrackedOp.cc | 8 ceph-16.2.15+ds/src/common/bit_vector.hpp | 18 ceph-16.2.15+ds/src/common/ceph_crypto.cc | 19 ceph-16.2.15+ds/src/common/ceph_crypto.h | 1 ceph-16.2.15+ds/src/common/crc32c_aarch64.c | 10 ceph-16.2.15+ds/src/common/intrusive_lru.h | 4 ceph-16.2.15+ds/src/common/legacy_config_opts.h | 18 ceph-16.2.15+ds/src/common/options.cc | 124 ceph-16.2.15+ds/src/common/weighted_shuffle.h | 2 ceph-16.2.15+ds/src/crimson/osd/osd.cc | 23 ceph-16.2.15+ds/src/crimson/osd/pg.h | 3 ceph-16.2.15+ds/src/include/ceph_fs.h | 85 ceph-16.2.15+ds/src/include/cephfs/ceph_ll_client.h | 8 ceph-16.2.15+ds/src/include/cephfs/libcephfs.h | 25 ceph-16.2.15+ds/src/include/compat.h | 10 ceph-16.2.15+ds/src/include/intarith.h | 4 ceph-16.2.15+ds/src/include/rados/librados.hpp | 7 ceph-16.2.15+ds/src/include/rbd/librbd.h | 4 ceph-16.2.15+ds/src/include/types.h | 1 ceph-16.2.15+ds/src/include/utime.h | 2 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_2vect_mad_neon.S | 5 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_3vect_mad_neon.S | 5 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_4vect_mad_neon.S | 5 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_5vect_mad_neon.S | 5 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_6vect_mad_neon.S | 5 ceph-16.2.15+ds/src/isa-l/erasure_code/aarch64/gf_vect_mad_neon.S | 5 ceph-16.2.15+ds/src/kv/KeyValueDB.h | 9 ceph-16.2.15+ds/src/kv/RocksDBStore.cc | 84 ceph-16.2.15+ds/src/kv/RocksDBStore.h | 15 ceph-16.2.15+ds/src/libcephsqlite.cc | 4 ceph-16.2.15+ds/src/librados/IoCtxImpl.cc | 9 ceph-16.2.15+ds/src/librados/IoCtxImpl.h | 3 
ceph-16.2.15+ds/src/librados/ObjectOperationImpl.h | 27 ceph-16.2.15+ds/src/librados/RadosClient.cc | 18 ceph-16.2.15+ds/src/librados/RadosClient.h | 2 ceph-16.2.15+ds/src/librados/librados_c.cc | 109 ceph-16.2.15+ds/src/librados/librados_cxx.cc | 32 ceph-16.2.15+ds/src/librados/snap_set_diff.cc | 27 ceph-16.2.15+ds/src/librbd/AsioEngine.cc | 5 ceph-16.2.15+ds/src/librbd/ImageCtx.h | 2 ceph-16.2.15+ds/src/librbd/ImageWatcher.cc | 3 ceph-16.2.15+ds/src/librbd/Journal.cc | 83 ceph-16.2.15+ds/src/librbd/Journal.h | 23 ceph-16.2.15+ds/src/librbd/ManagedLock.cc | 33 ceph-16.2.15+ds/src/librbd/ObjectMap.h | 6 ceph-16.2.15+ds/src/librbd/api/DiffIterate.cc | 128 ceph-16.2.15+ds/src/librbd/api/DiffIterate.h | 7 ceph-16.2.15+ds/src/librbd/api/Image.cc | 22 ceph-16.2.15+ds/src/librbd/api/Io.cc | 48 ceph-16.2.15+ds/src/librbd/api/Mirror.cc | 21 ceph-16.2.15+ds/src/librbd/cache/ImageWriteback.cc | 16 ceph-16.2.15+ds/src/librbd/cache/WriteLogImageDispatch.cc | 20 ceph-16.2.15+ds/src/librbd/cache/WriteLogImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/crypto/CryptoImageDispatch.h | 21 ceph-16.2.15+ds/src/librbd/crypto/luks/FormatRequest.cc | 3 ceph-16.2.15+ds/src/librbd/crypto/luks/Header.cc | 4 ceph-16.2.15+ds/src/librbd/deep_copy/ImageCopyRequest.cc | 7 ceph-16.2.15+ds/src/librbd/exclusive_lock/ImageDispatch.cc | 20 ceph-16.2.15+ds/src/librbd/exclusive_lock/ImageDispatch.h | 11 ceph-16.2.15+ds/src/librbd/io/ImageDispatch.cc | 41 ceph-16.2.15+ds/src/librbd/io/ImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/io/ImageDispatchInterface.h | 17 ceph-16.2.15+ds/src/librbd/io/ImageDispatchSpec.h | 25 ceph-16.2.15+ds/src/librbd/io/ImageDispatcher.cc | 21 ceph-16.2.15+ds/src/librbd/io/ImageRequest.cc | 122 ceph-16.2.15+ds/src/librbd/io/ImageRequest.h | 83 ceph-16.2.15+ds/src/librbd/io/ObjectRequest.cc | 15 ceph-16.2.15+ds/src/librbd/io/QosImageDispatch.cc | 17 ceph-16.2.15+ds/src/librbd/io/QosImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/io/QueueImageDispatch.cc | 15 ceph-16.2.15+ds/src/librbd/io/QueueImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/io/RefreshImageDispatch.cc | 17 ceph-16.2.15+ds/src/librbd/io/RefreshImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/io/SimpleSchedulerObjectDispatch.cc | 3 ceph-16.2.15+ds/src/librbd/io/WriteBlockImageDispatch.cc | 17 ceph-16.2.15+ds/src/librbd/io/WriteBlockImageDispatch.h | 17 ceph-16.2.15+ds/src/librbd/journal/Replay.cc | 12 ceph-16.2.15+ds/src/librbd/managed_lock/GetLockerRequest.cc | 10 ceph-16.2.15+ds/src/librbd/migration/ImageDispatch.cc | 11 ceph-16.2.15+ds/src/librbd/migration/ImageDispatch.h | 11 ceph-16.2.15+ds/src/librbd/mirror/snapshot/CreatePrimaryRequest.cc | 104 ceph-16.2.15+ds/src/librbd/mirror/snapshot/UnlinkPeerRequest.cc | 81 ceph-16.2.15+ds/src/librbd/mirror/snapshot/UnlinkPeerRequest.h | 17 ceph-16.2.15+ds/src/librbd/object_map/DiffRequest.cc | 382 ceph-16.2.15+ds/src/librbd/object_map/DiffRequest.h | 29 ceph-16.2.15+ds/src/librbd/object_map/Types.h | 15 ceph-16.2.15+ds/src/librbd/operation/SnapshotRemoveRequest.cc | 5 ceph-16.2.15+ds/src/log/Entry.h | 2 ceph-16.2.15+ds/src/log/Log.cc | 130 ceph-16.2.15+ds/src/log/Log.h | 92 ceph-16.2.15+ds/src/log/test.cc | 123 ceph-16.2.15+ds/src/mds/Beacon.cc | 22 ceph-16.2.15+ds/src/mds/CDentry.cc | 59 ceph-16.2.15+ds/src/mds/CDentry.h | 22 ceph-16.2.15+ds/src/mds/CDir.cc | 65 ceph-16.2.15+ds/src/mds/CDir.h | 13 ceph-16.2.15+ds/src/mds/CInode.cc | 5 ceph-16.2.15+ds/src/mds/CInode.h | 1 ceph-16.2.15+ds/src/mds/Capability.cc | 45 ceph-16.2.15+ds/src/mds/Capability.h | 29 ceph-16.2.15+ds/src/mds/Locker.cc | 10 
ceph-16.2.15+ds/src/mds/MDBalancer.cc | 5 ceph-16.2.15+ds/src/mds/MDCache.cc | 199 ceph-16.2.15+ds/src/mds/MDCache.h | 29 ceph-16.2.15+ds/src/mds/MDLog.cc | 36 ceph-16.2.15+ds/src/mds/MDLog.h | 5 ceph-16.2.15+ds/src/mds/MDSAuthCaps.cc | 18 ceph-16.2.15+ds/src/mds/MDSAuthCaps.h | 6 ceph-16.2.15+ds/src/mds/MDSDaemon.cc | 8 ceph-16.2.15+ds/src/mds/MDSDaemon.h | 2 ceph-16.2.15+ds/src/mds/MDSMetaRequest.h | 33 ceph-16.2.15+ds/src/mds/MDSRank.cc | 36 ceph-16.2.15+ds/src/mds/MDSRank.h | 17 ceph-16.2.15+ds/src/mds/MDSTableClient.h | 4 ceph-16.2.15+ds/src/mds/Mantle.cc | 5 ceph-16.2.15+ds/src/mds/Migrator.cc | 2 ceph-16.2.15+ds/src/mds/Mutation.h | 7 ceph-16.2.15+ds/src/mds/PurgeQueue.cc | 3 ceph-16.2.15+ds/src/mds/ScrubHeader.h | 6 ceph-16.2.15+ds/src/mds/ScrubStack.cc | 72 ceph-16.2.15+ds/src/mds/ScrubStack.h | 2 ceph-16.2.15+ds/src/mds/Server.cc | 394 ceph-16.2.15+ds/src/mds/Server.h | 21 ceph-16.2.15+ds/src/mds/SessionMap.cc | 82 ceph-16.2.15+ds/src/mds/SessionMap.h | 13 ceph-16.2.15+ds/src/mds/SimpleLock.h | 1 ceph-16.2.15+ds/src/mds/SnapClient.h | 1 ceph-16.2.15+ds/src/mds/SnapRealm.cc | 4 ceph-16.2.15+ds/src/mds/StrayManager.cc | 38 ceph-16.2.15+ds/src/mds/StrayManager.h | 17 ceph-16.2.15+ds/src/mds/cephfs_features.cc | 1 ceph-16.2.15+ds/src/mds/cephfs_features.h | 4 ceph-16.2.15+ds/src/mds/events/EMetaBlob.h | 13 ceph-16.2.15+ds/src/mds/journal.cc | 75 ceph-16.2.15+ds/src/mds/locks.c | 2 ceph-16.2.15+ds/src/mds/mdstypes.cc | 13 ceph-16.2.15+ds/src/messages/MClientReply.h | 6 ceph-16.2.15+ds/src/messages/MClientRequest.h | 39 ceph-16.2.15+ds/src/messages/MDentryUnlink.h | 58 ceph-16.2.15+ds/src/messages/MMDSBeacon.h | 4 ceph-16.2.15+ds/src/messages/MMgrBeacon.h | 41 ceph-16.2.15+ds/src/messages/MOSDMap.h | 40 ceph-16.2.15+ds/src/mgr/ActivePyModule.h | 17 ceph-16.2.15+ds/src/mgr/ActivePyModules.cc | 56 ceph-16.2.15+ds/src/mgr/ActivePyModules.h | 5 ceph-16.2.15+ds/src/mgr/BaseMgrModule.cc | 28 ceph-16.2.15+ds/src/mgr/DaemonHealthMetric.h | 8 ceph-16.2.15+ds/src/mgr/DaemonServer.cc | 40 ceph-16.2.15+ds/src/mgr/DaemonServer.h | 3 ceph-16.2.15+ds/src/mgr/Mgr.cc | 2 ceph-16.2.15+ds/src/mgr/PyModuleRegistry.h | 20 ceph-16.2.15+ds/src/mon/AuthMonitor.cc | 4 ceph-16.2.15+ds/src/mon/CMakeLists.txt | 6 ceph-16.2.15+ds/src/mon/ConfigMap.cc | 4 ceph-16.2.15+ds/src/mon/ConfigMap.h | 2 ceph-16.2.15+ds/src/mon/ConfigMonitor.cc | 2 ceph-16.2.15+ds/src/mon/FSCommands.cc | 14 ceph-16.2.15+ds/src/mon/HealthMonitor.cc | 18 ceph-16.2.15+ds/src/mon/MDSMonitor.cc | 32 ceph-16.2.15+ds/src/mon/MgrMap.h | 56 ceph-16.2.15+ds/src/mon/MgrMonitor.cc | 64 ceph-16.2.15+ds/src/mon/MgrMonitor.h | 8 ceph-16.2.15+ds/src/mon/MonClient.cc | 16 ceph-16.2.15+ds/src/mon/MonCommands.h | 8 ceph-16.2.15+ds/src/mon/Monitor.cc | 50 ceph-16.2.15+ds/src/mon/Monitor.h | 2 ceph-16.2.15+ds/src/mon/OSDMonitor.cc | 42 ceph-16.2.15+ds/src/mon/PGMap.cc | 6 ceph-16.2.15+ds/src/mon/PaxosService.cc | 2 ceph-16.2.15+ds/src/mon/PaxosService.h | 3 ceph-16.2.15+ds/src/mount.fuse.ceph | 7 ceph-16.2.15+ds/src/msg/Dispatcher.h | 7 ceph-16.2.15+ds/src/msg/Message.cc | 3 ceph-16.2.15+ds/src/msg/Message.h | 1 ceph-16.2.15+ds/src/msg/async/AsyncMessenger.cc | 2 ceph-16.2.15+ds/src/msg/async/PosixStack.h | 5 ceph-16.2.15+ds/src/msg/async/Stack.cc | 62 ceph-16.2.15+ds/src/msg/async/Stack.h | 7 ceph-16.2.15+ds/src/msg/async/dpdk/DPDKStack.cc | 15 ceph-16.2.15+ds/src/msg/async/dpdk/DPDKStack.h | 7 ceph-16.2.15+ds/src/msg/async/rdma/RDMAStack.cc | 20 ceph-16.2.15+ds/src/msg/async/rdma/RDMAStack.h | 6 ceph-16.2.15+ds/src/neorados/CMakeLists.txt | 3 
ceph-16.2.15+ds/src/neorados/cls/fifo.cc | 385 ceph-16.2.15+ds/src/neorados/cls/fifo.h | 1754 - ceph-16.2.15+ds/src/os/bluestore/Allocator.cc | 4 ceph-16.2.15+ds/src/os/bluestore/Allocator.h | 8 ceph-16.2.15+ds/src/os/bluestore/AvlAllocator.cc | 36 ceph-16.2.15+ds/src/os/bluestore/AvlAllocator.h | 11 ceph-16.2.15+ds/src/os/bluestore/BitmapAllocator.cc | 25 ceph-16.2.15+ds/src/os/bluestore/BitmapAllocator.h | 1 ceph-16.2.15+ds/src/os/bluestore/BlueFS.cc | 2054 - ceph-16.2.15+ds/src/os/bluestore/BlueFS.h | 249 ceph-16.2.15+ds/src/os/bluestore/BlueRocksEnv.cc | 7 ceph-16.2.15+ds/src/os/bluestore/BlueStore.cc | 381 ceph-16.2.15+ds/src/os/bluestore/BlueStore.h | 165 ceph-16.2.15+ds/src/os/bluestore/StupidAllocator.cc | 46 ceph-16.2.15+ds/src/os/bluestore/StupidAllocator.h | 9 ceph-16.2.15+ds/src/os/bluestore/bluefs_types.cc | 1 ceph-16.2.15+ds/src/os/bluestore/bluefs_types.h | 27 ceph-16.2.15+ds/src/os/bluestore/bluestore_tool.cc | 4 ceph-16.2.15+ds/src/os/bluestore/fastbmap_allocator_impl.cc | 16 ceph-16.2.15+ds/src/os/memstore/MemStore.cc | 2 ceph-16.2.15+ds/src/osd/OSD.cc | 105 ceph-16.2.15+ds/src/osd/OSD.h | 7 ceph-16.2.15+ds/src/osd/OSDCap.cc | 6 ceph-16.2.15+ds/src/osd/OSDMap.cc | 30 ceph-16.2.15+ds/src/osd/OSDMap.h | 1 ceph-16.2.15+ds/src/osd/OpRequest.cc | 1 ceph-16.2.15+ds/src/osd/OpRequest.h | 8 ceph-16.2.15+ds/src/osd/PG.cc | 4 ceph-16.2.15+ds/src/osd/PG.h | 2 ceph-16.2.15+ds/src/osd/PeeringState.cc | 14 ceph-16.2.15+ds/src/osd/PeeringState.h | 2 ceph-16.2.15+ds/src/osd/PrimaryLogPG.cc | 10 ceph-16.2.15+ds/src/osd/PrimaryLogPG.h | 13 ceph-16.2.15+ds/src/osd/osd_op_util.cc | 19 ceph-16.2.15+ds/src/osd/osd_op_util.h | 2 ceph-16.2.15+ds/src/osd/osd_types.cc | 12 ceph-16.2.15+ds/src/osd/osd_types.h | 8 ceph-16.2.15+ds/src/osd/scrub_machine.cc | 2 ceph-16.2.15+ds/src/osdc/Journaler.cc | 41 ceph-16.2.15+ds/src/osdc/Journaler.h | 5 ceph-16.2.15+ds/src/osdc/Objecter.cc | 7 ceph-16.2.15+ds/src/perfglue/cpu_profiler.cc | 4 ceph-16.2.15+ds/src/pybind/ceph_argparse.py | 16 ceph-16.2.15+ds/src/pybind/cephfs/cephfs.pyx | 9 ceph-16.2.15+ds/src/pybind/mgr/balancer/module.py | 3 ceph-16.2.15+ds/src/pybind/mgr/ceph_module.pyi | 1 ceph-16.2.15+ds/src/pybind/mgr/cephadm/inventory.py | 54 ceph-16.2.15+ds/src/pybind/mgr/cephadm/migrations.py | 26 ceph-16.2.15+ds/src/pybind/mgr/cephadm/module.py | 59 ceph-16.2.15+ds/src/pybind/mgr/cephadm/serve.py | 140 ceph-16.2.15+ds/src/pybind/mgr/cephadm/services/cephadmservice.py | 16 ceph-16.2.15+ds/src/pybind/mgr/cephadm/services/ingress.py | 41 ceph-16.2.15+ds/src/pybind/mgr/cephadm/services/iscsi.py | 8 ceph-16.2.15+ds/src/pybind/mgr/cephadm/services/monitoring.py | 9 ceph-16.2.15+ds/src/pybind/mgr/cephadm/services/osd.py | 10 ceph-16.2.15+ds/src/pybind/mgr/cephadm/templates/services/ingress/haproxy.cfg.j2 | 8 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/fixtures.py | 4 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_cephadm.py | 286 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_facts.py | 21 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_migration.py | 29 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_scheduling.py | 34 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_services.py | 340 ceph-16.2.15+ds/src/pybind/mgr/cephadm/tests/test_upgrade.py | 6 ceph-16.2.15+ds/src/pybind/mgr/cephadm/upgrade.py | 2 ceph-16.2.15+ds/src/pybind/mgr/cephadm/utils.py | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/ci/cephadm/bootstrap-cluster.sh | 4 ceph-16.2.15+ds/src/pybind/mgr/dashboard/ci/cephadm/ceph_cluster.yml | 2 
ceph-16.2.15+ds/src/pybind/mgr/dashboard/ci/cephadm/start-cluster.sh | 18 ceph-16.2.15+ds/src/pybind/mgr/dashboard/constraints.txt | 14 ceph-16.2.15+ds/src/pybind/mgr/dashboard/controllers/cephfs.py | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/controllers/prometheus.py | 14 ceph-16.2.15+ds/src/pybind/mgr/dashboard/controllers/rbd.py | 25 ceph-16.2.15+ds/src/pybind/mgr/dashboard/controllers/rbd_mirroring.py | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/controllers/saml2.py | 5 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/cypress/integration/block/mirroring.po.ts | 5 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/hosts.po.ts | 4 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/services.po.ts | 4 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/06-cluster-check.e2e-spec.ts | 5 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/cypress/support/index.ts | 9 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/281.0d0cd268ddc6a6760dd4.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/281.57d0494f276bf42af928.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/483.57cfde62253651646349.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/483.f42c1d67e206231ecdac.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/index.html | 4 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/main.35ec9db61bcbaf2e5786.js | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/main.c3b711a3156fe72f66f4.js | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/runtime.57d4c22827fd93a5134f.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/runtime.dfeb6a20b4d203b567dc.js | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/styles.c0c3da54c9c7b1207ad8.css | 20 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/dist/en-US/styles.f05c06a6a64f4730faae.css | 20 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/proxy.conf.json.sample | 5 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/mirroring/image-list/image-list.component.html | 11 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/mirroring/image-list/image-list.component.ts | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/mirroring/mirroring.module.ts | 5 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/mirroring/pool-list/pool-list.component.html | 10 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/mirroring/pool-list/pool-list.component.ts | 18 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-form/rbd-form-edit-request.model.ts | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-list/rbd-list.component.html | 10 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-list/rbd-list.component.ts | 30 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-snapshot-form/rbd-snapshot-form-modal.component.html | 22 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-snapshot-form/rbd-snapshot-form-modal.component.ts | 12 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/create-cluster/create-cluster-review.component.html | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/create-cluster/create-cluster.component.html | 9 
ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/create-cluster/create-cluster.component.spec.ts | 24 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/create-cluster/create-cluster.component.ts | 117 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/hosts/host-form/host-form.component.html | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/hosts/host-form/host-form.component.ts | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/hosts/hosts.component.ts | 9 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/inventory/inventory-devices/inventory-devices.component.ts | 12 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-details/osd-details.component.html | 4 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-devices-selection-groups/osd-devices-selection-groups.component.ts | 13 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-devices-selection-modal/osd-devices-selection-modal.component.html | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-devices-selection-modal/osd-devices-selection-modal.component.ts | 12 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-form/osd-form.component.html | 6 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-form/osd-form.component.ts | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.html | 59 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.spec.ts | 42 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts | 13 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-user-form/rgw-user-form.component.html | 3 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-user-form/rgw-user-form.component.ts | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/shared/device-list/device-list.component.html | 29 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/ceph/shared/device-list/device-list.component.ts | 13 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/api/rbd.service.spec.ts | 7 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/api/rbd.service.ts | 9 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.html | 11 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.spec.ts | 25 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.ts | 11 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/color-class-from-text.pipe.ts | 28 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/components/components.module.ts | 9 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table/table.component.spec.ts | 22 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table/table.component.ts | 8 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/app/shared/models/service.interface.ts | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/styles.scss | 30 ceph-16.2.15+ds/src/pybind/mgr/dashboard/frontend/src/styles/defaults/_bootstrap-defaults.scss | 
8 ceph-16.2.15+ds/src/pybind/mgr/dashboard/module.py | 14 ceph-16.2.15+ds/src/pybind/mgr/dashboard/openapi.yaml | 33 ceph-16.2.15+ds/src/pybind/mgr/dashboard/services/auth.py | 10 ceph-16.2.15+ds/src/pybind/mgr/dashboard/services/rbd.py | 9 ceph-16.2.15+ds/src/pybind/mgr/dashboard/services/rgw_client.py | 15 ceph-16.2.15+ds/src/pybind/mgr/dashboard/services/tcmu_service.py | 1 ceph-16.2.15+ds/src/pybind/mgr/dashboard/settings.py | 2 ceph-16.2.15+ds/src/pybind/mgr/dashboard/tests/test_host.py | 10 ceph-16.2.15+ds/src/pybind/mgr/dashboard/tests/test_rgw.py | 24 ceph-16.2.15+ds/src/pybind/mgr/dashboard/tests/test_rgw_client.py | 2 ceph-16.2.15+ds/src/pybind/mgr/mgr_module.py | 10 ceph-16.2.15+ds/src/pybind/mgr/mgr_util.py | 40 ceph-16.2.15+ds/src/pybind/mgr/nfs/export.py | 23 ceph-16.2.15+ds/src/pybind/mgr/nfs/export_utils.py | 37 ceph-16.2.15+ds/src/pybind/mgr/nfs/module.py | 33 ceph-16.2.15+ds/src/pybind/mgr/nfs/tests/test_nfs.py | 117 ceph-16.2.15+ds/src/pybind/mgr/nfs/utils.py | 20 ceph-16.2.15+ds/src/pybind/mgr/orchestrator/_interface.py | 6 ceph-16.2.15+ds/src/pybind/mgr/orchestrator/module.py | 20 ceph-16.2.15+ds/src/pybind/mgr/pg_autoscaler/module.py | 8 ceph-16.2.15+ds/src/pybind/mgr/prometheus/module.py | 53 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/mirror_snapshot_schedule.py | 53 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/module.py | 57 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/perf.py | 36 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/schedule.py | 21 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/task.py | 23 ceph-16.2.15+ds/src/pybind/mgr/rbd_support/trash_purge_schedule.py | 35 ceph-16.2.15+ds/src/pybind/mgr/snap_schedule/fs/schedule.py | 6 ceph-16.2.15+ds/src/pybind/mgr/snap_schedule/fs/schedule_client.py | 47 ceph-16.2.15+ds/src/pybind/mgr/snap_schedule/module.py | 36 ceph-16.2.15+ds/src/pybind/mgr/snap_schedule/tests/fs/test_schedule_client.py | 4 ceph-16.2.15+ds/src/pybind/mgr/status/module.py | 4 ceph-16.2.15+ds/src/pybind/mgr/test_orchestrator/dummy_data.json | 20 ceph-16.2.15+ds/src/pybind/mgr/tests/__init__.py | 8 ceph-16.2.15+ds/src/pybind/mgr/tests/test_mgr_util.py | 19 ceph-16.2.15+ds/src/pybind/mgr/tests/test_tls.py | 3 ceph-16.2.15+ds/src/pybind/mgr/volumes/fs/async_job.py | 8 ceph-16.2.15+ds/src/pybind/mgr/volumes/fs/fs_util.py | 21 ceph-16.2.15+ds/src/pybind/mgr/volumes/fs/operations/volume.py | 14 ceph-16.2.15+ds/src/pybind/mgr/volumes/fs/volume.py | 11 ceph-16.2.15+ds/src/pybind/rados/rados.pyx | 27 ceph-16.2.15+ds/src/pybind/rbd/rbd.pyx | 31 ceph-16.2.15+ds/src/python-common/ceph/deployment/drive_group.py | 14 ceph-16.2.15+ds/src/python-common/ceph/deployment/drive_selection/selector.py | 14 ceph-16.2.15+ds/src/python-common/ceph/deployment/inventory.py | 7 ceph-16.2.15+ds/src/python-common/ceph/deployment/service_spec.py | 134 ceph-16.2.15+ds/src/python-common/ceph/tests/test_drive_group.py | 11 ceph-16.2.15+ds/src/python-common/ceph/tests/test_inventory.py | 52 ceph-16.2.15+ds/src/python-common/ceph/tests/test_service_spec.py | 4 ceph-16.2.15+ds/src/rgw/CMakeLists.txt | 9 ceph-16.2.15+ds/src/rgw/cls_fifo_legacy.cc | 447 ceph-16.2.15+ds/src/rgw/cls_fifo_legacy.h | 26 ceph-16.2.15+ds/src/rgw/rgw_admin.cc | 126 ceph-16.2.15+ds/src/rgw/rgw_asio_client.cc | 1 ceph-16.2.15+ds/src/rgw/rgw_asio_client.h | 3 ceph-16.2.15+ds/src/rgw/rgw_asio_frontend.cc | 75 ceph-16.2.15+ds/src/rgw/rgw_asio_frontend_timer.h | 3 ceph-16.2.15+ds/src/rgw/rgw_auth_keystone.cc | 62 ceph-16.2.15+ds/src/rgw/rgw_auth_keystone.h | 3 ceph-16.2.15+ds/src/rgw/rgw_auth_s3.cc | 59 
ceph-16.2.15+ds/src/rgw/rgw_auth_s3.h | 3 ceph-16.2.15+ds/src/rgw/rgw_bucket.cc | 442 ceph-16.2.15+ds/src/rgw/rgw_bucket.h | 24 ceph-16.2.15+ds/src/rgw/rgw_common.cc | 63 ceph-16.2.15+ds/src/rgw/rgw_common.h | 12 ceph-16.2.15+ds/src/rgw/rgw_coroutine.cc | 4 ceph-16.2.15+ds/src/rgw/rgw_cors.h | 13 ceph-16.2.15+ds/src/rgw/rgw_env.cc | 15 ceph-16.2.15+ds/src/rgw/rgw_iam_policy.cc | 23 ceph-16.2.15+ds/src/rgw/rgw_iam_policy.h | 4 ceph-16.2.15+ds/src/rgw/rgw_kms.cc | 16 ceph-16.2.15+ds/src/rgw/rgw_lc.cc | 22 ceph-16.2.15+ds/src/rgw/rgw_ldap.h | 4 ceph-16.2.15+ds/src/rgw/rgw_notify.cc | 19 ceph-16.2.15+ds/src/rgw/rgw_notify.h | 3 ceph-16.2.15+ds/src/rgw/rgw_object_lock.cc | 2 ceph-16.2.15+ds/src/rgw/rgw_op.cc | 496 ceph-16.2.15+ds/src/rgw/rgw_op.h | 31 ceph-16.2.15+ds/src/rgw/rgw_opa.cc | 21 ceph-16.2.15+ds/src/rgw/rgw_putobj_processor.cc | 8 ceph-16.2.15+ds/src/rgw/rgw_putobj_processor.h | 3 ceph-16.2.15+ds/src/rgw/rgw_rados.cc | 550 ceph-16.2.15+ds/src/rgw/rgw_rados.h | 27 ceph-16.2.15+ds/src/rgw/rgw_rest_s3.cc | 80 ceph-16.2.15+ds/src/rgw/rgw_rest_s3.h | 5 ceph-16.2.15+ds/src/rgw/rgw_rest_sts.cc | 44 ceph-16.2.15+ds/src/rgw/rgw_rest_swift.cc | 11 ceph-16.2.15+ds/src/rgw/rgw_rest_user.cc | 58 ceph-16.2.15+ds/src/rgw/rgw_sal_rados.cc | 2 ceph-16.2.15+ds/src/rgw/rgw_sts.cc | 25 ceph-16.2.15+ds/src/rgw/rgw_sts.h | 1 ceph-16.2.15+ds/src/rgw/rgw_swift_auth.cc | 12 ceph-16.2.15+ds/src/rgw/rgw_tag.cc | 3 ceph-16.2.15+ds/src/rgw/rgw_tag_s3.cc | 3 ceph-16.2.15+ds/src/rgw/rgw_trim_bilog.cc | 6 ceph-16.2.15+ds/src/rgw/rgw_user.cc | 6 ceph-16.2.15+ds/src/rgw/rgw_user.h | 2 ceph-16.2.15+ds/src/rgw/services/svc_notify.cc | 90 ceph-16.2.15+ds/src/rgw/services/svc_notify.h | 2 ceph-16.2.15+ds/src/rgw/services/svc_rados.cc | 7 ceph-16.2.15+ds/src/rgw/services/svc_rados.h | 1 ceph-16.2.15+ds/src/rgw/services/svc_zone.cc | 15 ceph-16.2.15+ds/src/rgw/services/svc_zone.h | 1 ceph-16.2.15+ds/src/script/cpatch | 2 ceph-16.2.15+ds/src/test/CMakeLists.txt | 7 ceph-16.2.15+ds/src/test/centos-8/ceph.spec.in | 4 ceph-16.2.15+ds/src/test/centos-8/install-deps.sh | 106 ceph-16.2.15+ds/src/test/cli-integration/rbd/snap-diff.t | 4 ceph-16.2.15+ds/src/test/cli/radosgw-admin/help.t | 11 ceph-16.2.15+ds/src/test/client/ops.cc | 6 ceph-16.2.15+ds/src/test/cls_fifo/CMakeLists.txt | 34 ceph-16.2.15+ds/src/test/cls_fifo/bench_cls_fifo.cc | 462 ceph-16.2.15+ds/src/test/cls_fifo/test_cls_fifo.cc | 739 ceph-16.2.15+ds/src/test/cls_rbd/test_cls_rbd.cc | 8 ceph-16.2.15+ds/src/test/cls_refcount/test_cls_refcount.cc | 380 ceph-16.2.15+ds/src/test/cls_rgw/test_cls_rgw.cc | 88 ceph-16.2.15+ds/src/test/common/test_intrusive_lru.cc | 1 ceph-16.2.15+ds/src/test/debian-jessie/debian/changelog | 15 ceph-16.2.15+ds/src/test/debian-jessie/debian/control | 1 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/32bit-fixes.patch | 16 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch | 65 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch | 26 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/CVE-2024-48916.patch | 28 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/bug1917414.patch | 143 ceph-16.2.15+ds/src/test/debian-jessie/debian/patches/series | 4 ceph-16.2.15+ds/src/test/debian-jessie/debian/watch | 2 ceph-16.2.15+ds/src/test/debian-jessie/install-deps.sh | 106 ceph-16.2.15+ds/src/test/fedora-31/ceph.spec.in | 4 ceph-16.2.15+ds/src/test/fedora-31/install-deps.sh | 106 
ceph-16.2.15+ds/src/test/fedora-32/ceph.spec.in | 4 ceph-16.2.15+ds/src/test/fedora-32/install-deps.sh | 106 ceph-16.2.15+ds/src/test/fedora-33/ceph.spec.in | 4 ceph-16.2.15+ds/src/test/fedora-33/install-deps.sh | 106 ceph-16.2.15+ds/src/test/fio/fio_ceph_messenger.cc | 2 ceph-16.2.15+ds/src/test/lazy-omap-stats/CMakeLists.txt | 2 ceph-16.2.15+ds/src/test/lazy-omap-stats/lazy_omap_stats_test.cc | 147 ceph-16.2.15+ds/src/test/lazy-omap-stats/lazy_omap_stats_test.h | 14 ceph-16.2.15+ds/src/test/libcephfs/CMakeLists.txt | 14 ceph-16.2.15+ds/src/test/libcephfs/multiclient.cc | 84 ceph-16.2.15+ds/src/test/libcephfs/suidsgid.cc | 331 ceph-16.2.15+ds/src/test/libcephfs/test.cc | 102 ceph-16.2.15+ds/src/test/libcephfs/vxattr.cc | 4 ceph-16.2.15+ds/src/test/librados/TestCase.cc | 13 ceph-16.2.15+ds/src/test/librados/aio.cc | 37 ceph-16.2.15+ds/src/test/librados/aio_cxx.cc | 311 ceph-16.2.15+ds/src/test/librados/misc.cc | 42 ceph-16.2.15+ds/src/test/librados/snapshots_cxx.cc | 60 ceph-16.2.15+ds/src/test/librados/test_shared.h | 2 ceph-16.2.15+ds/src/test/librados/testcase_cxx.cc | 25 ceph-16.2.15+ds/src/test/librados/watch_notify.cc | 7 ceph-16.2.15+ds/src/test/librados_test_stub/LibradosTestStub.cc | 4 ceph-16.2.15+ds/src/test/librados_test_stub/MockTestMemIoCtxImpl.h | 12 ceph-16.2.15+ds/src/test/librados_test_stub/NeoradosTestStub.cc | 2 ceph-16.2.15+ds/src/test/librados_test_stub/TestIoCtxImpl.cc | 2 ceph-16.2.15+ds/src/test/librados_test_stub/TestIoCtxImpl.h | 2 ceph-16.2.15+ds/src/test/librados_test_stub/TestMemCluster.cc | 9 ceph-16.2.15+ds/src/test/librados_test_stub/TestWatchNotify.cc | 11 ceph-16.2.15+ds/src/test/librbd/CMakeLists.txt | 3 ceph-16.2.15+ds/src/test/librbd/deep_copy/test_mock_ImageCopyRequest.cc | 1 ceph-16.2.15+ds/src/test/librbd/fsx.cc | 26 ceph-16.2.15+ds/src/test/librbd/io/test_mock_ImageRequest.cc | 199 ceph-16.2.15+ds/src/test/librbd/io/test_mock_ObjectRequest.cc | 115 ceph-16.2.15+ds/src/test/librbd/journal/test_Entries.cc | 63 ceph-16.2.15+ds/src/test/librbd/journal/test_Stress.cc | 121 ceph-16.2.15+ds/src/test/librbd/journal/test_mock_Replay.cc | 15 ceph-16.2.15+ds/src/test/librbd/managed_lock/test_mock_GetLockerRequest.cc | 44 ceph-16.2.15+ds/src/test/librbd/mirror/snapshot/test_mock_CreatePrimaryRequest.cc | 429 ceph-16.2.15+ds/src/test/librbd/mirror/snapshot/test_mock_UnlinkPeerRequest.cc | 136 ceph-16.2.15+ds/src/test/librbd/mock/MockObjectMap.h | 12 ceph-16.2.15+ds/src/test/librbd/mock/io/MockImageDispatch.h | 17 ceph-16.2.15+ds/src/test/librbd/object_map/test_mock_DiffRequest.cc | 2225 + ceph-16.2.15+ds/src/test/librbd/operation/test_mock_SnapshotRemoveRequest.cc | 56 ceph-16.2.15+ds/src/test/librbd/test_internal.cc | 72 ceph-16.2.15+ds/src/test/librbd/test_librbd.cc | 556 ceph-16.2.15+ds/src/test/librbd/test_main.cc | 2 ceph-16.2.15+ds/src/test/librbd/test_mirroring.cc | 4 ceph-16.2.15+ds/src/test/librbd/test_mock_Journal.cc | 2 ceph-16.2.15+ds/src/test/librbd/test_mock_ManagedLock.cc | 29 ceph-16.2.15+ds/src/test/mds/TestMDSAuthCaps.cc | 32 ceph-16.2.15+ds/src/test/mon/test_mon_workloadgen.cc | 6 ceph-16.2.15+ds/src/test/msgr/perf_msgr_client.cc | 2 ceph-16.2.15+ds/src/test/msgr/perf_msgr_server.cc | 2 ceph-16.2.15+ds/src/test/msgr/test_msgr.cc | 6 ceph-16.2.15+ds/src/test/objectstore/Allocator_test.cc | 3 ceph-16.2.15+ds/src/test/objectstore/fastbmap_allocator_test.cc | 181 ceph-16.2.15+ds/src/test/objectstore/store_test.cc | 27 ceph-16.2.15+ds/src/test/objectstore/test_bluefs.cc | 224 ceph-16.2.15+ds/src/test/opensuse-13.2/ceph.spec.in | 4 
ceph-16.2.15+ds/src/test/opensuse-13.2/install-deps.sh | 106 ceph-16.2.15+ds/src/test/osd/osdcap.cc | 43 ceph-16.2.15+ds/src/test/pybind/test_ceph_argparse.py | 28 ceph-16.2.15+ds/src/test/pybind/test_rados.py | 5 ceph-16.2.15+ds/src/test/pybind/test_rbd.py | 8 ceph-16.2.15+ds/src/test/rbd_mirror/image_replayer/snapshot/test_mock_Replayer.cc | 25 ceph-16.2.15+ds/src/test/rbd_mirror/image_replayer/test_mock_BootstrapRequest.cc | 53 ceph-16.2.15+ds/src/test/rbd_mirror/test_mock_ImageReplayer.cc | 39 ceph-16.2.15+ds/src/test/rbd_mirror/test_mock_MirrorStatusUpdater.cc | 12 ceph-16.2.15+ds/src/test/rgw/rgw_multi/tests_ps.py | 4958 --- ceph-16.2.15+ds/src/test/rgw/rgw_multi/zone_ps.py | 428 ceph-16.2.15+ds/src/test/rgw/test_cls_fifo_legacy.cc | 1 ceph-16.2.15+ds/src/test/rgw/test_multi.md | 3 ceph-16.2.15+ds/src/test/rgw/test_multi.py | 26 ceph-16.2.15+ds/src/test/rgw/test_rgw_iam_policy.cc | 354 ceph-16.2.15+ds/src/test/system/systest_runnable.cc | 6 ceph-16.2.15+ds/src/test/test_weighted_shuffle.cc | 52 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/changelog | 15 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/control | 1 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/32bit-fixes.patch | 16 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch | 65 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch | 26 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/CVE-2024-48916.patch | 28 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/bug1917414.patch | 143 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/patches/series | 4 ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/watch | 2 ceph-16.2.15+ds/src/test/ubuntu-16.04/install-deps.sh | 106 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/changelog | 15 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/control | 1 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/32bit-fixes.patch | 16 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch | 65 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch | 26 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/CVE-2024-48916.patch | 28 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/bug1917414.patch | 143 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/patches/series | 4 ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/watch | 2 ceph-16.2.15+ds/src/test/ubuntu-18.04/install-deps.sh | 106 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/changelog | 15 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/control | 1 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/32bit-fixes.patch | 16 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch | 65 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch | 26 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/CVE-2024-48916.patch | 28 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/bug1917414.patch | 143 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/patches/series | 4 ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/watch | 2 ceph-16.2.15+ds/src/test/ubuntu-20.04/install-deps.sh | 106 ceph-16.2.15+ds/src/tools/ceph-dencoder/rbd_types.h | 2 ceph-16.2.15+ds/src/tools/ceph_objectstore_tool.cc | 46 ceph-16.2.15+ds/src/tools/cephfs/DataScan.cc | 275 
ceph-16.2.15+ds/src/tools/cephfs/DataScan.h | 2 ceph-16.2.15+ds/src/tools/cephfs/JournalTool.cc | 2 ceph-16.2.15+ds/src/tools/cephfs/top/CMakeLists.txt | 4 ceph-16.2.15+ds/src/tools/cephfs/top/cephfs-top | 686 ceph-16.2.15+ds/src/tools/cephfs_mirror/FSMirror.h | 24 ceph-16.2.15+ds/src/tools/cephfs_mirror/InstanceWatcher.cc | 3 ceph-16.2.15+ds/src/tools/cephfs_mirror/InstanceWatcher.h | 13 ceph-16.2.15+ds/src/tools/cephfs_mirror/Mirror.cc | 47 ceph-16.2.15+ds/src/tools/cephfs_mirror/Mirror.h | 19 ceph-16.2.15+ds/src/tools/cephfs_mirror/MirrorWatcher.cc | 3 ceph-16.2.15+ds/src/tools/cephfs_mirror/MirrorWatcher.h | 13 ceph-16.2.15+ds/src/tools/cephfs_mirror/PeerReplayer.cc | 30 ceph-16.2.15+ds/src/tools/cephfs_mirror/PeerReplayer.h | 1 ceph-16.2.15+ds/src/tools/kvstore_tool.cc | 2 ceph-16.2.15+ds/src/tools/osdmaptool.cc | 5 ceph-16.2.15+ds/src/tools/rados/rados.cc | 20 ceph-16.2.15+ds/src/tools/rbd_mirror/ImageReplayer.cc | 20 ceph-16.2.15+ds/src/tools/rbd_mirror/image_replayer/snapshot/Replayer.cc | 34 ceph-16.2.15+ds/src/tools/rbd_mirror/image_replayer/snapshot/Replayer.h | 3 ceph-16.2.15+ds/src/tools/rbd_nbd/rbd-nbd.cc | 114 ceph-16.2.15+ds/src/vstart.sh | 7 ceph-16.2.15+ds/win32_deps_build.sh | 2 1273 files changed, 43565 insertions(+), 33707 deletions(-) diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/.qa: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/.qa: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/overrides/distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/overrides/distro/stock/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/overrides/distro/testing/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/mount/kclient/overrides/distro/testing/flavor/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/objectstore-ec/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/cephfs/overrides/fuse/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/fs/workload/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/fs/workload/tasks/workunit/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/fs/workload/tasks/workunit/suites/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/hadoop/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/hadoop/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/hadoop/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/hadoop/basic/distros/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/hadoop/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/ms_mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/ms_mode/crc$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/ms_mode/legacy$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/ms_mode$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/striping/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/striping/default/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/striping/default/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/striping/fancy/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/striping/fancy/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/fsx/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/ms_modeless/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/ms_modeless/ceph/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/ms_modeless/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/ms_modeless/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/ms_mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/ms_mode/crc$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/ms_mode/legacy$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/ms_mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/ms_mode/crc$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/ms_mode/legacy$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/rbd-nomount/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/singleton/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/singleton/ms_mode$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/singleton/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/singleton/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/ms_mode$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/krbd/singleton-msgr-failures/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/clusters/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/ms_mode$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/thrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/unmap/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/unmap/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/unmap/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/unmap/kernels/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/unmap/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/sysfs/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/sysfs/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/sysfs/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/sysfs/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/wac/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/wac/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/wac/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/wac/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/krbd/wac/wac/verify/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/fs-misc/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/fs-misc/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/fs-misc/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/mds_restart/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/mds_restart/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/mds_restart/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/multimds/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/multimds/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/multimds/mounts/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/multimds/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/marginal/multimds/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/mixed-clients/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/mixed-clients/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/mixed-clients/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/mixed-clients/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/netsplit/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/dashboard/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/dashboard/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/0-from/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/1-volume/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/1-volume/1-ranks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/1-volume/2-allow_standby_replay/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mgr-nfs-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/mgr-nfs-upgrade/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/orchestrator_cli/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/orchestrator_cli/0-random-distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/osds/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/osds/0-distro/.qa: recursive 
directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/rbd_iscsi/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/rbd_iscsi/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/rbd_iscsi/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/rbd_iscsi/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/rbd_iscsi/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/smoke/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/smoke/distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/smoke-roleless/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/smoke-roleless/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/smoke-singlehost/0-distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash/3-tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/0-size-min-size-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/backoff/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/d-balancer/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/thrash-old-clients/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/upgrade/3-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/with-work/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/with-work/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/with-work/mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/with-work/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/workunits/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/cephadm/workunits/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/orch/cephadm/workunits/task/test_iscsi_container/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/rook/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/rook/smoke/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/orch/rook/smoke/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/perf-basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/perf-basic/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/perf-basic/settings/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/perf-basic/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/powercycle/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/powercycle/osd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/powercycle/osd/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/powercycle/osd/powercycle/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/powercycle/osd/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/basic/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/dashboard/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/dashboard/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/tasks/0-from/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/tasks/1-volume/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/tasks/1-volume/1-ranks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mds_upgrade_sequence/tasks/1-volume/2-allow_standby_replay/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mgr-nfs-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/mgr-nfs-upgrade/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/orchestrator_cli/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/orchestrator_cli/0-random-distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/osds/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/osds/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/rbd_iscsi/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/rbd_iscsi/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/rbd_iscsi/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/rbd_iscsi/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/rbd_iscsi/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/smoke/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/smoke/distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/smoke-roleless/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/smoke-roleless/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/smoke-singlehost/0-distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash/3-tasks/.qa: recursive directory loop 
diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/0-size-min-size-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/backoff/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/d-balancer/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/distro$/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/thrash-old-clients/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/upgrade/3-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/with-work/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/with-work/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/with-work/mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/with-work/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/workunits/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/cephadm/workunits/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/rados/cephadm/workunits/task/test_iscsi_container/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/dashboard/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/dashboard/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/dashboard/debug/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/dashboard/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/mgr/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/mgr/clusters/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/mgr/debug/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/mgr/mgr_ttl_cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/mgr/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/monthrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/monthrash/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/monthrash/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/monthrash/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/monthrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/multimon/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/multimon/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/multimon/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/multimon/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/objectstore/backends/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/perf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/perf/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/perf/settings/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/perf/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/rest/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/rook/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/rook/smoke/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/rook/smoke/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton/all/thrash-rados/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-bluestore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-bluestore/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-bluestore/msgr-failures/.qa: 
recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-bluestore/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-nomsgr/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/singleton-nomsgr/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/standalone/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/standalone/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/0-size-min-size-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/2-recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/3-scrub-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/backoff/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/d-balancer/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/fast/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/msgr-failures/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-big/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/arch/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-isa/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/fast/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-overwrites/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/recovery-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/thrash-erasure-code-shec/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/4-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/5-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/upgrade/nautilus-x-singleton/8-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/valgrind-leaks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/d-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/d-thrash/default/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rados/verify/validater/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/cachepool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/features/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/pool/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/features/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/cli_v1/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/features/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/encryption/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/immutable-object-cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/immutable-object-cache/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/immutable-object-cache/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/iscsi/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/iscsi/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/iscsi/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/iscsi/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/iscsi/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/config/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/min-compat-client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/msgr-failures/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/librbd/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/maintenance/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/maintenance/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/maintenance/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/maintenance/qemu/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/maintenance/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/migration/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/migration/1-base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/migration/2-clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/migration/5-pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/clients/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/clients/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/policy/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/rbd-mirror/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/mirror-thrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/msgr-failures/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/nbd/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/1-base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/2-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/5-cache-mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/6-cache-size/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/home/7-workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/1-base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/2-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/5-cache-mode/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/6-cache-size/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/pwl-cache/tmpfs/7-workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/cache/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/features/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/pool/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/qemu/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/singleton/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/singleton/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/singleton-bluestore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/singleton-bluestore/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/singleton-bluestore/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/base/.qa: 
recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/thrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/valgrind/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/valgrind/base/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/valgrind/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/valgrind/validator/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rbd/valgrind/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/1-ceph-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/2-kms/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/3-rgw/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/crypt/4-tests/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/hadoop-s3a/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/hadoop-s3a/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/hadoop-s3a/hadoop/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multifs/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multifs/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multifs/frontend/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multifs/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multisite/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multisite/realms/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/multisite/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/singleton/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/singleton/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/singleton/frontend/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/sts/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/tempest/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/tempest/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/tempest/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/thrash/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/thrash/thrasher/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/thrash/workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/tools/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/proto/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/verify/validater/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/website/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/website/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/rgw/website/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/samba/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/samba/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/samba/install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/samba/mount/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/samba/workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/smoke/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/smoke/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/smoke/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/smoke/basic/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/smoke/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/bench/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/bench/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/bench/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/thrash/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/thrash/thrashers/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/stress/thrash/workloads/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/buildpackages/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/buildpackages/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/ceph/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/ceph/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/multi-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/multi-cluster/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/no-ceph/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/no-ceph/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/no-ceph/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/nop/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/nop/all/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/rgw/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/rgw/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/teuthology/workunits/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/tgt/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/tgt/basic/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/tgt/basic/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/tgt/basic/msgr-failures/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/tgt/basic/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/overrides/multimds/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/old_client/tasks/3-compat_client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/overrides/multimds/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/featureful_client/upgraded_client/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/tasks/0-from/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/tasks/1-volume/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/tasks/1-volume/1-ranks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/mds_upgrade_sequence/tasks/1-volume/2-allow_standby_replay/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/nofs/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/nofs/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/nofs/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/nofs/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/upgraded_client/from_nautilus/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/clusters/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/conf/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/cephfs/volumes/import-legacy/tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/1-ceph-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/2-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/3-upgrade-sequence/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/5-final-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/parallel/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/1-ceph-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/4-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/8-final-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/1-nautilus-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/3.1-objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x/stress-split-erasure-code/4-ec-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/4-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/5-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/nautilus-x-singleton/8-workload/.qa: recursive 
directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel/workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/1-ceph-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/2-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/3-upgrade-sequence/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/parallel-no-cephadm/5-final-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split/0-distro/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split/2-first-half-tasks/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/1-octopus-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/3.1-objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-erasure-code-no-cephadm/4-ec-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/.qa: recursive directory loop diff: 
/srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/1-ceph-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/1.1-pg-log-overrides/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/2-partial-upgrade/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/3-thrash/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/4-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/8-final-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade/octopus-x/stress-split-no-cephadm/objectstore/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/pacific-client-x/rbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/pacific-client-x/rbd/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/pacific-client-x/rbd/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/pacific-client-x/rbd/2-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/qa/suites/upgrade-clients/client-upgrade-pacific-quincy/pacific-client-x/rbd/supported/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/0-cluster/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/1-install/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/2-workload/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/qa/suites/upgrade-clients/client-upgrade-pacific-reef/pacific-client-x/rbd/supported/.qa: recursive directory loop diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/debian-jessie/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/debian-jessie/debian/ceph-base.ceph.init: No such 
file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/debian-jessie/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/debian-jessie/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-16.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-16.04/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-16.04/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-18.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-18.04/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-18.04/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-20.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/ceph-base.ceph.init: No such file or directory diff: /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/src/test/ubuntu-20.04/debian/ceph-common.rbdmap.init: No such file or directory diff: /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/src/test/ubuntu-20.04/debian/ceph-common.rbdmap.init: No such file or directory diff -Nru ceph-16.2.11+ds/CMakeLists.txt ceph-16.2.15+ds/CMakeLists.txt --- ceph-16.2.11+ds/CMakeLists.txt 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/CMakeLists.txt 2024-02-26 19:21:09.000000000 +0000 @@ -2,7 +2,7 @@ # remove cmake/modules/FindPython* once 3.12 is required project(ceph - VERSION 16.2.11 + VERSION 16.2.15 LANGUAGES CXX C ASM) foreach(policy diff -Nru ceph-16.2.11+ds/PendingReleaseNotes ceph-16.2.15+ds/PendingReleaseNotes --- ceph-16.2.11+ds/PendingReleaseNotes 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/PendingReleaseNotes 2024-02-26 19:21:09.000000000 +0000 @@ -32,6 +32,52 @@ in certain recovery scenarios, e.g., monitor database lost and rebuilt, and the restored file system is expected to have the same ID as before. +>=16.2.15 +---------- +* `ceph config dump --format ` output will display the localized + option names instead of its normalized version. For e.g., + "mgr/prometheus/x/server_port" will be displayed instead of + "mgr/prometheus/server_port". This matches the output of the non pretty-print + formatted version of the command. + +* CEPHFS: MDS evicts clients which are not advancing their request tids which causes + a large buildup of session metadata resulting in the MDS going read-only due to + the RADOS operation exceeding the size threshold. `mds_session_metadata_threshold` + config controls the maximum size that a (encoded) session metadata can grow. + +* RADOS: `get_pool_is_selfmanaged_snaps_mode` C++ API has been deprecated + due to being prone to false negative results. 
It's safer replacement is + `pool_is_in_selfmanaged_snaps_mode`. + +* RBD: When diffing against the beginning of time (`fromsnapname == NULL`) in + fast-diff mode (`whole_object == true` with `fast-diff` image feature enabled + and valid), diff-iterate is now guaranteed to execute locally if exclusive + lock is available. This brings a dramatic performance improvement for QEMU + live disk synchronization and backup use cases. + +>= 16.2.14 +---------- + +* CEPHFS: After recovering a Ceph File System post following the disaster recovery + procedure, the recovered files under `lost+found` directory can now be deleted. + +* `ceph mgr dump` command now displays the name of the mgr module that + registered a RADOS client in the `name` field added to elements of the + `active_clients` array. Previously, only the address of a module's RADOS + client was shown in the `active_clients` array. + +>=16.2.12 +--------- + +* CEPHFS: Rename the `mds_max_retries_on_remount_failure` option to + `client_max_retries_on_remount_failure` and move it from mds.yaml.in to + mds-client.yaml.in because this option was only used by MDS client from its + birth. + +* `ceph mgr dump` command now outputs `last_failure_osd_epoch` and + `active_clients` fields at the top level. Previously, these fields were + output under `always_on_modules` field. + >=16.2.11 -------- @@ -50,6 +96,69 @@ namespaces was added to RBD in Nautilus 14.2.0 and it has been possible to map and unmap images in namespaces using the `image-spec` syntax since then but the corresponding option available in most other commands was missing. +* RGW: Compression is now supported for objects uploaded with Server-Side Encryption. + When both are enabled, compression is applied before encryption. +* RGW: the "pubsub" functionality for storing bucket notifications inside Ceph + is removed. Together with it, the "pubsub" zone should not be used anymore. + The REST operations, as well as radosgw-admin commands for manipulating + subscriptions, as well as fetching and acking the notifications are removed + as well. + In case that the endpoint to which the notifications are sent maybe down or + disconnected, it is recommended to use persistent notifications to guarantee + the delivery of the notifications. In case the system that consumes the + notifications needs to pull them (instead of the notifications be pushed + to it), an external message bus (e.g. rabbitmq, Kafka) should be used for + that purpose. +* RGW: The serialized format of notification and topics has changed, so that + new/updated topics will be unreadable by old RGWs. We recommend completing + the RGW upgrades before creating or modifying any notification topics. +* RBD: Trailing newline in passphrase files (`` argument in + `rbd encryption format` command and `--encryption-passphrase-file` option + in other commands) is no longer stripped. +* RBD: Support for layered client-side encryption is added. Cloned images + can now be encrypted each with its own encryption format and passphrase, + potentially different from that of the parent image. The efficient + copy-on-write semantics intrinsic to unformatted (regular) cloned images + are retained. +* CEPHFS: Rename the `mds_max_retries_on_remount_failure` option to + `client_max_retries_on_remount_failure` and move it from mds.yaml.in to + mds-client.yaml.in because this option was only used by MDS client from its + birth. +* The `perf dump` and `perf schema` commands are deprecated in favor of new + `counter dump` and `counter schema` commands. 
These new commands add support + for labeled perf counters and also emit existing unlabeled perf counters. Some + unlabeled perf counters became labeled in this release, with more to follow in + future releases; such converted perf counters are no longer emitted by the + `perf dump` and `perf schema` commands. +* `ceph mgr dump` command now outputs `last_failure_osd_epoch` and + `active_clients` fields at the top level. Previously, these fields were + output under `always_on_modules` field. +* RBD: All rbd-mirror daemon perf counters became labeled and as such are now + emitted only by the new `counter dump` and `counter schema` commands. As part + of the conversion, many also got renamed to better disambiguate journal-based + and snapshot-based mirroring. +* RBD: list-watchers C++ API (`Image::list_watchers`) now clears the passed + `std::list` before potentially appending to it, aligning with the semantics + of the corresponding C API (`rbd_watchers_list`). +* Telemetry: Users who are opted-in to telemetry can also opt-in to + participating in a leaderboard in the telemetry public + dashboards (https://telemetry-public.ceph.com/). Users can now also add a + description of the cluster to publicly appear in the leaderboard. + For more details, see: + https://docs.ceph.com/en/latest/mgr/telemetry/#leaderboard + See a sample report with `ceph telemetry preview`. + Opt-in to telemetry with `ceph telemetry on`. + Opt-in to the leaderboard with + `ceph config set mgr mgr/telemetry/leaderboard true`. + Add leaderboard description with: + `ceph config set mgr mgr/telemetry/leaderboard_description ‘Cluster description’`. +* CEPHFS: After recovering a Ceph File System post following the disaster recovery + procedure, the recovered files under `lost+found` directory can now be deleted. +* core: cache-tiering is now deprecated. +* mgr/snap_schedule: The snap-schedule mgr module now retains one less snapshot + than the number mentioned against the config tunable `mds_max_snaps_per_dir` + so that a new snapshot can be created and retained during the next schedule + run. 
>=16.2.8 -------- diff -Nru ceph-16.2.11+ds/admin/doc-requirements.txt ceph-16.2.15+ds/admin/doc-requirements.txt --- ceph-16.2.11+ds/admin/doc-requirements.txt 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/admin/doc-requirements.txt 2024-02-26 19:21:09.000000000 +0000 @@ -1,4 +1,4 @@ -Sphinx == 4.4.0 +Sphinx == 5.0.2 git+https://github.com/ceph/sphinx-ditaa.git@py3#egg=sphinx-ditaa breathe >= 4.20.0 Jinja2 diff -Nru ceph-16.2.11+ds/ceph.spec ceph-16.2.15+ds/ceph.spec --- ceph-16.2.11+ds/ceph.spec 2023-01-24 20:44:21.000000000 +0000 +++ ceph-16.2.15+ds/ceph.spec 2024-02-26 19:22:07.000000000 +0000 @@ -135,7 +135,7 @@ # main package definition ################################################################################# Name: ceph -Version: 16.2.11 +Version: 16.2.15 Release: 0%{?dist} %if 0%{?fedora} || 0%{?rhel} Epoch: 2 @@ -151,7 +151,7 @@ Group: System/Filesystems %endif URL: http://ceph.com/ -Source0: %{?_remote_tarball_prefix}ceph-16.2.11.tar.bz2 +Source0: %{?_remote_tarball_prefix}ceph-16.2.15.tar.bz2 %if 0%{?suse_version} # _insert_obs_source_lines_here ExclusiveArch: x86_64 aarch64 ppc64le s390x @@ -1208,7 +1208,7 @@ # common ################################################################################# %prep -%autosetup -p1 -n ceph-16.2.11 +%autosetup -p1 -n ceph-16.2.15 %build # Disable lto on systems that do not support symver attribute @@ -1398,7 +1398,7 @@ chmod 0600 %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys # firewall templates and /sbin/mount.ceph symlink -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 mkdir -p %{buildroot}/sbin ln -sf %{_sbindir}/mount.ceph %{buildroot}/sbin/mount.ceph %endif @@ -1577,7 +1577,7 @@ %{_bindir}/rbd-replay-many %{_bindir}/rbdmap %{_sbindir}/mount.ceph -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 /sbin/mount.ceph %endif %if %{with lttng} diff -Nru ceph-16.2.11+ds/ceph.spec.in ceph-16.2.15+ds/ceph.spec.in --- ceph-16.2.11+ds/ceph.spec.in 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/ceph.spec.in 2024-02-26 19:21:09.000000000 +0000 @@ -1398,7 +1398,7 @@ chmod 0600 %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys # firewall templates and /sbin/mount.ceph symlink -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 mkdir -p %{buildroot}/sbin ln -sf %{_sbindir}/mount.ceph %{buildroot}/sbin/mount.ceph %endif @@ -1577,7 +1577,7 @@ %{_bindir}/rbd-replay-many %{_bindir}/rbdmap %{_sbindir}/mount.ceph -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 /sbin/mount.ceph %endif %if %{with lttng} diff -Nru ceph-16.2.11+ds/cmake/modules/BuildRocksDB.cmake ceph-16.2.15+ds/cmake/modules/BuildRocksDB.cmake --- ceph-16.2.11+ds/cmake/modules/BuildRocksDB.cmake 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/cmake/modules/BuildRocksDB.cmake 2024-02-26 19:21:09.000000000 +0000 @@ -56,12 +56,13 @@ endif() include(CheckCXXCompilerFlag) check_cxx_compiler_flag("-Wno-deprecated-copy" HAS_WARNING_DEPRECATED_COPY) + set(rocksdb_CXX_FLAGS "${CMAKE_CXX_FLAGS}") if(HAS_WARNING_DEPRECATED_COPY) - set(rocksdb_CXX_FLAGS -Wno-deprecated-copy) + string(APPEND rocksdb_CXX_FLAGS " -Wno-deprecated-copy") endif() check_cxx_compiler_flag("-Wno-pessimizing-move" HAS_WARNING_PESSIMIZING_MOVE) if(HAS_WARNING_PESSIMIZING_MOVE) - set(rocksdb_CXX_FLAGS "${rocksdb_CXX_FLAGS} -Wno-pessimizing-move") + string(APPEND rocksdb_CXX_FLAGS " -Wno-pessimizing-move") 
endif() if(rocksdb_CXX_FLAGS) list(APPEND rocksdb_CMAKE_ARGS -DCMAKE_CXX_FLAGS='${rocksdb_CXX_FLAGS}') diff -Nru ceph-16.2.11+ds/debian/changelog ceph-16.2.15+ds/debian/changelog --- ceph-16.2.11+ds/debian/changelog 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/changelog 2024-12-04 05:46:17.000000000 +0000 @@ -1,3 +1,18 @@ +ceph (16.2.15+ds-0+deb12u1) bookworm-security; urgency=medium + + * Adding myself to uploaders. + * Updating watch file for ceph 16. + * Merging upstream version 16.2.15: + - 16.2.12: Fix rgw bucket validation against POST policies + [CVE-2023-43040] + * Refreshing 32bit-fixes.patch. + * Removing bug1917414.patch, included upstream. + * Removing patches for CVE-2022-3650, included upstream. + * Cherry-picking patch from upstream to fix authentication bypass in rgw + (Closes: #1088993) [CVE-2024-48916]. + + -- Daniel Baumann Wed, 04 Dec 2024 06:46:17 +0100 + ceph (16.2.11+ds-2) unstable; urgency=medium * Add missing python3-distutils runtime depends in ceph-common. diff -Nru ceph-16.2.11+ds/debian/control ceph-16.2.15+ds/debian/control --- ceph-16.2.11+ds/debian/control 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/control 2024-12-04 05:46:17.000000000 +0000 @@ -7,6 +7,7 @@ Gaudenz Steinlin , Bernd Zeimetz , Thomas Goirand , + Daniel Baumann , Build-Depends: cmake, cython3, diff -Nru ceph-16.2.11+ds/debian/patches/32bit-fixes.patch ceph-16.2.15+ds/debian/patches/32bit-fixes.patch --- ceph-16.2.11+ds/debian/patches/32bit-fixes.patch 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/patches/32bit-fixes.patch 2024-12-04 05:46:17.000000000 +0000 @@ -34,9 +34,9 @@ - root_obj["syncing_snapshot_timestamp"] = remote_snap_info->timestamp.sec(); + root_obj["syncing_snapshot_timestamp"] = static_cast( + remote_snap_info->timestamp.sec()); - root_obj["syncing_percent"] = static_cast( - 100 * m_local_mirror_snap_ns.last_copied_object_number / - static_cast(std::max(1U, m_local_object_count))); + + if (m_local_object_count > 0) { + root_obj["syncing_percent"] = Index: ceph/src/common/buffer.cc =================================================================== --- ceph.orig/src/common/buffer.cc @@ -140,15 +140,15 @@ =================================================================== --- ceph.orig/src/librbd/object_map/DiffRequest.cc +++ ceph/src/librbd/object_map/DiffRequest.cc -@@ -175,7 +175,7 @@ void DiffRequest::handle_load_object_ - m_object_map.resize(num_objs); +@@ -94,7 +94,7 @@ int DiffRequest::process_object_map(c } + uint64_t start_object_no, end_object_no; - size_t prev_object_diff_state_size = m_object_diff_state->size(); + uint64_t prev_object_diff_state_size = m_object_diff_state->size(); - if (prev_object_diff_state_size < num_objs) { - // the diff state should be the largest of all snapshots in the set - m_object_diff_state->resize(num_objs); + if (is_diff_iterate()) { + start_object_no = std::min(m_start_object_no, num_objs); + end_object_no = std::min(m_end_object_no, num_objs); Index: ceph/src/SimpleRADOSStriper.cc =================================================================== --- ceph.orig/src/SimpleRADOSStriper.cc diff -Nru ceph-16.2.11+ds/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch ceph-16.2.15+ds/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch --- ceph-16.2.11+ds/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch 2023-02-16 10:54:41.000000000 +0000 +++ 
ceph-16.2.15+ds/debian/patches/CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,65 +0,0 @@ -Description: CVE-2022-3650: ceph-crash: drop privleges to run as "ceph" user, rather than root - If privileges cannot be dropped, log an error and exit. This commit - also catches and logs exceptions when scraping the crash path, without - which ceph-crash would just exit if it encountered an error. -Author: Tim Serong -Date: Wed, 2 Nov 2022 14:27:47 +1100 -Bug: https://tracker.ceph.com/issues/57967 -Signed-off-by: Tim Serong -Origin: upstream, https://github.com/ceph/ceph/commit/130c9626598bc3a75942161e6cce7c664c447382 -Bug-Debian: https://bugs.debian.org/1024932 -Last-Update: 2022-11-28 - -Index: ceph/src/ceph-crash.in -=================================================================== ---- ceph.orig/src/ceph-crash.in -+++ ceph/src/ceph-crash.in -@@ -3,8 +3,10 @@ - # vim: ts=4 sw=4 smarttab expandtab - - import argparse -+import grp - import logging - import os -+import pwd - import signal - import socket - import subprocess -@@ -83,8 +85,25 @@ def handler(signum): - print('*** Interrupted with signal %d ***' % signum) - sys.exit(0) - -+def drop_privs(): -+ if os.getuid() == 0: -+ try: -+ ceph_uid = pwd.getpwnam("ceph").pw_uid -+ ceph_gid = grp.getgrnam("ceph").gr_gid -+ os.setgroups([]) -+ os.setgid(ceph_gid) -+ os.setuid(ceph_uid) -+ except Exception as e: -+ log.error(f"Unable to drop privileges: {e}") -+ sys.exit(1) -+ -+ - def main(): - global auth_names -+ -+ # run as unprivileged ceph user -+ drop_privs() -+ - # exit code 0 on SIGINT, SIGTERM - signal.signal(signal.SIGINT, handler) - signal.signal(signal.SIGTERM, handler) -@@ -103,7 +122,10 @@ def main(): - - log.info("monitoring path %s, delay %ds" % (args.path, args.delay * 60.0)) - while True: -- scrape_path(args.path) -+ try: -+ scrape_path(args.path) -+ except Exception as e: -+ log.error(f"Error scraping {args.path}: {e}") - if args.delay == 0: - sys.exit(0) - time.sleep(args.delay * 60) diff -Nru ceph-16.2.11+ds/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch ceph-16.2.15+ds/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch --- ceph-16.2.11+ds/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/patches/CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,26 +0,0 @@ -Description: CVE-2022-3650: ceph-crash: fix stderr handling - Popen.communicate() returns a tuple (stdout, stderr), and stderr - will be of type bytes, hence the need to decode it before checking - if it's an empty string or not. 
-Author: Tim Serong -Date: Wed, 2 Nov 2022 14:23:20 +1100 -Bug: a77b47eeeb5770eeefcf4619ab2105ee7a6a003e -Signed-off-by: Tim Serong -Bug-Debian: https://bugs.debian.org/1024932 -Origin: upstream, https://github.com/ceph/ceph/commit/45915540559126a652f8d9d105723584cfc63439 -Last-Update: 2022-11-28 - -diff --git a/src/ceph-crash.in b/src/ceph-crash.in -index 0fffd59a96df5..e2a7be59da701 100755 ---- a/src/ceph-crash.in -+++ b/src/ceph-crash.in -@@ -50,7 +50,8 @@ def post_crash(path): - stderr=subprocess.PIPE, - ) - f = open(os.path.join(path, 'meta'), 'rb') -- stderr = pr.communicate(input=f.read()) -+ (_, stderr) = pr.communicate(input=f.read()) -+ stderr = stderr.decode() - rc = pr.wait() - f.close() - if rc != 0 or stderr != "": diff -Nru ceph-16.2.11+ds/debian/patches/CVE-2024-48916.patch ceph-16.2.15+ds/debian/patches/CVE-2024-48916.patch --- ceph-16.2.11+ds/debian/patches/CVE-2024-48916.patch 1970-01-01 00:00:00.000000000 +0000 +++ ceph-16.2.15+ds/debian/patches/CVE-2024-48916.patch 2024-12-04 05:46:17.000000000 +0000 @@ -0,0 +1,28 @@ +From 919da3696668a07c6810dfa39301950c81c2eba4 Mon Sep 17 00:00:00 2001 +From: Pritha Srivastava +Date: Tue, 5 Nov 2024 12:03:00 +0530 +Subject: [PATCH] [CVE-2024-48916] rgw/sts: fix to disallow unsupported JWT + algorithms while authenticating AssumeRoleWithWebIdentity using JWT obtained + from an external IDP. + +fixes: https://tracker.ceph.com/issues/68836 + +Signed-off-by: Pritha Srivastava +--- + src/rgw/rgw_rest_sts.cc | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/src/rgw/rgw_rest_sts.cc b/src/rgw/rgw_rest_sts.cc +index f2bd9429a5538..1101da0af3cca 100644 +--- a/src/rgw/rgw_rest_sts.cc ++++ b/src/rgw/rgw_rest_sts.cc +@@ -436,6 +436,9 @@ WebTokenEngine::validate_signature(const DoutPrefixProvider* dpp, const jwt::dec + .allow_algorithm(jwt::algorithm::ps512{cert}); + + verifier.verify(decoded); ++ } else { ++ ldpp_dout(dpp, 0) << "Unsupported algorithm: " << algorithm << dendl; ++ throw -EINVAL; + } + } catch (std::runtime_error& e) { + ldpp_dout(dpp, 0) << "Signature validation failed: " << e.what() << dendl; diff -Nru ceph-16.2.11+ds/debian/patches/bug1917414.patch ceph-16.2.15+ds/debian/patches/bug1917414.patch --- ceph-16.2.11+ds/debian/patches/bug1917414.patch 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/patches/bug1917414.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,143 +0,0 @@ -From db463aa139aa0f3eb996062bd7c65f0d10a7932b Mon Sep 17 00:00:00 2001 -From: luo rixin -Date: Fri, 8 Jan 2021 16:16:02 +0800 -Subject: [PATCH] src/isa-l/erasure_code: Fix text relocation on aarch64 - -Here is the bug report on ceph. 
https://tracker.ceph.com/issues/48681 - -Signed-off-by: luo rixin ---- - src/isa-l/erasure_code/aarch64/gf_2vect_mad_neon.S | 5 +++-- - src/isa-l/erasure_code/aarch64/gf_3vect_mad_neon.S | 5 +++-- - src/isa-l/erasure_code/aarch64/gf_4vect_mad_neon.S | 5 +++-- - src/isa-l/erasure_code/aarch64/gf_5vect_mad_neon.S | 5 +++-- - src/isa-l/erasure_code/aarch64/gf_6vect_mad_neon.S | 5 +++-- - src/isa-l/erasure_code/aarch64/gf_vect_mad_neon.S | 5 +++-- - 6 files changed, 18 insertions(+), 12 deletions(-) - ---- a/src/isa-l/erasure_code/aarch64/gf_2vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_2vect_mad_neon.S -@@ -360,7 +360,8 @@ gf_2vect_mad_neon: - sub x_dest1, x_dest1, x_tmp - sub x_dest2, x_dest2, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -394,7 +395,7 @@ gf_2vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 ---- a/src/isa-l/erasure_code/aarch64/gf_3vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_3vect_mad_neon.S -@@ -332,7 +332,8 @@ gf_3vect_mad_neon: - sub x_dest2, x_dest2, x_tmp - sub x_dest3, x_dest3, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -374,7 +375,7 @@ gf_3vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 ---- a/src/isa-l/erasure_code/aarch64/gf_4vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_4vect_mad_neon.S -@@ -397,7 +397,8 @@ gf_4vect_mad_neon: - sub x_dest3, x_dest3, x_tmp - sub x_dest4, x_dest4, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -448,7 +449,7 @@ gf_4vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 ---- a/src/isa-l/erasure_code/aarch64/gf_5vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_5vect_mad_neon.S -@@ -463,7 +463,8 @@ gf_5vect_mad_neon: - sub x_dest4, x_dest4, x_tmp - sub x_dest5, x_dest5, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -527,7 +528,7 @@ gf_5vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 ---- a/src/isa-l/erasure_code/aarch64/gf_6vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_6vect_mad_neon.S -@@ -526,7 +526,8 @@ gf_6vect_mad_neon: - sub x_dest5, x_dest5, x_tmp - sub x_dest6, x_dest6, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -602,7 +603,7 @@ gf_6vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 ---- a/src/isa-l/erasure_code/aarch64/gf_vect_mad_neon.S -+++ b/src/isa-l/erasure_code/aarch64/gf_vect_mad_neon.S -@@ -281,7 +281,8 @@ gf_vect_mad_neon: - mov x_src, x_src_end - sub x_dest1, x_dest1, x_tmp - -- ldr x_const, =const_tbl -+ adrp x_const, const_tbl -+ add x_const, x_const, :lo12:const_tbl - sub x_const, 
x_const, x_tmp - ldr q_tmp, [x_const, #16] - -@@ -307,7 +308,7 @@ gf_vect_mad_neon: - mov w_ret, #1 - ret - --.section .data -+.section .rodata - .balign 8 - const_tbl: - .dword 0x0000000000000000, 0x0000000000000000 diff -Nru ceph-16.2.11+ds/debian/patches/series ceph-16.2.15+ds/debian/patches/series --- ceph-16.2.11+ds/debian/patches/series 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/patches/series 2024-12-04 05:46:17.000000000 +0000 @@ -15,11 +15,9 @@ fix-ceph-osd-systemd-target.patch compile-ppc.c-on-all-powerpc-machines.patch bug1914584.patch -bug1917414.patch cmake-test-for-16-bytes-atomic-support-on-mips-also.patch only-yied-under-armv7-and-above.patch Fix-build-with-fmt-8-9.patch fix-CheckCxxAtomic-riscv64.patch -CVE-2022-3650_1_ceph-crash_drop_privleges_to_run_as_ceph_user_rather_than_root.patch -CVE-2022-3650_2_ceph-crash_fix_stderr_handling.patch CVE-2022-3854_1_rgw_Guard_against_malformed_bucket_URLs.patch +CVE-2024-48916.patch diff -Nru ceph-16.2.11+ds/debian/watch ceph-16.2.15+ds/debian/watch --- ceph-16.2.11+ds/debian/watch 2023-02-16 10:54:41.000000000 +0000 +++ ceph-16.2.15+ds/debian/watch 2024-12-04 05:46:17.000000000 +0000 @@ -3,4 +3,4 @@ repack,compression=xz,\ uversionmangle=s/-/~/,\ dversionmangle=s/\+(debian|dfsg|ds|deb)(\.\d+)?$// \ - http://download.ceph.com/tarballs/ceph-(\d.*)\.tar\.gz + https://download.ceph.com/tarballs/ceph-(16.\d.*)\.tar\.gz diff -Nru ceph-16.2.11+ds/doc/architecture.rst ceph-16.2.15+ds/doc/architecture.rst --- ceph-16.2.11+ds/doc/architecture.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/architecture.rst 2024-02-26 19:21:09.000000000 +0000 @@ -199,6 +199,8 @@ .. index:: architecture; high availability authentication +.. _arch_high_availability_authentication: + High Availability Authentication ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff -Nru ceph-16.2.11+ds/doc/ceph-volume/lvm/activate.rst ceph-16.2.15+ds/doc/ceph-volume/lvm/activate.rst --- ceph-16.2.11+ds/doc/ceph-volume/lvm/activate.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/ceph-volume/lvm/activate.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,7 +2,7 @@ ``activate`` ============ - + Once :ref:`ceph-volume-lvm-prepare` is completed, and all the various steps that entails are done, the volume is ready to get "activated". @@ -13,7 +13,7 @@ .. note:: The execution of this call is fully idempotent, and there is no side-effects when running multiple times -For OSDs deployed by cephadm, please refer to :ref:cephadm-osd-activate: +For OSDs deployed by cephadm, please refer to :ref:`cephadm-osd-activate` instead. New OSDs @@ -29,7 +29,7 @@ Activating all OSDs ------------------- -.. note:: For OSDs deployed by cephadm, please refer to :ref:cephadm-osd-activate: +.. note:: For OSDs deployed by cephadm, please refer to :ref:`cephadm-osd-activate` instead. It is possible to activate all existing OSDs at once by using the ``--all`` diff -Nru ceph-16.2.11+ds/doc/ceph-volume/lvm/encryption.rst ceph-16.2.15+ds/doc/ceph-volume/lvm/encryption.rst --- ceph-16.2.11+ds/doc/ceph-volume/lvm/encryption.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/ceph-volume/lvm/encryption.rst 2024-02-26 19:21:09.000000000 +0000 @@ -4,45 +4,41 @@ ========== Logical volumes can be encrypted using ``dmcrypt`` by specifying the -``--dmcrypt`` flag when creating OSDs. Encryption can be done in different ways, -specially with LVM. 
``ceph-volume`` is somewhat opinionated with the way it -sets up encryption with logical volumes so that the process is consistent and +``--dmcrypt`` flag when creating OSDs. When using LVM, logical volumes can be +encrypted in different ways. ``ceph-volume`` does not offer as many options as +LVM does, but it encrypts logical volumes in a way that is consistent and robust. -In this case, ``ceph-volume lvm`` follows these constraints: +In this case, ``ceph-volume lvm`` follows this constraint: -* only LUKS (version 1) is used -* Logical Volumes are encrypted, while their underlying PVs (physical volumes) - aren't -* Non-LVM devices like partitions are also encrypted with the same OSD key +* Non-LVM devices (such as partitions) are encrypted with the same OSD key. LUKS ---- -There are currently two versions of LUKS, 1 and 2. Version 2 is a bit easier -to implement but not widely available in all distros Ceph supports. LUKS 1 is -not going to be deprecated in favor of LUKS 2, so in order to have as wide -support as possible, ``ceph-volume`` uses LUKS version 1. +There are currently two versions of LUKS, 1 and 2. Version 2 is a bit easier to +implement but not widely available in all Linux distributions supported by +Ceph. -.. note:: Version 1 of LUKS is just referenced as "LUKS" whereas version 2 is - referred to as LUKS2 +.. note:: Version 1 of LUKS is referred to in this documentation as "LUKS". + Version 2 is of LUKS is referred to in this documentation as "LUKS2". LUKS on LVM ----------- -Encryption is done on top of existing logical volumes (unlike encrypting the -physical device). Any single logical volume can be encrypted while other -volumes can remain unencrypted. This method also allows for flexible logical +Encryption is done on top of existing logical volumes (this is not the same as +encrypting the physical device). Any single logical volume can be encrypted, +leaving other volumes unencrypted. This method also allows for flexible logical volume setups, since encryption will happen once the LV is created. Workflow -------- -When setting up the OSD, a secret key will be created, that will be passed -along to the monitor in JSON format as ``stdin`` to prevent the key from being +When setting up the OSD, a secret key is created. That secret key is passed +to the monitor in JSON format as ``stdin`` to prevent the key from being captured in the logs. -The JSON payload looks something like:: +The JSON payload looks something like this:: { "cephx_secret": CEPHX_SECRET, @@ -51,36 +47,38 @@ } The naming convention for the keys is **strict**, and they are named like that -for the hardcoded (legacy) names ceph-disk used. +for the hardcoded (legacy) names used by ceph-disk. * ``cephx_secret`` : The cephx key used to authenticate * ``dmcrypt_key`` : The secret (or private) key to unlock encrypted devices * ``cephx_lockbox_secret`` : The authentication key used to retrieve the ``dmcrypt_key``. It is named *lockbox* because ceph-disk used to have an - unencrypted partition named after it, used to store public keys and other - OSD metadata. + unencrypted partition named after it, which was used to store public keys and + other OSD metadata. The naming convention is strict because Monitors supported the naming -convention by ceph-disk, which used these key names. In order to keep -compatibility and prevent ceph-disk from breaking, ceph-volume will use the same -naming convention *although they don't make sense for the new encryption +convention of ceph-disk, which used these key names. 
In order to maintain +compatibility and prevent ceph-disk from breaking, ceph-volume uses the same +naming convention *although it does not make sense for the new encryption workflow*. -After the common steps of setting up the OSD during the prepare stage, either -with :term:`filestore` or :term:`bluestore`, the logical volume is left ready -to be activated, regardless of the state of the device (encrypted or decrypted). +After the common steps of setting up the OSD during the "prepare stage" (either +with :term:`filestore` or :term:`bluestore`), the logical volume is left ready +to be activated, regardless of the state of the device (encrypted or +decrypted). -At activation time, the logical volume will get decrypted and the OSD started -once the process completes correctly. +At the time of its activation, the logical volume is decrypted. The OSD starts +after the process completes correctly. -Summary of the encryption workflow for creating a new OSD: +Summary of the encryption workflow for creating a new OSD +---------------------------------------------------------- -#. OSD is created, both lockbox and dmcrypt keys are created, and sent along - with JSON to the monitors, indicating an encrypted OSD. +#. OSD is created. Both lockbox and dmcrypt keys are created and sent to the + monitors in JSON format, indicating an encrypted OSD. #. All complementary devices (like journal, db, or wal) get created and encrypted with the same OSD key. Key is stored in the LVM metadata of the - OSD + OSD. #. Activation continues by ensuring devices are mounted, retrieving the dmcrypt - secret key from the monitors and decrypting before the OSD gets started. + secret key from the monitors, and decrypting before the OSD gets started. diff -Nru ceph-16.2.11+ds/doc/cephadm/compatibility.rst ceph-16.2.15+ds/doc/cephadm/compatibility.rst --- ceph-16.2.11+ds/doc/cephadm/compatibility.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/compatibility.rst 2024-02-26 19:21:09.000000000 +0000 @@ -11,20 +11,28 @@ Podman and Ceph have different end-of-life strategies. This means that care must be taken in finding a version of Podman that is compatible with Ceph. 
-These versions are expected to work: +This table shows which version pairs are expected to work or not work together: -+-----------+---------------------------------------+ -| Ceph | Podman | -+-----------+-------+-------+-------+-------+-------+ -| | 1.9 | 2.0 | 2.1 | 2.2 | 3.0 | -+===========+=======+=======+=======+=======+=======+ -| <= 15.2.5 | True | False | False | False | False | -+-----------+-------+-------+-------+-------+-------+ -| >= 15.2.6 | True | True | True | False | False | -+-----------+-------+-------+-------+-------+-------+ -| >= 16.2.1 | False | True | True | False | True | -+-----------+-------+-------+-------+-------+-------+ ++-----------+-----------------------------------------------+ +| Ceph | Podman | ++-----------+-------+-------+-------+-------+-------+-------+ +| | 1.9 | 2.0 | 2.1 | 2.2 | 3.0 | > 3.0 | ++===========+=======+=======+=======+=======+=======+=======+ +| <= 15.2.5 | True | False | False | False | False | False | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 15.2.6 | True | True | True | False | False | False | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 16.2.1 | False | True | True | False | True | True | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 17.2.0 | False | True | True | False | True | True | ++-----------+-------+-------+-------+-------+-------+-------+ + +.. note:: + + While not all podman versions have been actively tested against + all Ceph versions, there are no known issues with using podman + version 3.0 or greater with Ceph Quincy and later releases. .. warning:: @@ -41,17 +49,17 @@ Stability --------- -Cephadm is under development. Some functionality is incomplete. Be aware -that some of the components of Ceph may not work perfectly with cephadm. -These include: - -- RGW +Cephadm is relatively stable but new functionality is still being +added and bugs are occasionally discovered. If issues are found, please +open a tracker issue under the Orchestrator component (https://tracker.ceph.com/projects/orchestrator/issues) Cephadm support remains under development for the following features: -- Ingress -- Cephadm exporter daemon -- cephfs-mirror +- ceph-exporter deployment +- stretch mode integration +- monitoring stack (moving towards prometheus service discover and providing TLS) +- RGW multisite deployment support (requires lots of manual steps currently) +- cephadm agent If a cephadm command fails or a service stops running properly, see :ref:`cephadm-pause` for instructions on how to pause the Ceph cluster's diff -Nru ceph-16.2.11+ds/doc/cephadm/host-management.rst ceph-16.2.15+ds/doc/cephadm/host-management.rst --- ceph-16.2.11+ds/doc/cephadm/host-management.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/host-management.rst 2024-02-26 19:21:09.000000000 +0000 @@ -245,9 +245,10 @@ hostname: node-02 addr: 192.168.0.12 -This can be combined with service specifications (below) to create a cluster spec -file to deploy a whole cluster in one command. see ``cephadm bootstrap --apply-spec`` -also to do this during bootstrap. Cluster SSH Keys must be copied to hosts prior to adding them. +This can be combined with :ref:`service specifications` +to create a cluster spec file to deploy a whole cluster in one command. see +``cephadm bootstrap --apply-spec`` also to do this during bootstrap. Cluster +SSH Keys must be copied to hosts prior to adding them. 
Setting the initial CRUSH location of host ========================================== diff -Nru ceph-16.2.11+ds/doc/cephadm/install.rst ceph-16.2.15+ds/doc/cephadm/install.rst --- ceph-16.2.11+ds/doc/cephadm/install.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/install.rst 2024-02-26 19:21:09.000000000 +0000 @@ -48,8 +48,8 @@ ----------------------- * Use ``curl`` to fetch the most recent version of the - standalone script. - + standalone script. + .. prompt:: bash # :substitutions: @@ -148,7 +148,7 @@ Run the ``ceph bootstrap`` command: -.. prompt:: bash # +.. prompt:: bash # cephadm bootstrap --mon-ip ** @@ -167,11 +167,11 @@ with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and ``/etc/ceph/ceph.client.admin.keyring``. -Further information about cephadm bootstrap +Further information about cephadm bootstrap ------------------------------------------- The default bootstrap behavior will work for most users. But if you'd like -immediately to know more about ``cephadm bootstrap``, read the list below. +immediately to know more about ``cephadm bootstrap``, read the list below. Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s available options. @@ -210,20 +210,20 @@ EOF $ ./cephadm bootstrap --config initial-ceph.conf ... -* The ``--ssh-user **`` option makes it possible to choose which SSH +* The ``--ssh-user **`` option makes it possible to choose which SSH user cephadm will use to connect to hosts. The associated SSH key will be - added to ``/home/**/.ssh/authorized_keys``. The user that you + added to ``/home/**/.ssh/authorized_keys``. The user that you designate with this option must have passwordless sudo access. * If you are using a container on an authenticated registry that requires login, you may add the argument: - * ``--registry-json `` + * ``--registry-json `` example contents of JSON file with login info:: {"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"} - + Cephadm will attempt to log in to this registry so it can pull your container and then store the login info in its config database. Other hosts added to the cluster will then also be able to make use of the authenticated registry. @@ -272,7 +272,7 @@ Confirm that the ``ceph`` command is accessible with: .. prompt:: bash # - + ceph -v @@ -292,7 +292,7 @@ are maintained in ``/etc/ceph`` on all hosts with the ``_admin`` label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the ``_admin`` label so that the Ceph CLI (e.g., via ``cephadm shell``) is easily -accessible on multiple hosts. To add the ``_admin`` label to additional host(s), +accessible on multiple hosts. To add the ``_admin`` label to additional host(s): .. prompt:: bash # @@ -310,8 +310,8 @@ Adding Storage ============== -To add storage to the cluster, either tell Ceph to consume any -available and unused device: +To add storage to the cluster, you can tell Ceph to consume any +available and unused device(s): .. prompt:: bash # @@ -406,7 +406,7 @@ insecure registry. #. Push your container image to your local registry. Here are some acceptable - kinds of container images: + kinds of container images: * Ceph container image. See :ref:`containers`. 
* Prometheus container image diff -Nru ceph-16.2.11+ds/doc/cephadm/operations.rst ceph-16.2.15+ds/doc/cephadm/operations.rst --- ceph-16.2.11+ds/doc/cephadm/operations.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/operations.rst 2024-02-26 19:21:09.000000000 +0000 @@ -43,17 +43,17 @@ Ceph daemon logs ================ -Logging to journald -------------------- +Logging to stdout +----------------- -Ceph daemons traditionally write logs to ``/var/log/ceph``. Ceph daemons log to -journald by default and Ceph logs are captured by the container runtime -environment. They are accessible via ``journalctl``. +Ceph daemons traditionally write logs to ``/var/log/ceph``. Ceph +daemons log to stderr by default and Ceph logs are captured by the +container runtime environment. By default, most systems send these +logs to journald, which means that they are accessible via +``journalctl``. -.. note:: Prior to Quincy, ceph daemons logged to stderr. - -Example of logging to journald -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Example of logging to stdout +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For example, to view the logs for the daemon ``mon.foo`` for a cluster with ID ``5c5a50ae-272a-455d-99e9-32c6a013e694``, the command would be @@ -69,11 +69,11 @@ ---------------- You can also configure Ceph daemons to log to files instead of to -journald if you prefer logs to appear in files (as they did in earlier, +stderr if you prefer logs to appear in files (as they did in earlier, pre-cephadm, pre-Octopus versions of Ceph). When Ceph logs to files, the logs appear in ``/var/log/ceph/``. If you choose to -configure Ceph to log to files instead of to journald, remember to -configure Ceph so that it will not log to journald (the commands for +configure Ceph to log to files instead of to stderr, remember to +configure Ceph so that it will not log to stderr (the commands for this are covered below). Enabling logging to files @@ -86,10 +86,10 @@ ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true -Disabling logging to journald -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Disabling logging to stderr +~~~~~~~~~~~~~~~~~~~~~~~~~~~ -If you choose to log to files, we recommend disabling logging to journald or else +If you choose to log to files, we recommend disabling logging to stderr or else everything will be logged twice. Run the following commands to disable logging to stderr: @@ -97,11 +97,6 @@ ceph config set global log_to_stderr false ceph config set global mon_cluster_log_to_stderr false - ceph config set global log_to_journald false - ceph config set global mon_cluster_log_to_journald false - -.. note:: You can change the default by passing --log-to-file during - bootstrapping a new cluster. Modifying the log retention schedule ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff -Nru ceph-16.2.11+ds/doc/cephadm/services/index.rst ceph-16.2.15+ds/doc/cephadm/services/index.rst --- ceph-16.2.11+ds/doc/cephadm/services/index.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/services/index.rst 2024-02-26 19:21:09.000000000 +0000 @@ -496,11 +496,20 @@ If there are fewer hosts selected by the placement specification than demanded by ``count``, cephadm will deploy only on the selected hosts. +.. _cephadm-extra-container-args: + Extra Container Arguments ========================= .. warning:: - The arguments provided for extra container args are limited to whatever arguments are available for a `run` command from whichever container engine you are using. 
Providing any arguments the `run` command does not support (or invalid values for arguments) will cause the daemon to fail to start. + The arguments provided for extra container args are limited to whatever arguments are available for + a `run` command from whichever container engine you are using. Providing any arguments the `run` + command does not support (or invalid values for arguments) will cause the daemon to fail to start. + +.. note:: + + For arguments passed to the process running inside the container rather than the for + the container runtime itself, see :ref:`cephadm-extra-entrypoint-args` Cephadm supports providing extra miscellaneous container arguments for @@ -544,6 +553,82 @@ - "-v" - "/opt/ceph_cert/host.cert:/etc/grafana/certs/cert_file:ro" +.. _cephadm-extra-entrypoint-args: + +Extra Entrypoint Arguments +========================== + + +.. note:: + + For arguments intended for the container runtime rather than the process inside + it, see :ref:`cephadm-extra-container-args` + +Similar to extra container args for the container runtime, Cephadm supports +appending to args passed to the entrypoint process running +within a container. For example, to set the collector textfile directory for +the node-exporter service , one could apply a service spec like + +.. code-block:: yaml + + service_type: node-exporter + service_name: node-exporter + placement: + host_pattern: '*' + extra_entrypoint_args: + - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2" + +Custom Config Files +=================== + +Cephadm supports specifying miscellaneous config files for daemons. +To do so, users must provide both the content of the config file and the +location within the daemon's container at which it should be mounted. After +applying a YAML spec with custom config files specified and having cephadm +redeploy the daemons for which the config files are specified, these files will +be mounted within the daemon's container at the specified location. + +Example service spec: + +.. code-block:: yaml + + service_type: grafana + service_name: grafana + custom_configs: + - mount_path: /etc/example.conf + content: | + setting1 = value1 + setting2 = value2 + - mount_path: /usr/share/grafana/example.cert + content: | + -----BEGIN PRIVATE KEY----- + V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt + ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15 + IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu + YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg + ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= + -----END PRIVATE KEY----- + -----BEGIN CERTIFICATE----- + V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt + ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15 + IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu + YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg + ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8= + -----END CERTIFICATE----- + +To make these new config files actually get mounted within the +containers for the daemons + +.. prompt:: bash + + ceph orch redeploy + +For example: + +.. prompt:: bash + + ceph orch redeploy grafana + .. 
_orch-rm: Removing a Service diff -Nru ceph-16.2.11+ds/doc/cephadm/services/monitoring.rst ceph-16.2.15+ds/doc/cephadm/services/monitoring.rst --- ceph-16.2.11+ds/doc/cephadm/services/monitoring.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/services/monitoring.rst 2024-02-26 19:21:09.000000000 +0000 @@ -299,13 +299,16 @@ Setting up Prometheus ----------------------- -Setting Prometheus Retention Time -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Setting Prometheus Retention Size and Time +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Cephadm provides the option to set the Prometheus TDSB retention time using -a ``retention_time`` field in the Prometheus service spec. The value defaults -to 15 days (15d). If you would like a different value, such as 1 year (1y) you -can apply a service spec similar to: +Cephadm can configure Prometheus TSDB retention by specifying ``retention_time`` +and ``retention_size`` values in the Prometheus service spec. +The retention time value defaults to 15 days (15d). Users can set a different value/unit where +supported units are: 'y', 'w', 'd', 'h', 'm' and 's'. The retention size value defaults +to 0 (disabled). Supported units in this case are: 'B', 'KB', 'MB', 'GB', 'TB', 'PB' and 'EB'. + +In the following example spec we set the retention time to 1 year and the size to 1GB. .. code-block:: yaml @@ -314,6 +317,7 @@ count: 1 spec: retention_time: "1y" + retention_size: "1GB" .. note:: diff -Nru ceph-16.2.11+ds/doc/cephadm/services/osd.rst ceph-16.2.15+ds/doc/cephadm/services/osd.rst --- ceph-16.2.11+ds/doc/cephadm/services/osd.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/services/osd.rst 2024-02-26 19:21:09.000000000 +0000 @@ -308,7 +308,7 @@ .. prompt:: bash # - orch osd rm --replace [--force] + ceph orch osd rm --replace [--force] Example: diff -Nru ceph-16.2.11+ds/doc/cephadm/services/rgw.rst ceph-16.2.15+ds/doc/cephadm/services/rgw.rst --- ceph-16.2.11+ds/doc/cephadm/services/rgw.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/services/rgw.rst 2024-02-26 19:21:09.000000000 +0000 @@ -164,8 +164,10 @@ deploy and manage a combination of haproxy and keepalived to provide load balancing on a floating virtual IP. -If SSL is used, then SSL must be configured and terminated by the ingress service -and not RGW itself. +If the RGW service is configured with SSL enabled, then the ingress service +will use the `ssl` and `verify none` options in the backend configuration. +Trust verification is disabled because the backends are accessed by IP +address instead of FQDN. .. image:: ../../images/HAProxy_for_RGW.svg @@ -186,8 +188,7 @@ Prerequisites ------------- -* An existing RGW service, without SSL. (If you want SSL service, the certificate - should be configured on the ingress service, not the RGW service.) +* An existing RGW service. Deploying --------- diff -Nru ceph-16.2.11+ds/doc/cephadm/troubleshooting.rst ceph-16.2.15+ds/doc/cephadm/troubleshooting.rst --- ceph-16.2.11+ds/doc/cephadm/troubleshooting.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephadm/troubleshooting.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,22 +1,19 @@ Troubleshooting =============== -You might need to investigate why a cephadm command failed +You may wish to investigate why a cephadm command failed or why a certain service no longer runs properly. -Cephadm deploys daemons as containers. 
This means that -troubleshooting those containerized daemons might work -differently than you expect (and that is certainly true if -you expect this troubleshooting to work the way that -troubleshooting does when the daemons involved aren't -containerized). +Cephadm deploys daemons within containers. This means that +troubleshooting those containerized daemons will require +a different process than traditional package-install daemons. Here are some tools and commands to help you troubleshoot your Ceph environment. .. _cephadm-pause: -Pausing or disabling cephadm +Pausing or Disabling cephadm ---------------------------- If something goes wrong and cephadm is behaving badly, you can @@ -45,16 +42,15 @@ individual services. -Per-service and per-daemon events +Per-service and Per-daemon Events --------------------------------- -In order to help with the process of debugging failed daemon -deployments, cephadm stores events per service and per daemon. +In order to facilitate debugging failed daemons, +cephadm stores events per service and per daemon. These events often contain information relevant to -troubleshooting -your Ceph cluster. +troubleshooting your Ceph cluster. -Listing service events +Listing Service Events ~~~~~~~~~~~~~~~~~~~~~~ To see the events associated with a certain service, run a @@ -82,7 +78,7 @@ - '2021-02-01T12:09:25.264584 service:alertmanager [ERROR] "Failed to apply: Cannot place on unknown_host: Unknown hosts"' -Listing daemon events +Listing Daemon Events ~~~~~~~~~~~~~~~~~~~~~ To see the events associated with a certain daemon, run a @@ -106,16 +102,16 @@ mds.cephfs.hostname.ppdhsz on host 'hostname'" -Checking cephadm logs +Checking Cephadm Logs --------------------- -To learn how to monitor the cephadm logs as they are generated, read :ref:`watching_cephadm_logs`. +To learn how to monitor cephadm logs as they are generated, read :ref:`watching_cephadm_logs`. -If your Ceph cluster has been configured to log events to files, there will exist a -cephadm log file called ``ceph.cephadm.log`` on all monitor hosts (see -:ref:`cephadm-logs` for a more complete explanation of this). +If your Ceph cluster has been configured to log events to files, there will be a +``ceph.cephadm.log`` file on all monitor hosts (see +:ref:`cephadm-logs` for a more complete explanation). -Gathering log files +Gathering Log Files ------------------- Use journalctl to gather the log files of all daemons: @@ -140,7 +136,7 @@ cephadm logs --fsid --name "$name" > $name; done -Collecting systemd status +Collecting Systemd Status ------------------------- To print the state of a systemd unit, run:: @@ -156,7 +152,7 @@ done -List all downloaded container images +List all Downloaded Container Images ------------------------------------ To list all container images that are downloaded on a host: @@ -170,16 +166,16 @@ "registry.opensuse.org/opensuse/leap:15.2" -Manually running containers +Manually Running Containers --------------------------- -Cephadm writes small wrappers that run a containers. Refer to +Cephadm uses small wrappers when running containers. Refer to ``/var/lib/ceph///unit.run`` for the container execution command. .. _cephadm-ssh-errors: -SSH errors +SSH Errors ---------- Error message:: @@ -191,7 +187,7 @@ Please make sure that the host is reachable and accepts connections using the cephadm SSH key ... -Things users can do: +Things Ceph administrators can do: 1. 
Ensure cephadm has an SSH identity key:: @@ -224,7 +220,7 @@ [root@mon1 ~]# cephadm shell -- ceph cephadm get-pub-key > ~/ceph.pub [root@mon1 ~]# grep "`cat ~/ceph.pub`" /root/.ssh/authorized_keys -Failed to infer CIDR network error +Failed to Infer CIDR network error ---------------------------------- If you see this error:: @@ -241,7 +237,7 @@ For more detail on operations of this kind, see :ref:`deploy_additional_monitors` -Accessing the admin socket +Accessing the Admin Socket -------------------------- Each Ceph daemon provides an admin socket that bypasses the @@ -252,12 +248,12 @@ [root@mon1 ~]# cephadm enter --name [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-.asok config show -Calling miscellaneous ceph tools +Running Various Ceph Tools -------------------------------- -To call miscellaneous like ``ceph-objectstore-tool`` or -``ceph-monstore-tool``, you can run them by calling -``cephadm shell --name `` like so:: +To run Ceph tools like ``ceph-objectstore-tool`` or +``ceph-monstore-tool``, invoke the cephadm CLI with +``cephadm shell --name ``. For example:: root@myhostname # cephadm unit --name mon.myhostname stop root@myhostname # cephadm shell --name mon.myhostname @@ -272,21 +268,21 @@ election_strategy: 1 0: [v2:127.0.0.1:3300/0,v1:127.0.0.1:6789/0] mon.myhostname -This command sets up the environment in a way that is suitable -for extended daemon maintenance and running the deamon interactively. +The cephadm shell sets up the environment in a way that is suitable +for extended daemon maintenance and running daemons interactively. .. _cephadm-restore-quorum: -Restoring the MON quorum ------------------------- +Restoring the Monitor Quorum +---------------------------- -In case the Ceph MONs cannot form a quorum, cephadm is not able -to manage the cluster, until the quorum is restored. +If the Ceph monitor daemons (mons) cannot form a quorum, cephadm will not be +able to manage the cluster until quorum is restored. -In order to restore the MON quorum, remove unhealthy MONs +In order to restore the quorum, remove unhealthy monitors form the monmap by following these steps: -1. Stop all MONs. For each MON host:: +1. Stop all mons. For each mon host:: ssh {mon-host} cephadm unit --name mon.`hostname` stop @@ -301,18 +297,19 @@ .. _cephadm-manually-deploy-mgr: -Manually deploying a MGR daemon -------------------------------- -cephadm requires a MGR daemon in order to manage the cluster. In case the cluster -the last MGR of a cluster was removed, follow these steps in order to deploy -a MGR ``mgr.hostname.smfvfd`` on a random host of your cluster manually. +Manually Deploying a Manager Daemon +----------------------------------- +At least one manager (mgr) daemon is required by cephadm in order to manage the +cluster. If the last mgr in a cluster has been removed, follow these steps in +order to deploy a manager called (for example) +``mgr.hostname.smfvfd`` on a random host of your cluster manually. Disable the cephadm scheduler, in order to prevent cephadm from removing the new -MGR. See :ref:`cephadm-enable-cli`:: +manager. 
See :ref:`cephadm-enable-cli`:: ceph config-key set mgr/cephadm/pause true -Then get or create the auth entry for the new MGR:: +Then get or create the auth entry for the new manager:: ceph auth get-or-create mgr.hostname.smfvfd mon "profile mgr" osd "allow *" mds "allow *" @@ -338,26 +335,26 @@ cephadm --image deploy --fsid --name mgr.hostname.smfvfd --config-json config-json.json -Analyzing core dumps +Analyzing Core Dumps --------------------- -In case a Ceph daemon crashes, cephadm supports analyzing core dumps. To enable core dumps, run +When a Ceph daemon crashes, cephadm supports analyzing core dumps. To enable core dumps, run .. prompt:: bash # ulimit -c unlimited -core dumps will now be written to ``/var/lib/systemd/coredump``. +Core dumps will now be written to ``/var/lib/systemd/coredump``. .. note:: - core dumps are not namespaced by the kernel, which means + Core dumps are not namespaced by the kernel, which means they will be written to ``/var/lib/systemd/coredump`` on the container host. -Now, wait for the crash to happen again. (To simulate the crash of a daemon, run e.g. ``killall -3 ceph-mon``) +Now, wait for the crash to happen again. To simulate the crash of a daemon, run e.g. ``killall -3 ceph-mon``. -Install debug packages by entering the cephadm shell and install ``ceph-debuginfo``:: +Install debug packages including ``ceph-debuginfo`` by entering the cephadm shelll:: # cephadm shell --mount /var/lib/systemd/coredump [ceph: root@host1 /]# dnf install ceph-debuginfo gdb zstd diff -Nru ceph-16.2.11+ds/doc/cephfs/administration.rst ceph-16.2.15+ds/doc/cephfs/administration.rst --- ceph-16.2.11+ds/doc/cephfs/administration.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/administration.rst 2024-02-26 19:21:09.000000000 +0000 @@ -15,7 +15,7 @@ :: - fs new + ceph fs new This command creates a new file system. The file system name and metadata pool name are self-explanatory. The specified data pool is the default data pool and @@ -25,13 +25,13 @@ :: - fs ls + ceph fs ls List all file systems by name. :: - fs dump [epoch] + ceph fs dump [epoch] This dumps the FSMap at the given epoch (default: current) which includes all file system settings, MDS daemons and the ranks they hold, and the list of @@ -40,7 +40,7 @@ :: - fs rm [--yes-i-really-mean-it] + ceph fs rm [--yes-i-really-mean-it] Destroy a CephFS file system. This wipes information about the state of the file system from the FSMap. The metadata pool and data pools are untouched and @@ -48,28 +48,28 @@ :: - fs get + ceph fs get Get information about the named file system, including settings and ranks. This -is a subset of the same information from the ``fs dump`` command. +is a subset of the same information from the ``ceph fs dump`` command. :: - fs set + ceph fs set Change a setting on a file system. These settings are specific to the named file system and do not affect other file systems. :: - fs add_data_pool + ceph fs add_data_pool Add a data pool to the file system. This pool can be used for file layouts as an alternate location to store file data. :: - fs rm_data_pool + ceph fs rm_data_pool This command removes the specified pool from the list of data pools for the file system. If any files have layouts for the removed data pool, the file @@ -82,7 +82,7 @@ :: - fs set max_file_size + ceph fs set max_file_size CephFS has a configurable maximum file size, and it's 1TB by default. 
You may wish to set this limit higher if you expect to store large files @@ -116,13 +116,13 @@ :: - fs set down true + ceph fs set down true To bring the cluster back online: :: - fs set down false + ceph fs set down false This will also restore the previous value of max_mds. MDS daemons are brought down in a way such that journals are flushed to the metadata pool and all @@ -133,11 +133,11 @@ ----------------------------------------------------------------- To allow rapidly deleting a file system (for testing) or to quickly bring the -file system and MDS daemons down, use the ``fs fail`` command: +file system and MDS daemons down, use the ``ceph fs fail`` command: :: - fs fail + ceph fs fail This command sets a file system flag to prevent standbys from activating on the file system (the ``joinable`` flag). @@ -146,7 +146,7 @@ :: - fs set joinable false + ceph fs set joinable false Then the operator can fail all of the ranks which causes the MDS daemons to respawn as standbys. The file system will be left in a degraded state. @@ -154,7 +154,7 @@ :: # For all ranks, 0-N: - mds fail : + ceph mds fail : Once all ranks are inactive, the file system may also be deleted or left in this state for other purposes (perhaps disaster recovery). @@ -163,7 +163,7 @@ :: - fs set joinable true + ceph fs set joinable true Daemons @@ -182,34 +182,35 @@ :: - mds fail + ceph mds fail Mark an MDS daemon as failed. This is equivalent to what the cluster would do if an MDS daemon had failed to send a message to the mon for ``mds_beacon_grace`` second. If the daemon was active and a suitable -standby is available, using ``mds fail`` will force a failover to the standby. +standby is available, using ``ceph mds fail`` will force a failover to the +standby. -If the MDS daemon was in reality still running, then using ``mds fail`` +If the MDS daemon was in reality still running, then using ``ceph mds fail`` will cause the daemon to restart. If it was active and a standby was available, then the "failed" daemon will return as a standby. :: - tell mds. command ... + ceph tell mds. command ... Send a command to the MDS daemon(s). Use ``mds.*`` to send a command to all daemons. Use ``ceph tell mds.* help`` to learn available commands. :: - mds metadata + ceph mds metadata Get metadata about the given MDS known to the Monitors. :: - mds repaired + ceph mds repaired Mark the file system rank as repaired. Unlike the name suggests, this command does not change a MDS; it manipulates the file system rank which has been @@ -228,14 +229,14 @@ :: - fs required_client_features add reply_encoding - fs required_client_features rm reply_encoding + ceph fs required_client_features add reply_encoding + ceph fs required_client_features rm reply_encoding To list all CephFS features :: - fs feature ls + ceph fs feature ls Clients that are missing newly added features will be evicted automatically. @@ -330,7 +331,7 @@ :: - fs flag set [] + ceph fs flag set [] Sets a global CephFS flag (i.e. not specific to a particular file system). Currently, the only flag setting is 'enable_multiple' which allows having @@ -352,13 +353,13 @@ :: - mds rmfailed + ceph mds rmfailed This removes a rank from the failed set. :: - fs reset + ceph fs reset This command resets the file system state to defaults, except for the name and pools. Non-zero ranks are saved in the stopped set. @@ -366,7 +367,7 @@ :: - fs new --fscid --force + ceph fs new --fscid --force This command creates a file system with a specific **fscid** (file system cluster ID). 
You may want to do this when an application expects the file system's ID to be diff -Nru ceph-16.2.11+ds/doc/cephfs/cephfs-mirroring.rst ceph-16.2.15+ds/doc/cephfs/cephfs-mirroring.rst --- ceph-16.2.11+ds/doc/cephfs/cephfs-mirroring.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/cephfs-mirroring.rst 2024-02-26 19:21:09.000000000 +0000 @@ -14,6 +14,8 @@ The primary (local) and secondary (remote) Ceph clusters version should be Pacific or later. +.. _cephfs_mirroring_creating_users: + Creating Users -------------- @@ -42,80 +44,155 @@ $ cephfs-mirror --id mirror --cluster site-a -f -.. note:: User used here is `mirror` created in the `Creating Users` section. +.. note:: The user specified here is `mirror`, the creation of which is + described in the :ref:`Creating Users` + section. + +Multiple ``cephfs-mirror`` daemons may be deployed for concurrent +synchronization and high availability. Mirror daemons share the synchronization +load using a simple ``M/N`` policy, where ``M`` is the number of directories +and ``N`` is the number of ``cephfs-mirror`` daemons. + +When ``cephadm`` is used to manage a Ceph cluster, ``cephfs-mirror`` daemons can be +deployed by running the following command: + +.. prompt:: bash $ + + ceph orch apply cephfs-mirror + +To deploy multiple mirror daemons, run a command of the following form: + +.. prompt:: bash $ + + ceph orch apply cephfs-mirror --placement= + +For example, to deploy 3 `cephfs-mirror` daemons on different hosts, run a command of the following form: + +.. prompt:: bash $ + + $ ceph orch apply cephfs-mirror --placement="3 host1,host2,host3" Interface --------- -`Mirroring` module (manager plugin) provides interfaces for managing directory snapshot -mirroring. Manager interfaces are (mostly) wrappers around monitor commands for managing -file system mirroring and is the recommended control interface. +The `Mirroring` module (manager plugin) provides interfaces for managing +directory snapshot mirroring. These are (mostly) wrappers around monitor +commands for managing file system mirroring and is the recommended control +interface. Mirroring Module ---------------- -The mirroring module is responsible for assigning directories to mirror daemons for -synchronization. Multiple mirror daemons can be spawned to achieve concurrency in -directory snapshot synchronization. When mirror daemons are spawned (or terminated) -, the mirroring module discovers the modified set of mirror daemons and rebalances -the directory assignment amongst the new set thus providing high-availability. +The mirroring module is responsible for assigning directories to mirror daemons +for synchronization. Multiple mirror daemons can be spawned to achieve +concurrency in directory snapshot synchronization. When mirror daemons are +spawned (or terminated), the mirroring module discovers the modified set of +mirror daemons and rebalances directory assignments across the new set, thus +providing high-availability. + +.. note:: Deploying a single mirror daemon is recommended. Running multiple + daemons is untested. + +The mirroring module is disabled by default. To enable the mirroring module, +run the following command: + +.. prompt:: bash $ + + ceph mgr module enable mirroring -.. note:: Multiple mirror daemons is currently untested. Only a single mirror daemon - is recommended. +The mirroring module provides a family of commands that can be used to control +the mirroring of directory snapshots. 
To add or remove directories, mirroring +must be enabled for a given file system. To enable mirroring for a given file +system, run a command of the following form: -Mirroring module is disabled by default. To enable mirroring use:: +.. prompt:: bash $ - $ ceph mgr module enable mirroring + ceph fs snapshot mirror enable -Mirroring module provides a family of commands to control mirroring of directory -snapshots. To add or remove directories, mirroring needs to be enabled for a given -file system. To enable mirroring use:: +.. note:: "Mirroring module" commands are prefixed with ``fs snapshot mirror``. + This distinguishes them from "monitor commands", which are prefixed with ``fs + mirror``. Be sure (in this context) to use module commands. - $ ceph fs snapshot mirror enable +To disable mirroring for a given file system, run a command of the following form: -.. note:: Mirroring module commands use `fs snapshot mirror` prefix as compared to - the monitor commands which `fs mirror` prefix. Make sure to use module - commands. +.. prompt:: bash $ -To disable mirroring, use:: + ceph fs snapshot mirror disable - $ ceph fs snapshot mirror disable +After mirroring is enabled, add a peer to which directory snapshots are to be +mirrored. Peers are specified by the ``@`` format, which is +referred to elsewhere in this document as the ``remote_cluster_spec``. Peers +are assigned a unique-id (UUID) when added. See the :ref:`Creating +Users` section for instructions that describe +how to create Ceph users for mirroring. -Once mirroring is enabled, add a peer to which directory snapshots are to be mirrored. -Peers follow `@` specification and get assigned a unique-id (UUID) -when added. See `Creating Users` section on how to create Ceph users for mirroring. +To add a peer, run a command of the following form: -To add a peer use:: +.. prompt:: bash $ - $ ceph fs snapshot mirror peer_add [] [] [] + ceph fs snapshot mirror peer_add [] [] [] -`` is optional, and defaults to `` (on the remote cluster). +```` is of the format ``client.@``. -This requires the remote cluster ceph configuration and user keyring to be available in -the primary cluster. See `Bootstrap Peers` section to avoid this. `peer_add` additionally -supports passing the remote cluster monitor address and the user key. However, bootstrapping -a peer is the recommended way to add a peer. +```` is optional, and defaults to `` (on the remote +cluster). + +For this command to succeed, the remote cluster's Ceph configuration and user +keyring must be available in the primary cluster. For example, if a user named +``client_mirror`` is created on the remote cluster which has ``rwps`` +permissions for the remote file system named ``remote_fs`` (see `Creating +Users`) and the remote cluster is named ``remote_ceph`` (that is, the remote +cluster configuration file is named ``remote_ceph.conf`` on the primary +cluster), run the following command to add the remote filesystem as a peer to +the primary filesystem ``primary_fs``: + +.. prompt:: bash $ + + ceph fs snapshot mirror peer_add primary_fs client.mirror_remote@remote_ceph remote_fs + +To avoid having to maintain the remote cluster configuration file and remote +ceph user keyring in the primary cluster, users can bootstrap a peer (which +stores the relevant remote cluster details in the monitor config store on the +primary cluster). See the :ref:`Bootstrap +Peers` section. + +The ``peer_add`` command supports passing the remote cluster monitor address +and the user key. 
However, bootstrapping a peer is the recommended way to add a +peer. .. note:: Only a single peer is supported right now. -To remove a peer use:: +To remove a peer, run a command of the following form: + +.. prompt:: bash $ + + ceph fs snapshot mirror peer_remove - $ ceph fs snapshot mirror peer_remove +To list file system mirror peers, run a command of the following form: -To list file system mirror peers use:: +.. prompt:: bash $ - $ ceph fs snapshot mirror peer_list + ceph fs snapshot mirror peer_list -To configure a directory for mirroring, use:: +To configure a directory for mirroring, run a command of the following form: - $ ceph fs snapshot mirror add +.. prompt:: bash $ -To stop a mirroring directory snapshots use:: + ceph fs snapshot mirror add - $ ceph fs snapshot mirror remove +To stop mirroring directory snapshots, run a command of the following form: -Only absolute directory paths are allowed. Also, paths are normalized by the mirroring -module, therfore, `/a/b/../b` is equivalent to `/a/b`. +.. prompt:: bash $ + + ceph fs snapshot mirror remove + +Only absolute directory paths are allowed. + +Paths are normalized by the mirroring module. This means that ``/a/b/../b`` is +equivalent to ``/a/b``. Paths always start from the CephFS file-system root and +not from the host system mount point. + +For example:: $ mkdir -p /d0/d1/d2 $ ceph fs snapshot mirror add cephfs /d0/d1/d2 @@ -123,16 +200,19 @@ $ ceph fs snapshot mirror add cephfs /d0/d1/../d1/d2 Error EEXIST: directory /d0/d1/d2 is already tracked -Once a directory is added for mirroring, its subdirectory or ancestor directories are -disallowed to be added for mirorring:: +After a directory is added for mirroring, the additional mirroring of +subdirectories or ancestor directories is disallowed:: $ ceph fs snapshot mirror add cephfs /d0/d1 Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2 $ ceph fs snapshot mirror add cephfs /d0/d1/d2/d3 Error EINVAL: /d0/d1/d2/d3 is a subtree of tracked path /d0/d1/d2 -Commands to check directory mapping (to mirror daemons) and directory distribution are -detailed in `Mirroring Status` section. +The :ref:`Mirroring Status` section contains +information about the commands for checking the directory mapping (to mirror +daemons) and for checking the directory distribution. + +.. _cephfs_mirroring_bootstrap_peers: Bootstrap Peers --------------- @@ -160,6 +240,9 @@ $ ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ== + +.. _cephfs_mirroring_mirroring_status: + Mirroring Status ---------------- diff -Nru ceph-16.2.11+ds/doc/cephfs/cephfs-shell.rst ceph-16.2.15+ds/doc/cephfs/cephfs-shell.rst --- ceph-16.2.11+ds/doc/cephfs/cephfs-shell.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/cephfs-shell.rst 2024-02-26 19:21:09.000000000 +0000 @@ -37,7 +37,7 @@ .. 
code:: bash [build]$ python3 -m venv venv && source venv/bin/activate && pip3 install cmd2 - [build]$ source vstart_environment.sh && source venv/bin/activate && python3 ../src/tools/cephfs/cephfs-shell + [build]$ source vstart_environment.sh && source venv/bin/activate && python3 ../src/tools/cephfs/shell/cephfs-shell Commands ======== Binary files /srv/release.debian.org/tmp/QlzwoO8YfZ/ceph-16.2.11+ds/doc/cephfs/cephfs-top.png and /srv/release.debian.org/tmp/Msf9kTZzsp/ceph-16.2.15+ds/doc/cephfs/cephfs-top.png differ diff -Nru ceph-16.2.11+ds/doc/cephfs/cephfs-top.rst ceph-16.2.15+ds/doc/cephfs/cephfs-top.rst --- ceph-16.2.11+ds/doc/cephfs/cephfs-top.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/cephfs-top.rst 2024-02-26 19:21:09.000000000 +0000 @@ -78,7 +78,15 @@ $ cephfs-top -d -Interval should be greater or equal to 0.5 second. Fractional seconds are honoured. +Refresh interval should be a positive integer. + +To dump the metrics to stdout without creating a curses display use:: + + $ cephfs-top --dump + +To dump the metrics of the given filesystem to stdout without creating a curses display use:: + + $ cephfs-top --dumpfs Interactive Commands -------------------- @@ -86,8 +94,17 @@ 1. m : Filesystem selection Displays a menu of filesystems for selection. -2. q : Quit - Exit the utility if you are at the home screen (All Filesystem Info), +2. s : Sort field selection + Designates the sort field. 'cap_hit' is the default. + +3. l : Client limit + Sets the limit on the number of clients to be displayed. + +4. r : Reset + Resets the sort field and limit value to the default. + +5. q : Quit + Exit the utility if you are at the home screen (all filesystem info), otherwise escape back to the home screen. The metrics display can be scrolled using the Arrow Keys, PgUp/PgDn, Home/End and mouse. @@ -95,3 +112,5 @@ Sample screenshot running `cephfs-top` with 2 filesystems: .. image:: cephfs-top.png + +.. note:: Minimum compatible python version for cephfs-top is 3.6.0. cephfs-top is supported on distros RHEL 8, Ubuntu 18.04, CentOS 8 and above. diff -Nru ceph-16.2.11+ds/doc/cephfs/client-auth.rst ceph-16.2.15+ds/doc/cephfs/client-auth.rst --- ceph-16.2.11+ds/doc/cephfs/client-auth.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/client-auth.rst 2024-02-26 19:21:09.000000000 +0000 @@ -24,6 +24,16 @@ To restrict clients to only mount and work within a certain directory, use path-based MDS authentication capabilities. +Note that this restriction *only* impacts the filesystem hierarchy -- the metadata +tree managed by the MDS. Clients will still be able to access the underlying +file data in RADOS directly. To segregate clients fully, you must also isolate +untrusted clients in their own RADOS namespace. You can place a client's +filesystem subtree in a particular namespace using `file layouts`_ and then +restrict their RADOS access to that namespace using `OSD capabilities`_ + +.. _file layouts: ./file-layouts +.. 
_OSD capabilities: ../rados/operations/user-management/#authorization-capabilities + Syntax ------ diff -Nru ceph-16.2.11+ds/doc/cephfs/disaster-recovery-experts.rst ceph-16.2.15+ds/doc/cephfs/disaster-recovery-experts.rst --- ceph-16.2.11+ds/doc/cephfs/disaster-recovery-experts.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/disaster-recovery-experts.rst 2024-02-26 19:21:09.000000000 +0000 @@ -149,8 +149,8 @@ :: - cephfs-data-scan scan_extents - cephfs-data-scan scan_inodes + cephfs-data-scan scan_extents [ [ ...]] + cephfs-data-scan scan_inodes [] cephfs-data-scan scan_links 'scan_extents' and 'scan_inodes' commands may take a *very long* time @@ -166,22 +166,22 @@ :: # Worker 0 - cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 + cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 # Worker 1 - cephfs-data-scan scan_extents --worker_n 1 --worker_m 4 + cephfs-data-scan scan_extents --worker_n 1 --worker_m 4 # Worker 2 - cephfs-data-scan scan_extents --worker_n 2 --worker_m 4 + cephfs-data-scan scan_extents --worker_n 2 --worker_m 4 # Worker 3 - cephfs-data-scan scan_extents --worker_n 3 --worker_m 4 + cephfs-data-scan scan_extents --worker_n 3 --worker_m 4 # Worker 0 - cephfs-data-scan scan_inodes --worker_n 0 --worker_m 4 + cephfs-data-scan scan_inodes --worker_n 0 --worker_m 4 # Worker 1 - cephfs-data-scan scan_inodes --worker_n 1 --worker_m 4 + cephfs-data-scan scan_inodes --worker_n 1 --worker_m 4 # Worker 2 - cephfs-data-scan scan_inodes --worker_n 2 --worker_m 4 + cephfs-data-scan scan_inodes --worker_n 2 --worker_m 4 # Worker 3 - cephfs-data-scan scan_inodes --worker_n 3 --worker_m 4 + cephfs-data-scan scan_inodes --worker_n 3 --worker_m 4 It is **important** to ensure that all workers have completed the scan_extents phase before any workers enter the scan_inodes phase. @@ -191,8 +191,13 @@ :: - cephfs-data-scan cleanup + cephfs-data-scan cleanup [] +Note, the data pool parameters for 'scan_extents', 'scan_inodes' and +'cleanup' commands are optional, and usually the tool will be able to +detect the pools automatically. Still you may override this. The +'scan_extents' command needs all data pools to be specified, while +'scan_inodes' and 'cleanup' commands need only the main data pool. Using an alternate metadata pool for recovery @@ -229,35 +234,29 @@ :: - ceph fs flag set enable_multiple true --yes-i-really-mean-it ceph osd pool create cephfs_recovery_meta - ceph fs new cephfs_recovery recovery --allow-dangerous-metadata-overlay + ceph fs new cephfs_recovery recovery --recover --allow-dangerous-metadata-overlay +.. note:: -The recovery file system starts with an MDS rank that will initialize the new -metadata pool with some metadata. This is necessary to bootstrap recovery. -However, now we will take the MDS down as we do not want it interacting with -the metadata pool further. - -:: - - ceph fs fail cephfs_recovery + The ``--recover`` flag prevents any MDS from joining the new file system. 
-Next, we will reset the initial metadata the MDS created: +Next, we will create the intial metadata for the fs: :: - cephfs-table-tool cephfs_recovery:all reset session - cephfs-table-tool cephfs_recovery:all reset snap - cephfs-table-tool cephfs_recovery:all reset inode + cephfs-table-tool cephfs_recovery:0 reset session + cephfs-table-tool cephfs_recovery:0 reset snap + cephfs-table-tool cephfs_recovery:0 reset inode + cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force Now perform the recovery of the metadata pool from the data pool: :: cephfs-data-scan init --force-init --filesystem cephfs_recovery --alternate-pool cephfs_recovery_meta - cephfs-data-scan scan_extents --alternate-pool cephfs_recovery_meta --filesystem - cephfs-data-scan scan_inodes --alternate-pool cephfs_recovery_meta --filesystem --force-corrupt + cephfs-data-scan scan_extents --alternate-pool cephfs_recovery_meta --filesystem + cephfs-data-scan scan_inodes --alternate-pool cephfs_recovery_meta --filesystem --force-corrupt cephfs-data-scan scan_links --filesystem cephfs_recovery .. note:: @@ -272,7 +271,6 @@ :: cephfs-journal-tool --rank=:0 event recover_dentries list --alternate-pool cephfs_recovery_meta - cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force After recovery, some recovered directories will have incorrect statistics. Ensure the parameters ``mds_verify_scatter`` and ``mds_debug_scatterstat`` are @@ -283,20 +281,22 @@ ceph config rm mds mds_verify_scatter ceph config rm mds mds_debug_scatterstat -(Note, the config may also have been set globally or via a ceph.conf file.) +.. note:: + + Also verify the config has not been set globally or with a local ceph.conf file. + Now, allow an MDS to join the recovery file system: :: ceph fs set cephfs_recovery joinable true -Finally, run a forward :doc:`scrub ` to repair the statistics. +Finally, run a forward :doc:`scrub ` to repair recursive statistics. Ensure you have an MDS running and issue: :: - ceph fs status # get active MDS - ceph tell mds. scrub start / recursive repair + ceph tell mds.recovery_fs:0 scrub start / recursive,repair,force .. note:: diff -Nru ceph-16.2.11+ds/doc/cephfs/fs-volumes.rst ceph-16.2.15+ds/doc/cephfs/fs-volumes.rst --- ceph-16.2.11+ds/doc/cephfs/fs-volumes.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/fs-volumes.rst 2024-02-26 19:21:09.000000000 +0000 @@ -3,100 +3,124 @@ FS volumes and subvolumes ========================= -A single source of truth for CephFS exports is implemented in the volumes -module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared -file system service (manila_), Ceph Container Storage Interface (CSI_), -storage administrators among others can use the common CLI provided by the -ceph-mgr volumes module to manage the CephFS exports. +The volumes module of the :term:`Ceph Manager` daemon (ceph-mgr) provides a +single source of truth for CephFS exports. The OpenStack shared file system +service (manila_) and the Ceph Container Storage Interface (CSI_) storage +administrators use the common CLI provided by the ceph-mgr ``volumes`` module +to manage CephFS exports. 
-The ceph-mgr volumes module implements the following file system export -abstactions: +The ceph-mgr ``volumes`` module implements the following file system export +abstractions: * FS volumes, an abstraction for CephFS file systems * FS subvolumes, an abstraction for independent CephFS directory trees * FS subvolume groups, an abstraction for a directory level higher than FS - subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a - set of subvolumes + subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`) + across a set of subvolumes Some possible use-cases for the export abstractions: -* FS subvolumes used as manila shares or CSI volumes +* FS subvolumes used as Manila shares or CSI volumes -* FS subvolume groups used as manila share groups +* FS subvolume groups used as Manila share groups Requirements ------------ -* Nautilus (14.2.x) or a later version of Ceph +* Nautilus (14.2.x) or later Ceph release * Cephx client user (see :doc:`/rados/operations/user-management`) with - the following minimum capabilities:: + at least the following capabilities:: mon 'allow r' mgr 'allow rw' - FS Volumes ---------- -Create a volume using:: +Create a volume by running the following command: - $ ceph fs volume create [] +.. prompt:: bash $ -This creates a CephFS file system and its data and metadata pools. It can also -try to create MDSes for the filesystem using the enabled ceph-mgr orchestrator -module (see :doc:`/mgr/orchestrator`), e.g. rook. + ceph fs volume create [] - is the volume name (an arbitrary string), and +This creates a CephFS file system and its data and metadata pools. It can also +deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for +example Rook). See :doc:`/mgr/orchestrator`. - is an optional string signifying which hosts should have NFS Ganesha -daemon containers running on them and, optionally, the total number of NFS -Ganesha daemons the cluster (should you want to have more than one NFS Ganesha -daemon running per node). For example, the following placement string means -"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host): +```` is the volume name (an arbitrary string). ```` is an +optional string that specifies the hosts that should have an MDS running on +them and, optionally, the total number of MDS daemons that the cluster should +have. For example, the following placement string means "deploy MDS on nodes +``host1`` and ``host2`` (one MDS per host):: "host1,host2" -and this placement specification says to deploy two NFS Ganesha daemons each -on nodes host1 and host2 (for a total of four NFS Ganesha daemons in the -cluster): +The following placement specification means "deploy two MDS daemons on each of +nodes ``host1`` and ``host2`` (for a total of four MDS daemons in the +cluster)":: "4 host1,host2" -For more details on placement specification refer to the `orchestrator doc -`_ -but keep in mind that specifying the placement via a YAML file is not supported. +See :ref:`orchestrator-cli-service-spec` for more on placement specification. +Specifying placement via a YAML file is not supported. -Remove a volume using:: +To remove a volume, run the following command: - $ ceph fs volume rm [--yes-i-really-mean-it] +.. prompt:: bash $ + + ceph fs volume rm [--yes-i-really-mean-it] This removes a file system and its data and metadata pools. It also tries to -remove MDSes using the enabled ceph-mgr orchestrator module. +remove MDS daemons using the enabled ceph-mgr orchestrator module. 
+ +List volumes by running the following command: + +.. prompt:: bash $ + + ceph fs volume ls + +Rename a volume by running the following command: + +.. prompt:: bash $ -List volumes using:: + ceph fs volume rename [--yes-i-really-mean-it] - $ ceph fs volume ls +Renaming a volume can be an expensive operation that requires the following: -Fetch the information of a CephFS volume using:: +- Renaming the orchestrator-managed MDS service to match the . + This involves launching a MDS service with ```` and bringing + down the MDS service with ````. +- Renaming the file system matching ```` to ````. +- Changing the application tags on the data and metadata pools of the file system + to ````. +- Renaming the metadata and data pools of the file system. - $ ceph fs volume info vol_name [--human_readable] +The CephX IDs that are authorized for ```` must be reauthorized for +````. Any ongoing operations of the clients using these IDs may +be disrupted. Ensure that mirroring is disabled on the volume. + +To fetch the information of a CephFS volume, run the following command: + +.. prompt:: bash $ + + ceph fs volume info vol_name [--human_readable] The ``--human_readable`` flag shows used and available pool capacities in KB/MB/GB. The output format is JSON and contains fields as follows: -* pools: Attributes of data and metadata pools - * avail: The amount of free space available in bytes - * used: The amount of storage consumed in bytes - * name: Name of the pool -* mon_addrs: List of monitor addresses -* used_size: Current used size of the CephFS volume in bytes -* pending_subvolume_deletions: Number of subvolumes pending deletion +* ``pools``: Attributes of data and metadata pools + * ``avail``: The amount of free space available in bytes + * ``used``: The amount of storage consumed in bytes + * ``name``: Name of the pool +* ``mon_addrs``: List of Ceph monitor addresses +* ``used_size``: Current used size of the CephFS volume in bytes +* ``pending_subvolume_deletions``: Number of subvolumes pending deletion -Sample output of volume info command:: +Sample output of the ``volume info`` command:: $ ceph fs volume info vol_name { @@ -126,114 +150,140 @@ FS Subvolume groups ------------------- -Create a subvolume group using:: +Create a subvolume group by running the following command: - $ ceph fs subvolumegroup create [--size ] [--pool_layout ] [--uid ] [--gid ] [--mode ] +.. prompt:: bash $ + + ceph fs subvolumegroup create [--size ] [--pool_layout ] [--uid ] [--gid ] [--mode ] The command succeeds even if the subvolume group already exists. When creating a subvolume group you can specify its data pool layout (see -:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals and +:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and size in bytes. The size of the subvolume group is specified by setting a quota on it (see :doc:`/cephfs/quota`). By default, the subvolume group -is created with an octal file mode '755', uid '0', gid '0' and the data pool +is created with octal file mode ``755``, uid ``0``, gid ``0`` and the data pool layout of its parent directory. +Remove a subvolume group by running a command of the following form: + +.. prompt:: bash $ -Remove a subvolume group using:: + ceph fs subvolumegroup rm [--force] - $ ceph fs subvolumegroup rm [--force] +The removal of a subvolume group fails if the subvolume group is not empty or +is non-existent. The ``--force`` flag allows the non-existent "subvolume group +remove command" to succeed. 
-The removal of a subvolume group fails if it is not empty or non-existent. -'--force' flag allows the non-existent subvolume group remove command to succeed. +Fetch the absolute path of a subvolume group by running a command of the +following form: -Fetch the absolute path of a subvolume group using:: +.. prompt:: bash $ - $ ceph fs subvolumegroup getpath + ceph fs subvolumegroup getpath -List subvolume groups using:: +List subvolume groups by running a command of the following form: - $ ceph fs subvolumegroup ls +.. prompt:: bash $ + + ceph fs subvolumegroup ls .. note:: Subvolume group snapshot feature is no longer supported in mainline CephFS (existing group snapshots can still be listed and deleted) -Fetch the metadata of a subvolume group using:: +Fetch the metadata of a subvolume group by running a command of the following form: - $ ceph fs subvolumegroup info +.. prompt:: bash $ -The output format is JSON and contains fields as follows. + ceph fs subvolumegroup info + +The output format is JSON and contains fields as follows: + +* ``atime``: access time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS" +* ``mtime``: modification time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS" +* ``ctime``: change time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS" +* ``uid``: uid of the subvolume group path +* ``gid``: gid of the subvolume group path +* ``mode``: mode of the subvolume group path +* ``mon_addrs``: list of monitor addresses +* ``bytes_pcent``: quota used in percentage if quota is set, else displays "undefined" +* ``bytes_quota``: quota size in bytes if quota is set, else displays "infinite" +* ``bytes_used``: current used size of the subvolume group in bytes +* ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS" +* ``data_pool``: data pool to which the subvolume group belongs -* atime: access time of subvolume group path in the format "YYYY-MM-DD HH:MM:SS" -* mtime: modification time of subvolume group path in the format "YYYY-MM-DD HH:MM:SS" -* ctime: change time of subvolume group path in the format "YYYY-MM-DD HH:MM:SS" -* uid: uid of subvolume group path -* gid: gid of subvolume group path -* mode: mode of subvolume group path -* mon_addrs: list of monitor addresses -* bytes_pcent: quota used in percentage if quota is set, else displays "undefined" -* bytes_quota: quota size in bytes if quota is set, else displays "infinite" -* bytes_used: current used size of the subvolume group in bytes -* created_at: time of creation of subvolume group in the format "YYYY-MM-DD HH:MM:SS" -* data_pool: data pool the subvolume group belongs to +Check the presence of any subvolume group by running a command of the following form: -Check the presence of any subvolume group using:: +.. prompt:: bash $ - $ ceph fs subvolumegroup exist + ceph fs subvolumegroup exist -The strings returned by the 'exist' command: +The ``exist`` command outputs: * "subvolumegroup exists": if any subvolumegroup is present * "no subvolumegroup exists": if no subvolumegroup is present -.. note:: It checks for the presence of custom groups and not the default one. To validate the emptiness of the volume, subvolumegroup existence check alone is not sufficient. The subvolume existence also needs to be checked as there might be subvolumes in the default group. +.. note:: This command checks for the presence of custom groups and not + presence of the default one. 
To validate the emptiness of the volume, a + subvolumegroup existence check alone is not sufficient. Subvolume existence + also needs to be checked as there might be subvolumes in the default group. -Resize a subvolume group using:: +Resize a subvolume group by running a command of the following form: - $ ceph fs subvolumegroup resize [--no_shrink] +.. prompt:: bash $ -The command resizes the subvolume group quota using the size specified by 'new_size'. -The '--no_shrink' flag prevents the subvolume group to shrink below the current used -size of the subvolume group. + ceph fs subvolumegroup resize [--no_shrink] -The subvolume group can be resized to an unlimited size by passing 'inf' or 'infinite' -as the new_size. +The command resizes the subvolume group quota, using the size specified by +``new_size``. The ``--no_shrink`` flag prevents the subvolume group from +shrinking below the current used size. -Remove a snapshot of a subvolume group using:: +The subvolume group may be resized to an infinite size by passing ``inf`` or +``infinite`` as the ``new_size``. - $ ceph fs subvolumegroup snapshot rm [--force] +Remove a snapshot of a subvolume group by running a command of the following form: -Using the '--force' flag allows the command to succeed that would otherwise -fail if the snapshot did not exist. +.. prompt:: bash $ + + ceph fs subvolumegroup snapshot rm [--force] + +Supplying the ``--force`` flag allows the command to succeed when it would otherwise +fail due to the nonexistence of the snapshot. -List snapshots of a subvolume group using:: +List snapshots of a subvolume group by running a command of the following form: - $ ceph fs subvolumegroup snapshot ls +.. prompt:: bash $ + + ceph fs subvolumegroup snapshot ls FS Subvolumes ------------- -Create a subvolume using:: +Create a subvolume using: + +.. prompt:: bash $ - $ ceph fs subvolume create [--size ] [--group_name ] [--pool_layout ] [--uid ] [--gid ] [--mode ] [--namespace-isolated] + ceph fs subvolume create [--size ] [--group_name ] [--pool_layout ] [--uid ] [--gid ] [--mode ] [--namespace-isolated] The command succeeds even if the subvolume already exists. -When creating a subvolume you can specify its subvolume group, data pool layout, -uid, gid, file mode in octal numerals, and size in bytes. The size of the subvolume is -specified by setting a quota on it (see :doc:`/cephfs/quota`). The subvolume can be -created in a separate RADOS namespace by specifying --namespace-isolated option. By -default a subvolume is created within the default subvolume group, and with an octal file -mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of -its parent directory and no size limit. +When creating a subvolume you can specify its subvolume group, data pool +layout, uid, gid, file mode in octal numerals, and size in bytes. The size of +the subvolume is specified by setting a quota on it (see :doc:`/cephfs/quota`). +The subvolume can be created in a separate RADOS namespace by specifying +--namespace-isolated option. By default a subvolume is created within the +default subvolume group, and with an octal file mode '755', uid of its +subvolume group, gid of its subvolume group, data pool layout of its parent +directory and no size limit. -Remove a subvolume using:: +Remove a subvolume using: - $ ceph fs subvolume rm [--group_name ] [--force] [--retain-snapshots] +.. prompt:: bash $ + ceph fs subvolume rm [--group_name ] [--force] [--retain-snapshots] The command removes the subvolume and its contents. 
It does this in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges @@ -246,154 +296,198 @@ '--retain-snapshots' option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. -.. note:: Snapshot retained subvolumes can be recreated using 'ceph fs subvolume create' +.. note:: Snapshot retained subvolumes can be recreated using 'ceph fs + subvolume create' + +.. note:: Retained snapshots can be used as a clone source to recreate the + subvolume, or clone to a newer subvolume. + +Resize a subvolume using: + +.. prompt:: bash $ + + ceph fs subvolume resize [--group_name ] [--no_shrink] + +The command resizes the subvolume quota using the size specified by +``new_size``. The `--no_shrink`` flag prevents the subvolume from shrinking +below the current used size of the subvolume. + +The subvolume can be resized to an unlimited (but sparse) logical size by +passing ``inf`` or ``infinite`` as `` new_size``. + +Authorize cephx auth IDs, the read/read-write access to fs subvolumes: + +.. prompt:: bash $ + + ceph fs subvolume authorize [--group_name=] [--access_level=] + +The ``access_level`` takes ``r`` or ``rw`` as value. + +Deauthorize cephx auth IDs, the read/read-write access to fs subvolumes: + +.. prompt:: bash $ -.. note:: Retained snapshots can be used as a clone source to recreate the subvolume, or clone to a newer subvolume. + ceph fs subvolume deauthorize [--group_name=] -Resize a subvolume using:: +List cephx auth IDs authorized to access fs subvolume: - $ ceph fs subvolume resize [--group_name ] [--no_shrink] +.. prompt:: bash $ -The command resizes the subvolume quota using the size specified by 'new_size'. -'--no_shrink' flag prevents the subvolume to shrink below the current used size of the subvolume. + ceph fs subvolume authorized_list [--group_name=] -The subvolume can be resized to an infinite size by passing 'inf' or 'infinite' as the new_size. +Evict fs clients based on auth ID and subvolume mounted: -Authorize cephx auth IDs, the read/read-write access to fs subvolumes:: +.. prompt:: bash $ - $ ceph fs subvolume authorize [--group_name=] [--access_level=] + ceph fs subvolume evict [--group_name=] -The 'access_level' takes 'r' or 'rw' as value. +Fetch the absolute path of a subvolume using: -Deauthorize cephx auth IDs, the read/read-write access to fs subvolumes:: +.. prompt:: bash $ - $ ceph fs subvolume deauthorize [--group_name=] + ceph fs subvolume getpath [--group_name ] -List cephx auth IDs authorized to access fs subvolume:: +Fetch the information of a subvolume using: - $ ceph fs subvolume authorized_list [--group_name=] +.. prompt:: bash $ -Evict fs clients based on auth ID and subvolume mounted:: + ceph fs subvolume info [--group_name ] - $ ceph fs subvolume evict [--group_name=] +The output format is JSON and contains fields as follows. 
+ +* ``atime``: access time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS" +* ``mtime``: modification time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS" +* ``ctime``: change time of the subvolume path in the format "YYYY-MM-DD HH:MM:SS" +* ``uid``: uid of the subvolume path +* ``gid``: gid of the subvolume path +* ``mode``: mode of the subvolume path +* ``mon_addrs``: list of monitor addresses +* ``bytes_pcent``: quota used in percentage if quota is set, else displays ``undefined`` +* ``bytes_quota``: quota size in bytes if quota is set, else displays ``infinite`` +* ``bytes_used``: current used size of the subvolume in bytes +* ``created_at``: creation time of the subvolume in the format "YYYY-MM-DD HH:MM:SS" +* ``data_pool``: data pool to which the subvolume belongs +* ``path``: absolute path of a subvolume +* ``type``: subvolume type indicating whether it's clone or subvolume +* ``pool_namespace``: RADOS namespace of the subvolume +* ``features``: features supported by the subvolume +* ``state``: current state of the subvolume -Fetch the absolute path of a subvolume using:: +If a subvolume has been removed retaining its snapshots, the output contains only fields as follows. - $ ceph fs subvolume getpath [--group_name ] +* ``type``: subvolume type indicating whether it's clone or subvolume +* ``features``: features supported by the subvolume +* ``state``: current state of the subvolume -Fetch the information of a subvolume using:: +A subvolume's ``features`` are based on the internal version of the subvolume and are +a subset of the following: - $ ceph fs subvolume info [--group_name ] +* ``snapshot-clone``: supports cloning using a subvolumes snapshot as the source +* ``snapshot-autoprotect``: supports automatically protecting snapshots, that are active clone sources, from deletion +* ``snapshot-retention``: supports removing subvolume contents, retaining any existing snapshots -The output format is json and contains fields as follows. +A subvolume's ``state`` is based on the current state of the subvolume and contains one of the following values. -* atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS" -* mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS" -* ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS" -* uid: uid of subvolume path -* gid: gid of subvolume path -* mode: mode of subvolume path -* mon_addrs: list of monitor addresses -* bytes_pcent: quota used in percentage if quota is set, else displays "undefined" -* bytes_quota: quota size in bytes if quota is set, else displays "infinite" -* bytes_used: current used size of the subvolume in bytes -* created_at: time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS" -* data_pool: data pool the subvolume belongs to -* path: absolute path of a subvolume -* type: subvolume type indicating whether it's clone or subvolume -* pool_namespace: RADOS namespace of the subvolume -* features: features supported by the subvolume -* state: current state of the subvolume +* ``complete``: subvolume is ready for all operations +* ``snapshot-retained``: subvolume is removed but its snapshots are retained -If a subvolume has been removed retaining its snapshots, the output only contains fields as follows. +List subvolumes using: -* type: subvolume type indicating whether it's clone or subvolume -* features: features supported by the subvolume -* state: current state of the subvolume +.. 
prompt:: bash $ -The subvolume "features" are based on the internal version of the subvolume and is a list containing -a subset of the following features, + ceph fs subvolume ls [--group_name ] -* "snapshot-clone": supports cloning using a subvolumes snapshot as the source -* "snapshot-autoprotect": supports automatically protecting snapshots, that are active clone sources, from deletion -* "snapshot-retention": supports removing subvolume contents, retaining any existing snapshots +.. note:: subvolumes that are removed but have snapshots retained, are also + listed. -The subvolume "state" is based on the current state of the subvolume and contains one of the following values. +Check the presence of any subvolume using: -* "complete": subvolume is ready for all operations -* "snapshot-retained": subvolume is removed but its snapshots are retained +.. prompt:: bash $ -List subvolumes using:: + ceph fs subvolume exist [--group_name ] - $ ceph fs subvolume ls [--group_name ] +These are the possible results of the ``exist`` command: -.. note:: subvolumes that are removed but have snapshots retained, are also listed. +* ``subvolume exists``: if any subvolume of given group_name is present +* ``no subvolume exists``: if no subvolume of given group_name is present -Check the presence of any subvolume using:: +Set custom metadata on the subvolume as a key-value pair using: - $ ceph fs subvolume exist [--group_name ] +.. prompt:: bash $ -The strings returned by the 'exist' command: + ceph fs subvolume metadata set [--group_name ] -* "subvolume exists": if any subvolume of given group_name is present -* "no subvolume exists": if no subvolume of given group_name is present +.. note:: If the key_name already exists then the old value will get replaced + by the new value. -Set custom metadata on the subvolume as a key-value pair using:: +.. note:: key_name and value should be a string of ASCII characters (as + specified in python's string.printable). key_name is case-insensitive and + always stored in lower case. - $ ceph fs subvolume metadata set [--group_name ] +.. note:: Custom metadata on a subvolume is not preserved when snapshotting the + subvolume, and hence, is also not preserved when cloning the subvolume + snapshot. -.. note:: If the key_name already exists then the old value will get replaced by the new value. +Get custom metadata set on the subvolume using the metadata key: -.. note:: key_name and value should be a string of ASCII characters (as specified in python's string.printable). key_name is case-insensitive and always stored in lower case. +.. prompt:: bash $ -.. note:: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot. + ceph fs subvolume metadata get [--group_name ] -Get custom metadata set on the subvolume using the metadata key:: +List custom metadata (key-value pairs) set on the subvolume using: - $ ceph fs subvolume metadata get [--group_name ] +.. prompt:: bash $ -List custom metadata (key-value pairs) set on the subvolume using:: + ceph fs subvolume metadata ls [--group_name ] - $ ceph fs subvolume metadata ls [--group_name ] +Remove custom metadata set on the subvolume using the metadata key: -Remove custom metadata set on the subvolume using the metadata key:: +.. 
prompt:: bash $ - $ ceph fs subvolume metadata rm [--group_name ] [--force] + ceph fs subvolume metadata rm [--group_name ] [--force] -Using the '--force' flag allows the command to succeed that would otherwise +Using the ``--force`` flag allows the command to succeed that would otherwise fail if the metadata key did not exist. -Create a snapshot of a subvolume using:: +Create a snapshot of a subvolume using: + +.. prompt:: bash $ - $ ceph fs subvolume snapshot create [--group_name ] + ceph fs subvolume snapshot create [--group_name ] +Remove a snapshot of a subvolume using: -Remove a snapshot of a subvolume using:: +.. prompt:: bash $ - $ ceph fs subvolume snapshot rm [--group_name ] [--force] + ceph fs subvolume snapshot rm [--group_name ] [--force] -Using the '--force' flag allows the command to succeed that would otherwise +Using the ``--force`` flag allows the command to succeed that would otherwise fail if the snapshot did not exist. -.. note:: if the last snapshot within a snapshot retained subvolume is removed, the subvolume is also removed +.. note:: if the last snapshot within a snapshot retained subvolume is removed, + the subvolume is also removed + +List snapshots of a subvolume using: + +.. prompt:: bash $ -List snapshots of a subvolume using:: + ceph fs subvolume snapshot ls [--group_name ] - $ ceph fs subvolume snapshot ls [--group_name ] +Fetch the information of a snapshot using: -Fetch the information of a snapshot using:: +.. prompt:: bash $ - $ ceph fs subvolume snapshot info [--group_name ] + ceph fs subvolume snapshot info [--group_name ] The output format is JSON and contains fields as follows. -* created_at: time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff" -* data_pool: data pool the snapshot belongs to -* has_pending_clones: "yes" if snapshot clone is in progress otherwise "no" -* pending_clones: list of in progress or pending clones and their target group if exist otherwise this field is not shown -* orphan_clones_count: count of orphan clones if snapshot has orphan clones otherwise this field is not shown +* ``created_at``: creation time of the snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff" +* ``data_pool``: data pool to which the snapshot belongs +* ``has_pending_clones``: ``yes`` if snapshot clone is in progress, otherwise ``no`` +* ``pending_clones``: list of in-progress or pending clones and their target group if any exist, otherwise this field is not shown +* ``orphan_clones_count``: count of orphan clones if the snapshot has orphan clones, otherwise this field is not shown -Sample output when snapshot clones are in progress or pending state:: +Sample output when snapshot clones are in progress or pending:: $ ceph fs subvolume snapshot info cephfs subvol snap { @@ -415,7 +509,7 @@ ] } -Sample output when no snapshot clone is in progress or pending state:: +Sample output when no snapshot clone is in progress or pending:: $ ceph fs subvolume snapshot info cephfs subvol snap { @@ -424,90 +518,129 @@ "has_pending_clones": "no" } -Set custom metadata on the snapshot as a key-value pair using:: +Set custom key-value metadata on the snapshot by running: + +.. prompt:: bash $ + + ceph fs subvolume snapshot metadata set [--group_name ] + +.. note:: If the key_name already exists then the old value will get replaced + by the new value. + +.. note:: The key_name and value should be a strings of ASCII characters (as + specified in Python's ``string.printable``). The key_name is + case-insensitive and always stored in lowercase. 
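As a sketch of how this key-value interface can be used, the following hypothetical commands tag a snapshot with an ``owner`` key and read it back; the volume, subvolume, snapshot, key, and value are all invented for the example:

.. prompt:: bash $

   ceph fs subvolume snapshot metadata set cephfs subvol1 snap1 owner alice
   ceph fs subvolume snapshot metadata get cephfs subvol1 snap1 owner

Per the note above, the key is stored in lowercase, so ``OWNER`` and ``owner`` refer to the same entry.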
- $ ceph fs subvolume snapshot metadata set [--group_name ] +.. note:: Custom metadata on a snapshot is not preserved when snapshotting the + subvolume, and hence is also not preserved when cloning the subvolume + snapshot. -.. note:: If the key_name already exists then the old value will get replaced by the new value. +Get custom metadata set on the snapshot using the metadata key: -.. note:: The key_name and value should be a string of ASCII characters (as specified in python's string.printable). The key_name is case-insensitive and always stored in lower case. +.. prompt:: bash $ -.. note:: Custom metadata on a snapshots is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot. + ceph fs subvolume snapshot metadata get [--group_name ] -Get custom metadata set on the snapshot using the metadata key:: +List custom metadata (key-value pairs) set on the snapshot using: - $ ceph fs subvolume snapshot metadata get [--group_name ] +.. prompt:: bash $ -List custom metadata (key-value pairs) set on the snapshot using:: + ceph fs subvolume snapshot metadata ls [--group_name ] - $ ceph fs subvolume snapshot metadata ls [--group_name ] +Remove custom metadata set on the snapshot using the metadata key: -Remove custom metadata set on the snapshot using the metadata key:: +.. prompt:: bash $ - $ ceph fs subvolume snapshot metadata rm [--group_name ] [--force] + ceph fs subvolume snapshot metadata rm [--group_name ] [--force] -Using the '--force' flag allows the command to succeed that would otherwise +Using the ``--force`` flag allows the command to succeed that would otherwise fail if the metadata key did not exist. Cloning Snapshots ----------------- -Subvolumes can be created by cloning subvolume snapshots. Cloning is an asynchronous operation involving copying -data from a snapshot to a subvolume. Due to this bulk copy nature, cloning is currently inefficient for very huge -data sets. - -.. note:: Removing a snapshot (source subvolume) would fail if there are pending or in progress clone operations. - -Protecting snapshots prior to cloning was a pre-requisite in the Nautilus release, and the commands to protect/unprotect -snapshots were introduced for this purpose. This pre-requisite, and hence the commands to protect/unprotect, is being -deprecated in mainline CephFS, and may be removed from a future release. +Subvolumes can be created by cloning subvolume snapshots. Cloning is an +asynchronous operation that copies data from a snapshot to a subvolume. Due to +this bulk copying, cloning is inefficient for very large data sets. + +.. note:: Removing a snapshot (source subvolume) would fail if there are + pending or in progress clone operations. + +Protecting snapshots prior to cloning was a prerequisite in the Nautilus +release, and the commands to protect/unprotect snapshots were introduced for +this purpose. This prerequisite, and hence the commands to protect/unprotect, +is being deprecated and may be removed from a future release. The commands being deprecated are: - $ ceph fs subvolume snapshot protect [--group_name ] - $ ceph fs subvolume snapshot unprotect [--group_name ] -.. note:: Using the above commands would not result in an error, but they serve no useful function. +.. prompt:: bash # -.. note:: Use subvolume info command to fetch subvolume metadata regarding supported "features" to help decide if protect/unprotect of snapshots is required, based on the "snapshot-autoprotect" feature availability. 
+ ceph fs subvolume snapshot protect [--group_name ] + ceph fs subvolume snapshot unprotect [--group_name ] -To initiate a clone operation use:: +.. note:: Using the above commands will not result in an error, but they have + no useful purpose. - $ ceph fs subvolume snapshot clone +.. note:: Use the ``subvolume info`` command to fetch subvolume metadata + regarding supported ``features`` to help decide if protect/unprotect of + snapshots is required, based on the availability of the + ``snapshot-autoprotect`` feature. -If a snapshot (source subvolume) is a part of non-default group, the group name needs to be specified as per:: +To initiate a clone operation use: - $ ceph fs subvolume snapshot clone --group_name +.. prompt:: bash $ -Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in default group). To clone to a particular group use:: + ceph fs subvolume snapshot clone + +If a snapshot (source subvolume) is a part of non-default group, the group name +needs to be specified: + +.. prompt:: bash $ + + ceph fs subvolume snapshot clone --group_name + +Cloned subvolumes can be a part of a different group than the source snapshot +(by default, cloned subvolumes are created in default group). To clone to a +particular group use: + +.. prompt:: bash $ $ ceph fs subvolume snapshot clone --target_group_name -Similar to specifying a pool layout when creating a subvolume, pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use:: +Similar to specifying a pool layout when creating a subvolume, pool layout can +be specified when creating a cloned subvolume. To create a cloned subvolume +with a specific pool layout use: - $ ceph fs subvolume snapshot clone --pool_layout +.. prompt:: bash $ -Configure maximum number of concurrent clones. The default is set to 4:: + ceph fs subvolume snapshot clone --pool_layout - $ ceph config set mgr mgr/volumes/max_concurrent_clones +Configure the maximum number of concurrent clones. The default is 4: -To check the status of a clone operation use:: +.. prompt:: bash $ - $ ceph fs clone status [--group_name ] + ceph config set mgr mgr/volumes/max_concurrent_clones + +To check the status of a clone operation use: + +.. prompt:: bash $ + + ceph fs clone status [--group_name ] A clone can be in one of the following states: -#. `pending` : Clone operation has not started -#. `in-progress` : Clone operation is in progress -#. `complete` : Clone operation has successfully finished -#. `failed` : Clone operation has failed -#. `canceled` : Clone operation is cancelled by user +#. ``pending`` : Clone operation has not started +#. ``in-progress`` : Clone operation is in progress +#. ``complete`` : Clone operation has successfully finished +#. ``failed`` : Clone operation has failed +#. ``canceled`` : Clone operation is cancelled by user The reason for a clone failure is shown as below: -#. `errno` : error number -#. `error_msg` : failure error string +#. ``errno`` : error number +#. ``error_msg`` : failure error string -Sample output of an `in-progress` clone operation:: +Here is an example of an ``in-progress`` clone:: $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1 $ ceph fs clone status cephfs clone1 @@ -522,9 +655,10 @@ } } -.. note:: The `failure` section will be shown only if the clone is in failed or cancelled state +.. 
note:: The ``failure`` section will be shown only if the clone's state is + ``failed`` or ``cancelled`` -Sample output of a `failed` clone operation:: +Here is an example of a ``failed`` clone:: $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1 $ ceph fs clone status cephfs clone1 @@ -544,11 +678,13 @@ } } -(NOTE: since `subvol1` is in default group, `source` section in `clone status` does not include group name) +(NOTE: since ``subvol1`` is in the default group, the ``source`` object's +``clone status`` does not include the group name) -.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed. +.. note:: Cloned subvolumes are accessible only after the clone operation has + successfully completed. -For a successful clone operation, `clone status` would look like so:: +After a successful clone operation, ``clone status`` will look like the below:: $ ceph fs clone status cephfs clone1 { @@ -557,37 +693,50 @@ } } -or `failed` state when clone is unsuccessful. +If a clone operation is unsuccessful, the ``state`` value will be ``failed``. -On failure of a clone operation, the partial clone needs to be deleted and the clone operation needs to be retriggered. -To delete a partial clone use:: +To retry a failed clone operation, the incomplete clone must be deleted and the +clone operation must be issued again. To delete a partial clone use: - $ ceph fs subvolume rm [--group_name ] --force +.. prompt:: bash $ -.. note:: Cloning only synchronizes directories, regular files and symbolic links. Also, inode timestamps (access and - modification times) are synchronized upto seconds granularity. + ceph fs subvolume rm [--group_name ] --force -An `in-progress` or a `pending` clone operation can be canceled. To cancel a clone operation use the `clone cancel` command:: +.. note:: Cloning synchronizes only directories, regular files and symbolic + links. Inode timestamps (access and modification times) are synchronized up + to seconds granularity. - $ ceph fs clone cancel [--group_name ] +An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel +a clone operation use the ``clone cancel`` command: -On successful cancelation, the cloned subvolume is moved to `canceled` state:: +.. prompt:: bash $ - $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1 - $ ceph fs clone cancel cephfs clone1 - $ ceph fs clone status cephfs clone1 - { - "status": { - "state": "canceled", - "source": { - "volume": "cephfs", - "subvolume": "subvol1", - "snapshot": "snap1" - } + ceph fs clone cancel [--group_name ] + +On successful cancellation, the cloned subvolume is moved to the ``canceled`` state: + +.. prompt:: bash # + + ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1 + ceph fs clone cancel cephfs clone1 + ceph fs clone status cephfs clone1 + +:: + + { + "status": { + "state": "canceled", + "source": { + "volume": "cephfs", + "subvolume": "subvol1", + "snapshot": "snap1" + } + } } } -.. note:: The canceled cloned can be deleted by using --force option in `fs subvolume rm` command. +.. note:: The canceled cloned may be deleted by supplying the ``--force`` + option to the `fs subvolume rm` command. .. _subvol-pinning: @@ -595,28 +744,33 @@ Pinning Subvolumes and Subvolume Groups --------------------------------------- - -Subvolumes and subvolume groups can be automatically pinned to ranks according -to policies. 
This can help distribute load across MDS ranks in predictable and +Subvolumes and subvolume groups may be automatically pinned to ranks according +to policies. This can distribute load across MDS ranks in predictable and stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning` for details on how pinning works. -Pinning is configured by:: +Pinning is configured by: + +.. prompt:: bash $ - $ ceph fs subvolumegroup pin + ceph fs subvolumegroup pin -or for subvolumes:: +or for subvolumes: - $ ceph fs subvolume pin +.. prompt:: bash $ + + ceph fs subvolume pin Typically you will want to set subvolume group pins. The ``pin_type`` may be one of ``export``, ``distributed``, or ``random``. The ``pin_setting`` corresponds to the extended attributed "value" as in the pinning documentation referenced above. -So, for example, setting a distributed pinning strategy on a subvolume group:: +So, for example, setting a distributed pinning strategy on a subvolume group: + +.. prompt:: bash $ - $ ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1 + ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1 Will enable distributed subtree partitioning policy for the "csi" subvolume group. This will cause every subvolume within the group to be automatically diff -Nru ceph-16.2.11+ds/doc/cephfs/health-messages.rst ceph-16.2.15+ds/doc/cephfs/health-messages.rst --- ceph-16.2.11+ds/doc/cephfs/health-messages.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/health-messages.rst 2024-02-26 19:21:09.000000000 +0000 @@ -123,7 +123,9 @@ from properly cleaning up resources used by client requests. This message appears if a client appears to have more than ``max_completed_requests`` (default 100000) requests that are complete on the MDS side but haven't - yet been accounted for in the client's *oldest tid* value. + yet been accounted for in the client's *oldest tid* value. The last tid + used by the MDS to trim completed client requests (or flush) is included + as part of `session ls` (or `client ls`) command as a debug aid. * ``MDS_DAMAGE`` Message @@ -168,3 +170,15 @@ the actual cache size (in memory) is at least 50% greater than ``mds_cache_memory_limit`` (default 1GB). Modify ``mds_health_cache_threshold`` to set the warning ratio. + +* ``MDS_CLIENTS_LAGGY`` + + Message + "Client *ID* is laggy; not evicted because some OSD(s) is/are laggy" + + Description + If OSD(s) is laggy (due to certain conditions like network cut-off, etc) + then it might make clients laggy(session might get idle or cannot flush + dirty data for cap revokes). If ``defer_client_eviction_on_laggy_osds`` is + set to true (default true), client eviction will not take place and thus + this health warning will be generated. diff -Nru ceph-16.2.11+ds/doc/cephfs/mds-config-ref.rst ceph-16.2.15+ds/doc/cephfs/mds-config-ref.rst --- ceph-16.2.11+ds/doc/cephfs/mds-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/mds-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -501,6 +501,25 @@ :Type: 32-bit Integer :Default: ``0`` +``mds_inject_skip_replaying_inotable`` + +:Description: Ceph will skip replaying the inotable when replaying the journal, + and the premary MDS will crash, while the replacing MDS won't. + (for developers only). + +:Type: Boolean +:Default: ``false`` + + +``mds_kill_skip_replaying_inotable`` + +:Description: Ceph will skip replaying the inotable when replaying the journal, + and the premary MDS will crash, while the replacing MDS won't. 
+ (for developers only). + +:Type: Boolean +:Default: ``false`` + ``mds_wipe_sessions`` diff -Nru ceph-16.2.11+ds/doc/cephfs/mount-using-fuse.rst ceph-16.2.15+ds/doc/cephfs/mount-using-fuse.rst --- ceph-16.2.11+ds/doc/cephfs/mount-using-fuse.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/mount-using-fuse.rst 2024-02-26 19:21:09.000000000 +0000 @@ -53,7 +53,8 @@ ceph-fuse --id foo --client_fs mycephfs2 /mnt/mycephfs2 -You may also add a ``client_fs`` setting to your ``ceph.conf`` +You may also add a ``client_fs`` setting to your ``ceph.conf``. Alternatively, the option +``--client_mds_namespace`` is supported for backward compatibility. Unmounting CephFS ================= diff -Nru ceph-16.2.11+ds/doc/cephfs/mount-using-kernel-driver.rst ceph-16.2.15+ds/doc/cephfs/mount-using-kernel-driver.rst --- ceph-16.2.11+ds/doc/cephfs/mount-using-kernel-driver.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/mount-using-kernel-driver.rst 2024-02-26 19:21:09.000000000 +0000 @@ -96,6 +96,28 @@ mount -t ceph :/ /mnt/mycephfs2 -o name=fs,fs=mycephfs2 +Backward Compatibility +====================== +The old syntax is supported for backward compatibility. + +To mount CephFS with the kernel driver:: + + mkdir /mnt/mycephfs + mount -t ceph :/ /mnt/mycephfs -o name=admin + +The key-value argument right after option ``-o`` is CephX credential; +``name`` is the username of the CephX user we are using to mount CephFS. + +To mount a non-default FS ``cephfs2``, in case the cluster has multiple FSs:: + + mount -t ceph :/ /mnt/mycephfs -o name=admin,fs=cephfs2 + + or + + mount -t ceph :/ /mnt/mycephfs -o name=admin,mds_namespace=cephfs2 + +.. note:: The option ``mds_namespace`` is deprecated. Use ``fs=`` instead when using the old syntax for mounting. + Unmounting CephFS ================= To unmount the Ceph file system, use the ``umount`` command as usual:: diff -Nru ceph-16.2.11+ds/doc/cephfs/nfs.rst ceph-16.2.15+ds/doc/cephfs/nfs.rst --- ceph-16.2.11+ds/doc/cephfs/nfs.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/nfs.rst 2024-02-26 19:21:09.000000000 +0000 @@ -60,6 +60,18 @@ - enable read delegations (need at least v13.0.1 'libcephfs2' package and v2.6.0 stable 'nfs-ganesha' and 'nfs-ganesha-ceph' packages) +.. important:: + + Under certain conditions, NFS access using the CephFS FSAL fails. This + causes an error to be thrown that reads "Input/output error". Under these + circumstances, the application metadata must be set for the CephFS metadata + and CephFS data pools. Do this by running the following command: + + .. prompt:: bash $ + + ceph osd pool application set cephfs cephfs + + Configuration for libcephfs clients ----------------------------------- diff -Nru ceph-16.2.11+ds/doc/cephfs/quota.rst ceph-16.2.15+ds/doc/cephfs/quota.rst --- ceph-16.2.11+ds/doc/cephfs/quota.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/quota.rst 2024-02-26 19:21:09.000000000 +0000 @@ -5,6 +5,60 @@ quota can restrict the number of *bytes* or the number of *files* stored beneath that point in the directory hierarchy. +Like most other things in CephFS, quotas are configured using virtual +extended attributes: + + * ``ceph.quota.max_files`` -- file limit + * ``ceph.quota.max_bytes`` -- byte limit + +If the extended attributes appear on a directory that means a quota is +configured there. If they are not present then no quota is set on that +directory (although one may still be configured on a parent directory). 
+ +To set a quota, set the extended attribute on a CephFS directory with a +value:: + + setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir # 100 MB + setfattr -n ceph.quota.max_files -v 10000 /some/dir # 10,000 files + +To view quota limit:: + + $ getfattr -n ceph.quota.max_bytes /some/dir + # file: dir1/ + ceph.quota.max_bytes="100000000" + $ + $ getfattr -n ceph.quota.max_files /some/dir + # file: dir1/ + ceph.quota.max_files="10000" + +.. note:: Running ``getfattr /some/dir -d -m -`` for a CephFS directory will + print none of the CephFS extended attributes. This is because the CephFS + kernel and FUSE clients hide this information from the ``listxattr(2)`` + system call. Instead, a specific CephFS extended attribute can be viewed by + running ``getfattr /some/dir -n ceph.``. + +To remove a quota, set the value of extended attribute to ``0``:: + + $ setfattr -n ceph.quota.max_bytes -v 0 /some/dir + $ getfattr /some/dir -n ceph.quota.max_bytes + dir1/: ceph.quota.max_bytes: No such attribute + $ + $ setfattr -n ceph.quota.max_files -v 0 /some/dir + $ getfattr dir1/ -n ceph.quota.max_files + dir1/: ceph.quota.max_files: No such attribute + +Space Usage Reporting and CephFS Quotas +--------------------------------------- +When the root directory of the CephFS mount has quota set on it, the available +space on the CephFS reported by space usage report tools (like ``df``) is +based on quota limit. That is, ``available space = quota limit - used space`` +instead of ``available space = total space - used space``. + +This behaviour can be disabled by setting following option in client section +of ``ceph.conf``:: + + client quota df = false + Limitations ----------- @@ -85,3 +139,11 @@ setfattr -n ceph.quota.max_bytes -v 0 /some/dir setfattr -n ceph.quota.max_files -v 0 /some/dir + + +.. note:: In cases where CephFS extended attributes are set on a CephFS + directory (for example, ``/some/dir``), running ``getfattr /some/dir -d -m + -`` will not print those CephFS extended attributes. This is because CephFS + kernel and FUSE clients hide this information from the ``listxattr(2)`` + system call. You can access a specific CephFS extended attribute by running + ``getfattr /some/dir -n ceph.`` instead. diff -Nru ceph-16.2.11+ds/doc/cephfs/scrub.rst ceph-16.2.15+ds/doc/cephfs/scrub.rst --- ceph-16.2.11+ds/doc/cephfs/scrub.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/scrub.rst 2024-02-26 19:21:09.000000000 +0000 @@ -131,3 +131,26 @@ { "return_code": 0 } + +Damages +======= + +The types of damage that can be reported and repaired by File System Scrub are: + +* DENTRY : Inode's dentry is missing. + +* DIR_FRAG : Inode's directory fragment(s) is missing. + +* BACKTRACE : Inode's backtrace in the data pool is corrupted. + +Evaluate strays using recursive scrub +===================================== + +- In order to evaluate strays i.e. purge stray directories in ``~mdsdir`` use the following command:: + + ceph tell mds.:0 scrub start ~mdsdir recursive + +- ``~mdsdir`` is not enqueued by default when scrubbing at the CephFS root. 
In order to perform stray evaluation + at root, run scrub with flags ``scrub_mdsdir`` and ``recursive``:: + + ceph tell mds.:0 scrub start / recursive,scrub_mdsdir diff -Nru ceph-16.2.11+ds/doc/cephfs/snap-schedule.rst ceph-16.2.15+ds/doc/cephfs/snap-schedule.rst --- ceph-16.2.11+ds/doc/cephfs/snap-schedule.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/snap-schedule.rst 2024-02-26 19:21:09.000000000 +0000 @@ -38,6 +38,13 @@ the start time is last midnight. So when a snapshot schedule with repeat interval `1h` is added at 13:50 with the default start time, the first snapshot will be taken at 14:00. +The time zone is assumed to be UTC if none is explicitly included in the string. +An explicit time zone will be mapped to UTC at execution. +The start time must be in ISO8601 format. Examples below: + +UTC: 2022-08-08T05:30:00 i.e. 5:30 AM UTC, without explicit time zone offset +IDT: 2022-08-08T09:00:00+03:00 i.e. 6:00 AM UTC +EDT: 2022-08-08T05:30:00-04:00 i.e. 9:30 AM UTC Retention specifications are identified by path and the retention spec itself. A retention spec consists of either a number and a time period separated by a @@ -142,6 +149,24 @@ ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly +.. note: When adding a path to snap-schedule, remember to strip off the mount + point path prefix. Paths to snap-schedule should start at the appropriate + CephFS file system root and not at the host file system root. + e.g. if the Ceph File System is mounted at ``/mnt`` and the path under which + snapshots need to be taken is ``/mnt/some/path`` then the acutal path required + by snap-schedule is only ``/some/path``. + +.. note: It should be noted that the "created" field in the snap-schedule status + command output is the timestamp at which the schedule was created. The "created" + timestamp has nothing to do with the creation of actual snapshots. The actual + snapshot creation is accounted for in the "created_count" field, which is a + cumulative count of the total number of snapshots created so far. + +.. note: The maximum number of snapshots to retain per directory is limited by the + config tunable `mds_max_snaps_per_dir`. This tunable defaults to 100. + To ensure a new snapshot can be created, one snapshot less than this will be + retained. So by default, a maximum of 99 snapshots will be retained. + Active and inactive schedules ----------------------------- Snapshot schedules can be added for a path that doesn't exist yet in the diff -Nru ceph-16.2.11+ds/doc/cephfs/troubleshooting.rst ceph-16.2.15+ds/doc/cephfs/troubleshooting.rst --- ceph-16.2.11+ds/doc/cephfs/troubleshooting.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/cephfs/troubleshooting.rst 2024-02-26 19:21:09.000000000 +0000 @@ -188,6 +188,98 @@ Please see: https://github.com/ceph/ceph/blob/master/src/script/kcon_all.sh +In-memory Log Dump +================== + +In-memory logs can be dumped by setting ``mds_extraordinary_events_dump_interval`` +during a lower level debugging (log level < 10). ``mds_extraordinary_events_dump_interval`` +is the interval in seconds for dumping the recent in-memory logs when there is an Extra-Ordinary event. 
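For example (the generic form of these commands is given below), a hypothetical configuration could combine a debug level of ``5/20`` with a 60-second dump interval; the values are illustrative only:

.. prompt:: bash $

   ceph config set mds debug_mds 5/20
   ceph config set mds mds_extraordinary_events_dump_interval 60

Here the log level ``5`` stays below 10 and the gather level ``20`` is at least 10, which satisfies the constraints described below; setting the interval back to ``0`` disables the dump again.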
+ +The Extra-Ordinary events are classified as: + +* Client Eviction +* Missed Beacon ACK from the monitors +* Missed Internal Heartbeats + +In-memory Log Dump is disabled by default to prevent log file bloat in a production environment. +The below commands consecutively enables it:: + + $ ceph config set mds debug_mds / + $ ceph config set mds mds_extraordinary_events_dump_interval + +The ``log_level`` should be < 10 and ``gather_level`` should be >= 10 to enable in-memory log dump. +When it is enabled, the MDS checks for the extra-ordinary events every +``mds_extraordinary_events_dump_interval`` seconds and if any of them occurs, MDS dumps the +in-memory logs containing the relevant event details in ceph-mds log. + +.. note:: For higher log levels (log_level >= 10) there is no reason to dump the In-memory Logs and a + lower gather level (gather_level < 10) is insufficient to gather In-memory Logs. Thus a + log level >=10 or a gather level < 10 in debug_mds would prevent enabling the In-memory Log Dump. + In such cases, when there is a failure it's required to reset the value of + mds_extraordinary_events_dump_interval to 0 before enabling using the above commands. + +The In-memory Log Dump can be disabled using:: + + $ ceph config set mds mds_extraordinary_events_dump_interval 0 + +Filesystems Become Inaccessible After an Upgrade +================================================ + +.. note:: + You can avoid ``operation not permitted`` errors by running this procedure + before an upgrade. As of May 2023, it seems that ``operation not permitted`` + errors of the kind discussed here occur after upgrades after Nautilus + (inclusive). + +IF + +you have CephFS file systems that have data and metadata pools that were +created by a ``ceph fs new`` command (meaning that they were not created +with the defaults) + +OR + +you have an existing CephFS file system and are upgrading to a new post-Nautilus +major version of Ceph + +THEN + +in order for the documented ``ceph fs authorize...`` commands to function as +documented (and to avoid 'operation not permitted' errors when doing file I/O +or similar security-related problems for all users except the ``client.admin`` +user), you must first run: + +.. prompt:: bash $ + + ceph osd pool application set cephfs metadata + +and + +.. prompt:: bash $ + + ceph osd pool application set cephfs data + +Otherwise, when the OSDs receive a request to read or write data (not the +directory info, but file data) they will not know which Ceph file system name +to look up. This is true also of pool names, because the 'defaults' themselves +changed in the major releases, from:: + + data pool=fsname + metadata pool=fsname_metadata + +to:: + + data pool=fsname.data and + metadata pool=fsname.meta + +Any setup that used ``client.admin`` for all mounts did not run into this +problem, because the admin key gave blanket permissions. + +A temporary fix involves changing mount requests to the 'client.admin' user and +its associated key. A less drastic but half-fix is to change the osd cap for +your user to just ``caps osd = "allow rw"`` and delete ``tag cephfs +data=....`` + Reporting Issues ================ diff -Nru ceph-16.2.11+ds/doc/dev/ceph_krb_auth.rst ceph-16.2.15+ds/doc/dev/ceph_krb_auth.rst --- ceph-16.2.11+ds/doc/dev/ceph_krb_auth.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/ceph_krb_auth.rst 2024-02-26 19:21:09.000000000 +0000 @@ -554,7 +554,7 @@ ... -6. A new *set parameter* was added in Ceph, ``gss ktab client file`` which +6. 
A new *set parameter* was added in Ceph, ``gss_ktab_client_file`` which points to the keytab file related to the Ceph node *(or principal)* in question. @@ -614,10 +614,10 @@ /etc/ceph/ceph.conf [global] ... - auth cluster required = gss - auth service required = gss - auth client required = gss - gss ktab client file = /{$my_new_location}/{$my_new_ktab_client_file.keytab} + auth_cluster_required = gss + auth_service_required = gss + auth_client_required = gss + gss_ktab_client_file = /{$my_new_location}/{$my_new_ktab_client_file.keytab} ... diff -Nru ceph-16.2.11+ds/doc/dev/cephadm/developing-cephadm.rst ceph-16.2.15+ds/doc/dev/cephadm/developing-cephadm.rst --- ceph-16.2.11+ds/doc/dev/cephadm/developing-cephadm.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/cephadm/developing-cephadm.rst 2024-02-26 19:21:09.000000000 +0000 @@ -32,7 +32,7 @@ for mon or mgr. - You'll see health warnings from cephadm about stray daemons--that's because the vstart-launched daemons aren't controlled by cephadm. -- The default image is ``quay.io/ceph-ci/ceph:master``, but you can change +- The default image is ``quay.io/ceph-ci/ceph:main``, but you can change this by passing ``-o container_image=...`` or ``ceph config set global container_image ...``. diff -Nru ceph-16.2.11+ds/doc/dev/cephfs-mirroring.rst ceph-16.2.15+ds/doc/dev/cephfs-mirroring.rst --- ceph-16.2.11+ds/doc/dev/cephfs-mirroring.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/cephfs-mirroring.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,38 +2,44 @@ CephFS Mirroring ================ -CephFS supports asynchronous replication of snapshots to a remote CephFS file system via -`cephfs-mirror` tool. Snapshots are synchronized by mirroring snapshot data followed by -creating a snapshot with the same name (for a given directory on the remote file system) as -the snapshot being synchronized. +CephFS supports asynchronous replication of snapshots to a remote CephFS file +system via `cephfs-mirror` tool. Snapshots are synchronized by mirroring +snapshot data followed by creating a snapshot with the same name (for a given +directory on the remote file system) as the snapshot being synchronized. Requirements ------------ -The primary (local) and secondary (remote) Ceph clusters version should be Pacific or later. +The primary (local) and secondary (remote) Ceph clusters version should be +Pacific or later. Key Idea -------- -For a given snapshot pair in a directory, `cephfs-mirror` daemon will rely on readdir diff -to identify changes in a directory tree. The diffs are applied to directory in the remote -file system thereby only synchronizing files that have changed between two snapshots. +For a given snapshot pair in a directory, `cephfs-mirror` daemon will rely on +readdir diff to identify changes in a directory tree. The diffs are applied to +directory in the remote file system thereby only synchronizing files that have +changed between two snapshots. This feature is tracked here: https://tracker.ceph.com/issues/47034. -Currently, snapshot data is synchronized by bulk copying to the remote filesystem. +Currently, snapshot data is synchronized by bulk copying to the remote +filesystem. -.. note:: Synchronizing hardlinks is not supported -- hardlinked files get synchronized - as separate files. +.. note:: Synchronizing hardlinks is not supported -- hardlinked files get + synchronized as separate files. Creating Users -------------- -Start by creating a user (on the primary/local cluster) for the mirror daemon. 
This user -requires write capability on the metadata pool to create RADOS objects (index objects) -for watch/notify operation and read capability on the data pool(s). +Start by creating a user (on the primary/local cluster) for the mirror daemon. +This user requires write capability on the metadata pool to create RADOS +objects (index objects) for watch/notify operation and read capability on the +data pool(s). - $ ceph auth get-or-create client.mirror mon 'profile cephfs-mirror' mds 'allow r' osd 'allow rw tag cephfs metadata=*, allow r tag cephfs data=*' mgr 'allow r' +.. prompt:: bash $ + + ceph auth get-or-create client.mirror mon 'profile cephfs-mirror' mds 'allow r' osd 'allow rw tag cephfs metadata=*, allow r tag cephfs data=*' mgr 'allow r' Create a user for each file system peer (on the secondary/remote cluster). This user needs to have full capabilities on the MDS (to take snapshots) and the OSDs:: diff -Nru ceph-16.2.11+ds/doc/dev/cephfs-snapshots.rst ceph-16.2.15+ds/doc/dev/cephfs-snapshots.rst --- ceph-16.2.11+ds/doc/dev/cephfs-snapshots.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/cephfs-snapshots.rst 2024-02-26 19:21:09.000000000 +0000 @@ -131,3 +131,8 @@ deleting one will result in missing file data for others. (This may even be invisible, not throwing errors to the user.) If each FS gets its own pool things probably work, but this isn't tested and may not be true. + +.. Note:: To avoid snap id collision between mon-managed snapshots and file system + snapshots, pools with mon-managed snapshots are not allowed to be attached + to a file system. Also, mon-managed snapshots can't be created in pools + already attached to a file system either. diff -Nru ceph-16.2.11+ds/doc/dev/developer_guide/basic-workflow.rst ceph-16.2.15+ds/doc/dev/developer_guide/basic-workflow.rst --- ceph-16.2.11+ds/doc/dev/developer_guide/basic-workflow.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/developer_guide/basic-workflow.rst 2024-02-26 19:21:09.000000000 +0000 @@ -87,7 +87,7 @@ #. :ref:`Push the changes in your local working copy to your fork`. -#. Create a Pull Request to push the change upstream +#. Create a Pull Request to push the change upstream. #. Create a Pull Request that asks for your changes to be added into the "upstream Ceph" repository. @@ -513,3 +513,57 @@ client: add timer_lock support Reviewed-by: Patrick Donnelly +Miscellaneous +------------- + +--set-upstream +^^^^^^^^^^^^^^ + +If you forget to include the ``--set-upstream origin x`` option in your ``git +push`` command, you will see the following error message: + +:: + + fatal: The current branch {x} has no upstream branch. + To push the current branch and set the remote as upstream, use + git push --set-upstream origin {x} + +To set up git to automatically create the upstream branch that corresponds to +the branch in your local working copy, run this command from within the +``ceph/`` directory: + +.. prompt:: bash $ + + git config --global push.autoSetupRemote true + +Deleting a Branch Locally +^^^^^^^^^^^^^^^^^^^^^^^^^ + +To delete the branch named ``localBranchName`` from the local working copy, run +a command of this form: + +.. prompt:: bash $ + + git branch -d localBranchName + +Deleting a Branch Remotely +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To delete the branch named ``remoteBranchName`` from the remote upstream branch +(which is also your fork of ``ceph/ceph``, as described in :ref:`forking`), run +a command of this form: + +.. 
prompt:: bash $ + + git push origin --delete remoteBranchName + +Searching a File Longitudinally for a String +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To search for the commit that introduced a given string (in this example, that +string is ``foo``) into a given file (in this example, that file is +``file.rst``), run a command of this form: + +.. prompt:: bash $ + + git log -S 'foo' file.rst diff -Nru ceph-16.2.11+ds/doc/dev/developer_guide/essentials.rst ceph-16.2.15+ds/doc/dev/developer_guide/essentials.rst --- ceph-16.2.11+ds/doc/dev/developer_guide/essentials.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/developer_guide/essentials.rst 2024-02-26 19:21:09.000000000 +0000 @@ -89,6 +89,11 @@ .. _`jump to the Ceph project`: http://tracker.ceph.com/projects/ceph .. _`New issue`: http://tracker.ceph.com/projects/ceph/issues/new +Slack +----- + +Ceph's Slack is https://ceph-storage.slack.com/. + .. _mailing-list: Mailing lists diff -Nru ceph-16.2.11+ds/doc/dev/developer_guide/tests-integration-tests.rst ceph-16.2.15+ds/doc/dev/developer_guide/tests-integration-tests.rst --- ceph-16.2.11+ds/doc/dev/developer_guide/tests-integration-tests.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/developer_guide/tests-integration-tests.rst 2024-02-26 19:21:09.000000000 +0000 @@ -129,8 +129,8 @@ verify that teuthology can run integration tests, with and without OpenStack `upgrade `_ - for various versions of Ceph, verify that upgrades can happen - without disrupting an ongoing workload + for various versions of Ceph, verify that upgrades can happen without + disrupting an ongoing workload (`Upgrade Testing`_) .. _`ceph-deploy man page`: ../../man/8/ceph-deploy @@ -452,6 +452,82 @@ --suite rbd/thrash \ --filter 'rbd/thrash/{clusters/fixed-2.yaml clusters/openstack.yaml workloads/rbd_api_tests_copy_on_read.yaml}' +.. _upgrade-testing: + +Upgrade Testing +^^^^^^^^^^^^^^^ + +Using the upgrade suite we are able to verify that upgrades from earlier releases can complete +successfully without disrupting any ongoing workload. +Each Release branch upgrade directory includes 2-x upgrade testing. +Meaning, we are able to test the upgrade from 2 preceding releases to the current one. +The upgrade sequence is done in `parallel `_ +with other given workloads. + +For instance, the upgrade test directory from the Quincy release branch is as follows: + +.. code-block:: none + + ├── octopus-x + └── pacific-x + +It is possible to test upgrades from Octopus (2-x) or from Pacific (1-x) to Quincy (x). +A simple upgrade test consists the following order: + +.. code-block:: none + + ├── 0-start.yaml + ├── 1-tasks.yaml + ├── upgrade-sequence.yaml + └── workload + +After starting the cluster with the older release we begin running the given ``workload`` +and the ``upgrade-sequnce`` in parallel. + +.. code-block:: yaml + + - print: "**** done start parallel" + - parallel: + - workload + - upgrade-sequence + - print: "**** done end parallel" + +While the ``workload`` directory consists regular yaml files just as in any other suite, +the ``upgrade-sequnce`` is resposible for running the upgrade and awaitng its completion: + +.. code-block:: yaml + + - print: "**** done start upgrade, wait" + ... + mon.a: + - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 + - while ceph orch upgrade status | jq '.in_progress' | grep true ; do ceph orch ps ; ceph versions ; sleep 30 ; done\ + ... + - print: "**** done end upgrade, wait..." 
+ +It is also possible to upgrade in stages while running workloads in between those: + +.. code-block:: none + + ├── % + ├── 0-cluster + ├── 1-ceph-install + ├── 2-partial-upgrade + ├── 3-thrash + ├── 4-workload + ├── 5-finish-upgrade.yaml + ├── 6-quincy.yaml + └── 8-final-workload + +After starting a cluster we upgrade only 2/3 of the cluster +(``2-partial-upgrade``). The next stage is running thrash tests and given +workload tests. Later on, continuing to upgrade the rest of the cluster +(``5-finish-upgrade.yaml``). + +The last stage is requiring the updated release (``ceph require-osd-release +quincy``, ``ceph osd set-require-min-compat-client quincy``) and running the +``final-workload``. + Filtering tests by their description ------------------------------------ diff -Nru ceph-16.2.11+ds/doc/dev/network-encoding.rst ceph-16.2.15+ds/doc/dev/network-encoding.rst --- ceph-16.2.11+ds/doc/dev/network-encoding.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/network-encoding.rst 2024-02-26 19:21:09.000000000 +0000 @@ -87,7 +87,8 @@ T element[present? 1 : 0]; // Only if present is non-zero. } -Optionals are used to encode ``boost::optional``. +Optionals are used to encode ``boost::optional`` and, since introducing +C++17 to Ceph, ``std::optional``. Pair ---- diff -Nru ceph-16.2.11+ds/doc/dev/osd_internals/erasure_coding/jerasure.rst ceph-16.2.15+ds/doc/dev/osd_internals/erasure_coding/jerasure.rst --- ceph-16.2.11+ds/doc/dev/osd_internals/erasure_coding/jerasure.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/osd_internals/erasure_coding/jerasure.rst 2024-02-26 19:21:09.000000000 +0000 @@ -5,7 +5,7 @@ Introduction ------------ -The parameters interpreted by the jerasure plugin are: +The parameters interpreted by the ``jerasure`` plugin are: :: @@ -31,3 +31,5 @@ `http://jerasure.org/jerasure/gf-complete `_ . The difference between the two, if any, should match pull requests against upstream. +Note that as of 2023, the ``jerasure.org`` web site may no longer be +legitimate and/or associated with the original project. diff -Nru ceph-16.2.11+ds/doc/dev/osd_internals/past_intervals.rst ceph-16.2.15+ds/doc/dev/osd_internals/past_intervals.rst --- ceph-16.2.11+ds/doc/dev/osd_internals/past_intervals.rst 1970-01-01 00:00:00.000000000 +0000 +++ ceph-16.2.15+ds/doc/dev/osd_internals/past_intervals.rst 2024-02-26 19:21:09.000000000 +0000 @@ -0,0 +1,93 @@ +============= +PastIntervals +============= + +Purpose +------- + +There are two situations where we need to consider the set of all acting-set +OSDs for a PG back to some epoch ``e``: + + * During peering, we need to consider the acting set for every epoch back to + ``last_epoch_started``, the last epoch in which the PG completed peering and + became active. + (see :doc:`/dev/osd_internals/last_epoch_started` for a detailed explanation) + * During recovery, we need to consider the acting set for every epoch back to + ``last_epoch_clean``, the last epoch at which all of the OSDs in the acting + set were fully recovered, and the acting set was full. + +For either of these purposes, we could build such a set by iterating backwards +from the current OSDMap to the relevant epoch. Instead, we maintain a structure +PastIntervals for each PG. + +An ``interval`` is a contiguous sequence of OSDMap epochs where the PG mapping +didn't change. This includes changes to the acting set, the up set, the +primary, and several other parameters fully spelled out in +PastIntervals::check_new_interval. 
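For orientation, the peering state that is built from this information can be observed on a live cluster with ``ceph pg <pgid> query``. A hedged sketch follows; ``1.0`` is a placeholder PG id, ``jq`` is assumed to be installed, and the exact JSON layout varies across releases:

.. prompt:: bash $

   ceph pg 1.0 query | jq '.recovery_state'

The ``recovery_state`` section summarizes the most recent peering attempt, which is driven by the intervals described above.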
+ +Maintenance and Trimming +------------------------ + +The PastIntervals structure stores a record for each ``interval`` back to +last_epoch_clean. On each new ``interval`` (See AdvMap reactions, +PeeringState::should_restart_peering, and PeeringState::start_peering_interval) +each OSD with the PG will add the new ``interval`` to its local PastIntervals. +Activation messages to OSDs which do not already have the PG contain the +sender's PastIntervals so that the recipient needn't rebuild it. (See +PeeringState::activate needs_past_intervals). + +PastIntervals are trimmed in two places. First, when the primary marks the +PG clean, it clears its past_intervals instance +(PeeringState::try_mark_clean()). The replicas will do the same thing when +they receive the info (See PeeringState::update_history). + +The second, more complex, case is in PeeringState::start_peering_interval. In +the event of a "map gap", we assume that the PG actually has gone clean, but we +haven't received a pg_info_t with the updated ``last_epoch_clean`` value yet. +To explain this behavior, we need to discuss OSDMap trimming. + +OSDMap Trimming +--------------- + +OSDMaps are created by the Monitor quorum and gossiped out to the OSDs. The +Monitor cluster also determines when OSDs (and the Monitors) are allowed to +trim old OSDMap epochs. For the reasons explained above in this document, the +primary constraint is that we must retain all OSDMaps back to some epoch such +that all PGs have been clean at that or a later epoch (min_last_epoch_clean). +(See OSDMonitor::get_trim_to). + +The Monitor quorum determines min_last_epoch_clean through MOSDBeacon messages +sent periodically by each OSDs. Each message contains a set of PGs for which +the OSD is primary at that moment as well as the min_last_epoch_clean across +that set. The Monitors track these values in OSDMonitor::last_epoch_clean. + +There is a subtlety in the min_last_epoch_clean value used by the OSD to +populate the MOSDBeacon. OSD::collect_pg_stats invokes PG::with_pg_stats to +obtain the lec value, which actually uses +pg_stat_t::get_effective_last_epoch_clean() rather than +info.history.last_epoch_clean. If the PG is currently clean, +pg_stat_t::get_effective_last_epoch_clean() is the current epoch rather than +last_epoch_clean -- this works because the PG is clean at that epoch and it +allows OSDMaps to be trimmed during periods where OSDMaps are being created +(due to snapshot activity, perhaps), but no PGs are undergoing ``interval`` +changes. + +Back to PastIntervals +--------------------- + +We can now understand our second trimming case above. If OSDMaps have been +trimmed up to epoch ``e``, we know that the PG must have been clean at some epoch +>= ``e`` (indeed, **all** PGs must have been), so we can drop our PastIntevals. + +This dependency also pops up in PeeringState::check_past_interval_bounds(). +PeeringState::get_required_past_interval_bounds takes as a parameter +oldest_epoch, which comes from OSDSuperblock::cluster_osdmap_trim_lower_bound. +We use cluster_osdmap_trim_lower_bound rather than a specific osd's oldest_map +because we don't necessarily trim all MOSDMap::cluster_osdmap_trim_lower_bound. +In order to avoid doing too much work at once we limit the amount of osdmaps +trimmed using ``osd_target_transaction_size`` in OSD::trim_maps(). +For this reason, a specific OSD's oldest_map can lag behind +OSDSuperblock::cluster_osdmap_trim_lower_bound +for a while. + +See https://tracker.ceph.com/issues/49689 for an example. 
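As a practical aside, the trimming batch size referenced above is an ordinary configuration option, so it can be inspected (or, with care, adjusted) on a running cluster. A minimal sketch:

.. prompt:: bash $

   ceph config get osd osd_target_transaction_size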
diff -Nru ceph-16.2.11+ds/doc/glossary.rst ceph-16.2.15+ds/doc/glossary.rst --- ceph-16.2.11+ds/doc/glossary.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/glossary.rst 2024-02-26 19:21:09.000000000 +0000 @@ -4,16 +4,38 @@ .. glossary:: + Application + More properly called a :term:`client`, an application is any program + external to Ceph that uses a Ceph Cluster to store and + replicate data. + :ref:`BlueStore` OSD BlueStore is a storage back end used by OSD daemons, and was designed specifically for use with Ceph. BlueStore was - introduced in the Ceph Kraken release. In the Ceph Luminous - release, BlueStore became Ceph's default storage back end, - supplanting FileStore. Unlike :term:`filestore`, BlueStore - stores objects directly on Ceph block devices without any file - system interface. Since Luminous (12.2), BlueStore has been - Ceph's default and recommended storage back end. + introduced in the Ceph Kraken release. The Luminous release of + Ceph promoted BlueStore to the default OSD back end, + supplanting FileStore. As of the Reef release, FileStore is no + longer available as a storage backend. + + BlueStore stores objects directly on Ceph block devices without + a mounted file system. + + Bucket + In the context of :term:`RGW`, a bucket is a group of objects. + In a filesystem-based analogy in which objects are the + counterpart of files, buckets are the counterpart of + directories. :ref:`Multisite sync + policies` can be set on buckets, + to provide fine-grained control of data movement from one zone + to another zone. + + The concept of the bucket has been taken from AWS S3. See also + `the AWS S3 page on creating buckets `_ + and `the AWS S3 'Buckets Overview' page `_. + OpenStack Swift uses the term "containers" for what RGW and AWS call "buckets". + See `the OpenStack Storage API overview page `_. + Ceph Ceph is a distributed network storage and file system with distributed metadata management and POSIX semantics. @@ -166,9 +188,17 @@ applications, Ceph Users, and :term:`Ceph Client`\s. Ceph Storage Clusters receive data from :term:`Ceph Client`\s. - cephx - The Ceph authentication protocol. Cephx operates like Kerberos, - but it has no single point of failure. + CephX + The Ceph authentication protocol. CephX authenticates users and + daemons. CephX operates like Kerberos, but it has no single + point of failure. See the :ref:`High-availability + Authentication section` + of the Architecture document and the :ref:`CephX Configuration + Reference`. + + Client + A client is any program external to Ceph that uses a Ceph + Cluster to store and replicate data. Cloud Platforms Cloud Stacks @@ -223,6 +253,9 @@ Any single machine or server in a Ceph Cluster. See :term:`Ceph Node`. + Hybrid OSD + Refers to an OSD that has both HDD and SSD drives. + LVM tags Extensible metadata for LVM volumes and groups. It is used to store Ceph-specific information about devices and its @@ -271,6 +304,26 @@ This is the unique identifier of an OSD. This term is used interchangeably with ``fsid`` + Period + In the context of :term:`RGW`, a period is the configuration + state of the :term:`Realm`. The period stores the configuration + state of a multi-site configuration. When the period is updated, + the "epoch" is said thereby to have been changed. + + Placement Groups (PGs) + Placement groups (PGs) are subsets of each logical Ceph pool. + Placement groups perform the function of placing objects (as a + group) into OSDs. 
Ceph manages data internally at + placement-group granularity: this scales better than would + managing individual (and therefore more numerous) RADOS + objects. A cluster that has a larger number of placement groups + (for example, 100 per OSD) is better balanced than an otherwise + identical cluster with a smaller number of placement groups. + + Ceph's internal RADOS objects are each mapped to a specific + placement group, and each placement group belongs to exactly + one Ceph pool. + :ref:`Pool` A pool is a logical partition used to store objects. @@ -301,6 +354,10 @@ The block storage component of Ceph. Also called "RADOS Block Device" or :term:`Ceph Block Device`. + :ref:`Realm` + In the context of RADOS Gateway (RGW), a realm is a globally + unique namespace that consists of one or more zonegroups. + Releases Ceph Interim Release @@ -335,6 +392,28 @@ Amazon S3 RESTful API and the OpenStack Swift API. Also called "RADOS Gateway" and "Ceph Object Gateway". + scrubs + + The processes by which Ceph ensures data integrity. During the + process of scrubbing, Ceph generates a catalog of all objects + in a placement group, then ensures that none of the objects are + missing or mismatched by comparing each primary object against + its replicas, which are stored across other OSDs. Any PG + is determined to have a copy of an object that is different + than the other copies or is missing entirely is marked + "inconsistent" (that is, the PG is marked "inconsistent"). + + There are two kinds of scrubbing: light scrubbing and deep + scrubbing (also called "normal scrubbing" and "deep scrubbing", + respectively). Light scrubbing is performed daily and does + nothing more than confirm that a given object exists and that + its metadata is correct. Deep scrubbing is performed weekly and + reads the data and uses checksums to ensure data integrity. + + See :ref:`Scrubbing ` in the RADOS OSD + Configuration Reference Guide and page 141 of *Mastering Ceph, + second edition* (Fisk, Nick. 2019). + secrets Secrets are credentials used to perform digital authentication whenever privileged users must access systems that require @@ -352,5 +431,17 @@ Teuthology The collection of software that performs scripted tests on Ceph. + User + An individual or a system actor (for example, an application) + that uses Ceph clients to interact with the :term:`Ceph Storage + Cluster`. See :ref:`User` and :ref:`User + Management`. + + Zone + In the context of :term:`RGW`, a zone is a logical group that + consists of one or more :term:`RGW` instances. A zone's + configuration state is stored in the :term:`period`. See + :ref:`Zones`. + .. _https://github.com/ceph: https://github.com/ceph .. 
_Cluster Map: ../architecture#cluster-map diff -Nru ceph-16.2.11+ds/doc/images/zone-sync.svg ceph-16.2.15+ds/doc/images/zone-sync.svg --- ceph-16.2.11+ds/doc/images/zone-sync.svg 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/images/zone-sync.svg 2024-02-26 19:21:09.000000000 +0000
[Bulk regenerated Inkscape SVG markup omitted here; only the figure text is recoverable. The legible labels in the updated drawing are US-WEST, US-EAST, RADOSGW, APP, NATIVE, REST, DATASYNC, MASTERZONE, and SECONDARYZONE, and the former "MASTER REGION" caption is replaced by "MASTER ZONEGROUP".]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + M - - - - - - - - - - - - M - + style="font-size:9.75218px;font-family:ApexSans-Medium;fill:#ffffff" + x="391.19333" + y="668.88776" + clip-path="url(#clipPath60769)">M + + M + + READ ONLY + WRITE / READ + + (United States) + + NATIVE + NATIVE + REST + REST + + + + + + + - - READ ONLY - - - WRITE / READ - - - (United States) - diff -Nru ceph-16.2.11+ds/doc/index.rst ceph-16.2.15+ds/doc/index.rst --- ceph-16.2.11+ds/doc/index.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/index.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,8 +2,7 @@ Welcome to Ceph ================= -Ceph uniquely delivers **object, block, and file storage in one unified -system**. +Ceph delivers **object, block, and file storage in one unified system**. .. warning:: @@ -12,6 +11,12 @@ Ceph project. (Click anywhere in this paragraph to read the "Basic Workflow" page of the Ceph Developer Guide.) `. +.. note:: + + :ref:`If you want to make a commit to the documentation but you don't + know how to get started, read the "Documenting Ceph" page. (Click anywhere + in this paragraph to read the "Documenting Ceph" page.) `. + .. raw:: html diff -Nru ceph-16.2.11+ds/doc/install/index.rst ceph-16.2.15+ds/doc/install/index.rst --- ceph-16.2.11+ds/doc/install/index.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/install/index.rst 2024-02-26 19:21:09.000000000 +0000 @@ -4,33 +4,32 @@ Installing Ceph =============== -There are several different ways to install Ceph. Choose the -method that best suits your needs. +There are multiple ways to install Ceph. 
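For a sense of what the recommended method described below involves in practice, here is a minimal, hedged sketch of a cephadm bootstrap. It assumes the ``cephadm`` binary is already present on the first host, that a container runtime (Podman or Docker) and Python 3 are available, and that ``192.168.0.1`` is a placeholder for the monitor IP:

.. prompt:: bash $

   cephadm bootstrap --mon-ip 192.168.0.1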
Recommended methods ~~~~~~~~~~~~~~~~~~~ -:ref:`Cephadm ` installs and manages a Ceph cluster using containers and -systemd, with tight integration with the CLI and dashboard GUI. - -* cephadm only supports Octopus and newer releases. -* cephadm is fully integrated with the new orchestration API and - fully supports the new CLI and dashboard features to manage - cluster deployment. -* cephadm requires container support (podman or docker) and +:ref:`Cephadm ` installs and manages a Ceph +cluster that uses containers and systemd and is tightly integrated with the CLI +and dashboard GUI. + +* cephadm supports only Octopus and newer releases. +* cephadm is fully integrated with the orchestration API and fully supports the + CLI and dashboard features that are used to manage cluster deployment. +* cephadm requires container support (in the form of Podman or Docker) and Python 3. `Rook `_ deploys and manages Ceph clusters running in Kubernetes, while also enabling management of storage resources and -provisioning via Kubernetes APIs. We recommend Rook as the way to run Ceph in +provisioning via Kubernetes APIs. We recommend Rook as the way to run Ceph in Kubernetes or to connect an existing Ceph storage cluster to Kubernetes. -* Rook only supports Nautilus and newer releases of Ceph. +* Rook supports only Nautilus and newer releases of Ceph. * Rook is the preferred method for running Ceph on Kubernetes, or for connecting a Kubernetes cluster to an existing (external) Ceph cluster. -* Rook supports the new orchestrator API. New management features - in the CLI and dashboard are fully supported. +* Rook supports the orchestrator API. Management features in the CLI and + dashboard are fully supported. Other methods ~~~~~~~~~~~~~ @@ -39,16 +38,20 @@ Ceph clusters using Ansible. * ceph-ansible is widely deployed. -* ceph-ansible is not integrated with the new orchestrator APIs, - introduced in Nautlius and Octopus, which means that newer - management features and dashboard integration are not available. +* ceph-ansible is not integrated with the orchestrator APIs that were + introduced in Nautilus and Octopus, which means that the management features + and dashboard integration introduced in Nautilus and Octopus are not + available in Ceph clusters deployed by means of ceph-ansible. -`ceph-deploy `_ is a tool for quickly deploying clusters. +`ceph-deploy `_ is a +tool that can be used to quickly deploy clusters. It is deprecated. .. IMPORTANT:: - ceph-deploy is no longer actively maintained. It is not tested on versions of Ceph newer than Nautilus. It does not support RHEL8, CentOS 8, or newer operating systems. + ceph-deploy is not actively maintained. It is not tested on versions of Ceph + newer than Nautilus. It does not support RHEL8, CentOS 8, or newer operating + systems. `DeepSea `_ installs Ceph using Salt. @@ -67,7 +70,7 @@ Windows ~~~~~~~ -For Windows installations, please consult this document: +For Windows installations, consult this document: `Windows installation guide`_. .. _Windows installation guide: ./windows-install diff -Nru ceph-16.2.11+ds/doc/man/8/ceph-objectstore-tool.rst ceph-16.2.15+ds/doc/man/8/ceph-objectstore-tool.rst --- ceph-16.2.11+ds/doc/man/8/ceph-objectstore-tool.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/man/8/ceph-objectstore-tool.rst 2024-02-26 19:21:09.000000000 +0000 @@ -60,6 +60,8 @@ * meta-list * get-osdmap * set-osdmap +* get-superblock +* set-superblock * get-inc-osdmap * set-inc-osdmap * mark-complete @@ -414,7 +416,7 @@ .. 
option:: --op arg - Arg is one of [info, log, remove, mkfs, fsck, repair, fuse, dup, export, export-remove, import, list, fix-lost, list-pgs, dump-journal, dump-super, meta-list, get-osdmap, set-osdmap, get-inc-osdmap, set-inc-osdmap, mark-complete, reset-last-complete, apply-layout-settings, update-mon-db, dump-export, trim-pg-log] + Arg is one of [info, log, remove, mkfs, fsck, repair, fuse, dup, export, export-remove, import, list, fix-lost, list-pgs, dump-journal, dump-super, meta-list, get-osdmap, set-osdmap, get-superblock, set-superblock, get-inc-osdmap, set-inc-osdmap, mark-complete, reset-last-complete, apply-layout-settings, update-mon-db, dump-export, trim-pg-log] .. option:: --epoch arg @@ -422,7 +424,7 @@ .. option:: --file arg - path of file to export, export-remove, import, get-osdmap, set-osdmap, get-inc-osdmap or set-inc-osdmap + path of file to export, export-remove, import, get-osdmap, set-osdmap, get-superblock, set-superblock, get-inc-osdmap or set-inc-osdmap .. option:: --mon-store-path arg diff -Nru ceph-16.2.11+ds/doc/man/8/ceph.rst ceph-16.2.15+ds/doc/man/8/ceph.rst --- ceph-16.2.11+ds/doc/man/8/ceph.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/man/8/ceph.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1314,7 +1314,7 @@ Usage:: - ceph osd tier cache-mode writeback|readproxy|readonly|none + ceph osd tier cache-mode writeback|proxy|readproxy|readonly|none Subcommand ``remove`` removes the tier (the second one) from base pool (the first one). diff -Nru ceph-16.2.11+ds/doc/man/8/cephfs-top.rst ceph-16.2.15+ds/doc/man/8/cephfs-top.rst --- ceph-16.2.11+ds/doc/man/8/cephfs-top.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/man/8/cephfs-top.rst 2024-02-26 19:21:09.000000000 +0000 @@ -36,6 +36,22 @@ Perform a selftest. This mode performs a sanity check of ``stats`` module. +.. option:: --conffile [CONFFILE] + + Path to cluster configuration file + +.. option:: -d [DELAY], --delay [DELAY] + + Refresh interval in seconds (default: 1) + +.. option:: --dump + + Dump the metrics to stdout + +.. option:: --dumpfs + + Dump the metrics of the given filesystem to stdout + Descriptions of fields ====================== diff -Nru ceph-16.2.11+ds/doc/man/8/mount.ceph.rst ceph-16.2.15+ds/doc/man/8/mount.ceph.rst --- ceph-16.2.11+ds/doc/man/8/mount.ceph.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/man/8/mount.ceph.rst 2024-02-26 19:21:09.000000000 +0000 @@ -110,6 +110,12 @@ them. If an inode contains any stale file locks, read/write on the inode is not allowed until applications release all stale file locks. +:command: `fs=` + Specify the non-default file system to be mounted, when using the old syntax. + +:command: `mds_namespace=` + A synonym of "fs=" (Deprecated). + Advanced -------- :command:`cap_release_safety` @@ -236,6 +242,10 @@ mount.ceph :/ /mnt/mycephfs -o name=fs_username,secretfile=/etc/ceph/fs_username.secret +To mount using the old syntax:: + + mount -t ceph 192.168.0.1:/ /mnt/mycephfs + Availability ============ diff -Nru ceph-16.2.11+ds/doc/man/8/rados.rst ceph-16.2.15+ds/doc/man/8/rados.rst --- ceph-16.2.11+ds/doc/man/8/rados.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/man/8/rados.rst 2024-02-26 19:21:09.000000000 +0000 @@ -264,8 +264,8 @@ :command:`append` *name* *infile* Append object name to the cluster with contents from infile. -:command:`rm` *name* - Remove object name. +:command:`rm` [--force-full] *name* ... + Remove object(s) with name(s). 
With ``--force-full`` will remove when cluster is marked full. :command:`listwatchers` *name* List the watchers of object name. diff -Nru ceph-16.2.11+ds/doc/mgr/modules.rst ceph-16.2.15+ds/doc/mgr/modules.rst --- ceph-16.2.11+ds/doc/mgr/modules.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/mgr/modules.rst 2024-02-26 19:21:09.000000000 +0000 @@ -312,6 +312,7 @@ .. automethod:: MgrModule.get_perf_schema .. automethod:: MgrModule.get_counter .. automethod:: MgrModule.get_mgr_id +.. automethod:: MgrModule.get_daemon_health_metrics Exposing health checks ---------------------- diff -Nru ceph-16.2.11+ds/doc/mgr/nfs.rst ceph-16.2.15+ds/doc/mgr/nfs.rst --- ceph-16.2.11+ds/doc/mgr/nfs.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/mgr/nfs.rst 2024-02-26 19:21:09.000000000 +0000 @@ -239,7 +239,7 @@ .. code:: bash - $ ceph nfs export create cephfs --cluster-id --pseudo-path --fsname [--readonly] [--path=/path/in/cephfs] [--client_addr ...] [--squash ] + $ ceph nfs export create cephfs --cluster-id --pseudo-path --fsname [--readonly] [--path=/path/in/cephfs] [--client_addr ...] [--squash ] [--sectype ...] This creates export RADOS objects containing the export block, where @@ -266,6 +266,18 @@ value is `no_root_squash`. See the `NFS-Ganesha Export Sample`_ for permissible values. +```` specifies which authentication methods will be used when +connecting to the export. Valid values include "krb5p", "krb5i", "krb5", "sys", +and "none". More than one value can be supplied. The flag may be specified +multiple times (example: ``--sectype=krb5p --sectype=krb5i``) or multiple +values may be separated by a comma (example: ``--sectype krb5p,krb5i``). The +server will negotatiate a supported security type with the client preferring +the supplied methods left-to-right. + +.. note:: Specifying values for sectype that require Kerberos will only function on servers + that are configured to support Kerberos. Setting up NFS-Ganesha to support Kerberos + is outside the scope of this document. + .. note:: Export creation is supported only for NFS Ganesha clusters deployed using nfs interface. Create RGW Export @@ -285,7 +297,7 @@ .. code:: - $ ceph nfs export create rgw --cluster-id --pseudo-path --bucket [--user-id ] [--readonly] [--client_addr ...] [--squash ] + $ ceph nfs export create rgw --cluster-id --pseudo-path --bucket [--user-id ] [--readonly] [--client_addr ...] [--squash ] [--sectype ...] For example, to export *mybucket* via NFS cluster *mynfs* at the pseudo-path */bucketdata* to any host in the ``192.168.10.0/24`` network @@ -316,6 +328,18 @@ value is `no_root_squash`. See the `NFS-Ganesha Export Sample`_ for permissible values. +```` specifies which authentication methods will be used when +connecting to the export. Valid values include "krb5p", "krb5i", "krb5", "sys", +and "none". More than one value can be supplied. The flag may be specified +multiple times (example: ``--sectype=krb5p --sectype=krb5i``) or multiple +values may be separated by a comma (example: ``--sectype krb5p,krb5i``). The +server will negotatiate a supported security type with the client preferring +the supplied methods left-to-right. + +.. note:: Specifying values for sectype that require Kerberos will only function on servers + that are configured to support Kerberos. Setting up NFS-Ganesha to support Kerberos + is outside the scope of this document. 
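As a hedged illustration that combines the pieces above (the cluster, bucket, and network names are reused from the surrounding example text), an export restricted to Kerberos clients, with ``sys`` accepted as a fallback, could be created like this:

.. prompt:: bash $

   ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24 --sectype krb5p,sys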
+ RGW user export ^^^^^^^^^^^^^^^ diff -Nru ceph-16.2.11+ds/doc/mgr/prometheus.rst ceph-16.2.15+ds/doc/mgr/prometheus.rst --- ceph-16.2.11+ds/doc/mgr/prometheus.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/mgr/prometheus.rst 2024-02-26 19:21:09.000000000 +0000 @@ -18,9 +18,11 @@ Enabling prometheus output ========================== -The *prometheus* module is enabled with:: +The *prometheus* module is enabled with: - ceph mgr module enable prometheus +.. prompt:: bash $ + + ceph mgr module enable prometheus Configuration ------------- @@ -36,10 +38,10 @@ is registered with Prometheus's `registry `_. -:: - - ceph config set mgr mgr/prometheus/server_addr 0.0.0.0 - ceph config set mgr mgr/prometheus/server_port 9283 +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/server_addr 0.0.0. + ceph config set mgr mgr/prometheus/server_port 9283 .. warning:: @@ -54,9 +56,11 @@ might be useful to increase the scrape interval. To set a different scrape interval in the Prometheus module, set -``scrape_interval`` to the desired value:: +``scrape_interval`` to the desired value: - ceph config set mgr mgr/prometheus/scrape_interval 20 +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/scrape_interval 20 On large clusters (>1000 OSDs), the time to fetch the metrics may become significant. Without the cache, the Prometheus manager module could, especially @@ -64,7 +68,7 @@ to unresponsive or crashing Ceph manager instances. Hence, the cache is enabled by default. This means that there is a possibility that the cache becomes stale. The cache is considered stale when the time to fetch the metrics from -Ceph exceeds the configured :confval:``mgr/prometheus/scrape_interval``. +Ceph exceeds the configured ``mgr/prometheus/scrape_interval``. If that is the case, **a warning will be logged** and the module will either @@ -75,35 +79,47 @@ code (service unavailable). You can set other options using the ``ceph config set`` commands. -To tell the module to respond with possibly stale data, set it to ``return``:: +To tell the module to respond with possibly stale data, set it to ``return``: + +.. prompt:: bash $ ceph config set mgr mgr/prometheus/stale_cache_strategy return -To tell the module to respond with "service unavailable", set it to ``fail``:: +To tell the module to respond with "service unavailable", set it to ``fail``: - ceph config set mgr mgr/prometheus/stale_cache_strategy fail +.. prompt:: bash $ -If you are confident that you don't require the cache, you can disable it:: + ceph config set mgr mgr/prometheus/stale_cache_strategy fail - ceph config set mgr mgr/prometheus/cache false +If you are confident that you don't require the cache, you can disable it: + +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/cache false If you are using the prometheus module behind some kind of reverse proxy or loadbalancer, you can simplify discovering the active instance by switching -to ``error``-mode:: +to ``error``-mode: + +.. prompt:: bash $ - ceph config set mgr mgr/prometheus/standby_behaviour error + ceph config set mgr mgr/prometheus/standby_behaviour error If set, the prometheus module will repond with a HTTP error when requesting ``/`` from the standby instance. The default error code is 500, but you can configure -the HTTP response code with:: +the HTTP response code with: - ceph config set mgr mgr/prometheus/standby_error_status_code 503 +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/standby_error_status_code 503 Valid error codes are between 400-599. 
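To confirm which standby behaviour is in effect, request ``/`` from a manager instance and inspect the HTTP status code. A small sketch; ``mgr-host`` is a placeholder, and 9283 is the module's default port mentioned above:

.. prompt:: bash $

   curl -s -o /dev/null -w '%{http_code}\n' http://mgr-host:9283/

An active instance answers normally, while a standby configured with ``error`` mode returns the configured error code (500 by default, or 503 as in the example above).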
-To switch back to the default behaviour, simply set the config key to ``default``:: +To switch back to the default behaviour, simply set the config key to ``default``: + +.. prompt:: bash $ - ceph config set mgr mgr/prometheus/standby_behaviour default + ceph config set mgr mgr/prometheus/standby_behaviour default .. _prometheus-rbd-io-statistics: @@ -154,9 +170,17 @@ of ``pool[/namespace]`` entries. If the namespace is not specified the statistics are collected for all namespaces in the pool. -Example to activate the RBD-enabled pools ``pool1``, ``pool2`` and ``poolN``:: +Example to activate the RBD-enabled pools ``pool1``, ``pool2`` and ``poolN``: - ceph config set mgr mgr/prometheus/rbd_stats_pools "pool1,pool2,poolN" +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/rbd_stats_pools "pool1,pool2,poolN" + +The wildcard can be used to indicate all pools or namespaces: + +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/rbd_stats_pools "*" The module makes the list of all available images scanning the specified pools and namespaces and refreshes it periodically. The period is @@ -165,9 +189,22 @@ force refresh earlier if it detects statistics from a previously unknown RBD image. -Example to turn up the sync interval to 10 minutes:: +Example to turn up the sync interval to 10 minutes: + +.. prompt:: bash $ + + ceph config set mgr mgr/prometheus/rbd_stats_pools_refresh_interval 600 + +Ceph daemon performance counters metrics +----------------------------------------- + +With the introduction of ``ceph-exporter`` daemon, the prometheus module will no longer export Ceph daemon +perf counters as prometheus metrics by default. However, one may re-enable exporting these metrics by setting +the module option ``exclude_perf_counters`` to ``false``: + +.. prompt:: bash $ - ceph config set mgr mgr/prometheus/rbd_stats_pools_refresh_interval 600 + ceph config set mgr mgr/prometheus/exclude_perf_counters false Statistic names and labels ========================== diff -Nru ceph-16.2.11+ds/doc/mgr/telemetry.rst ceph-16.2.15+ds/doc/mgr/telemetry.rst --- ceph-16.2.11+ds/doc/mgr/telemetry.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/mgr/telemetry.rst 2024-02-26 19:21:09.000000000 +0000 @@ -153,3 +153,24 @@ ceph config set mgr mgr/telemetry/description 'My first Ceph cluster' ceph config set mgr mgr/telemetry/channel_ident true +Leaderboard +----------- + +To participate in a leaderboard in the `public dashboards +`_, run the following command: + +.. prompt:: bash $ + + ceph config set mgr mgr/telemetry/leaderboard true + +The leaderboard displays basic information about the cluster. This includes the +total storage capacity and the number of OSDs. To add a description of the +cluster, run a command of the following form: + +.. prompt:: bash $ + + ceph config set mgr mgr/telemetry/leaderboard_description 'Ceph cluster for Computational Biology at the University of XYZ' + +If the ``ident`` channel is enabled, its details will not be displayed in the +leaderboard. + diff -Nru ceph-16.2.11+ds/doc/rados/api/libcephsqlite.rst ceph-16.2.15+ds/doc/rados/api/libcephsqlite.rst --- ceph-16.2.11+ds/doc/rados/api/libcephsqlite.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/api/libcephsqlite.rst 2024-02-26 19:21:09.000000000 +0000 @@ -426,6 +426,22 @@ striped file. +Debugging +^^^^^^^^^ + +Debugging libcephsqlite can be turned on via:: + + debug_cephsqlite + +If running the ``sqlite3`` command-line tool, use: + +.. 
code:: sh + + env CEPH_ARGS='--log_to_file true --log-file sqlite3.log --debug_cephsqlite 20 --debug_ms 1' sqlite3 ... + +This will save all the usual Ceph debugging to a file ``sqlite3.log`` for inspection. + + .. _SQLite: https://sqlite.org/index.html .. _SQLite VFS: https://www.sqlite.org/vfs.html .. _SQLite Backup: https://www.sqlite.org/backup.html diff -Nru ceph-16.2.11+ds/doc/rados/configuration/auth-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/auth-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/auth-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/auth-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,107 +1,110 @@ +.. _rados-cephx-config-ref: + ======================== - Cephx Config Reference + CephX Config Reference ======================== -The ``cephx`` protocol is enabled by default. Cryptographic authentication has -some computational costs, though they should generally be quite low. If the -network environment connecting your client and server hosts is very safe and -you cannot afford authentication, you can turn it off. **This is not generally -recommended**. - -.. note:: If you disable authentication, you are at risk of a man-in-the-middle - attack altering your client/server messages, which could lead to disastrous - security effects. - -For creating users, see `User Management`_. For details on the architecture -of Cephx, see `Architecture - High Availability Authentication`_. +The CephX protocol is enabled by default. The cryptographic authentication that +CephX provides has some computational costs, though they should generally be +quite low. If the network environment connecting your client and server hosts +is very safe and you cannot afford authentication, you can disable it. +**Disabling authentication is not generally recommended**. + +.. note:: If you disable authentication, you will be at risk of a + man-in-the-middle attack that alters your client/server messages, which + could have disastrous security effects. + +For information about creating users, see `User Management`_. For details on +the architecture of CephX, see `Architecture - High Availability +Authentication`_. Deployment Scenarios ==================== -There are two main scenarios for deploying a Ceph cluster, which impact -how you initially configure Cephx. Most first time Ceph users use -``cephadm`` to create a cluster (easiest). For clusters using -other deployment tools (e.g., Chef, Juju, Puppet, etc.), you will need -to use the manual procedures or configure your deployment tool to +How you initially configure CephX depends on your scenario. There are two +common strategies for deploying a Ceph cluster. If you are a first-time Ceph +user, you should probably take the easiest approach: using ``cephadm`` to +deploy a cluster. But if your cluster uses other deployment tools (for example, +Ansible, Chef, Juju, or Puppet), you will need either to use the manual +deployment procedures or to configure your deployment tool so that it will bootstrap your monitor(s). Manual Deployment ----------------- -When you deploy a cluster manually, you have to bootstrap the monitor manually -and create the ``client.admin`` user and keyring. To bootstrap monitors, follow -the steps in `Monitor Bootstrapping`_. The steps for monitor bootstrapping are -the logical steps you must perform when using third party deployment tools like -Chef, Puppet, Juju, etc. 
+When you deploy a cluster manually, it is necessary to bootstrap the monitors +manually and to create the ``client.admin`` user and keyring. To bootstrap +monitors, follow the steps in `Monitor Bootstrapping`_. Follow these steps when +using third-party deployment tools (for example, Chef, Puppet, and Juju). -Enabling/Disabling Cephx +Enabling/Disabling CephX ======================== -Enabling Cephx requires that you have deployed keys for your monitors, -OSDs and metadata servers. If you are simply toggling Cephx on / off, -you do not have to repeat the bootstrapping procedures. +Enabling CephX is possible only if the keys for your monitors, OSDs, and +metadata servers have already been deployed. If you are simply toggling CephX +on or off, it is not necessary to repeat the bootstrapping procedures. - -Enabling Cephx +Enabling CephX -------------- -When ``cephx`` is enabled, Ceph will look for the keyring in the default search -path, which includes ``/etc/ceph/$cluster.$name.keyring``. You can override -this location by adding a ``keyring`` option in the ``[global]`` section of -your `Ceph configuration`_ file, but this is not recommended. +When CephX is enabled, Ceph will look for the keyring in the default search +path: this path includes ``/etc/ceph/$cluster.$name.keyring``. It is possible +to override this search-path location by adding a ``keyring`` option in the +``[global]`` section of your `Ceph configuration`_ file, but this is not +recommended. -Execute the following procedures to enable ``cephx`` on a cluster with -authentication disabled. If you (or your deployment utility) have already +To enable CephX on a cluster for which authentication has been disabled, carry +out the following procedure. If you (or your deployment utility) have already generated the keys, you may skip the steps related to generating keys. #. Create a ``client.admin`` key, and save a copy of the key for your client - host + host: .. prompt:: bash $ ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring - **Warning:** This will clobber any existing + **Warning:** This step will clobber any existing ``/etc/ceph/client.admin.keyring`` file. Do not perform this step if a - deployment tool has already done it for you. Be careful! + deployment tool has already generated a keyring file for you. Be careful! -#. Create a keyring for your monitor cluster and generate a monitor - secret key. +#. Create a monitor keyring and generate a monitor secret key: .. prompt:: bash $ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' -#. Copy the monitor keyring into a ``ceph.mon.keyring`` file in every monitor's - ``mon data`` directory. For example, to copy it to ``mon.a`` in cluster ``ceph``, - use the following +#. For each monitor, copy the monitor keyring into a ``ceph.mon.keyring`` file + in the monitor's ``mon data`` directory. For example, to copy the monitor + keyring to ``mon.a`` in a cluster called ``ceph``, run the following + command: .. prompt:: bash $ cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring -#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter +#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter: .. prompt:: bash $ ceph auth get-or-create mgr.{$id} mon 'allow profile mgr' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mgr/ceph-{$id}/keyring -#. Generate a secret key for every OSD, where ``{$id}`` is the OSD number +#. 
Generate a secret key for every OSD, where ``{$id}`` is the OSD number: .. prompt:: bash $ ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring -#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter +#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter: .. prompt:: bash $ ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring -#. Enable ``cephx`` authentication by setting the following options in the - ``[global]`` section of your `Ceph configuration`_ file +#. Enable CephX authentication by setting the following options in the + ``[global]`` section of your `Ceph configuration`_ file: .. code-block:: ini @@ -109,23 +112,23 @@ auth_service_required = cephx auth_client_required = cephx - -#. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details. +#. Start or restart the Ceph cluster. For details, see `Operating a Cluster`_. For details on bootstrapping a monitor manually, see `Manual Deployment`_. -Disabling Cephx +Disabling CephX --------------- -The following procedure describes how to disable Cephx. If your cluster -environment is relatively safe, you can offset the computation expense of -running authentication. **We do not recommend it.** However, it may be easier -during setup and/or troubleshooting to temporarily disable authentication. +The following procedure describes how to disable CephX. If your cluster +environment is safe, you might want to disable CephX in order to offset the +computational expense of running authentication. **We do not recommend doing +so.** However, setup and troubleshooting might be easier if authentication is +temporarily disabled and subsequently re-enabled. -#. Disable ``cephx`` authentication by setting the following options in the - ``[global]`` section of your `Ceph configuration`_ file +#. Disable CephX authentication by setting the following options in the + ``[global]`` section of your `Ceph configuration`_ file: .. code-block:: ini @@ -133,8 +136,7 @@ auth_service_required = none auth_client_required = none - -#. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details. +#. Start or restart the Ceph cluster. For details, see `Operating a Cluster`_. Configuration Settings @@ -146,8 +148,9 @@ ``auth_cluster_required`` -:Description: If enabled, the Ceph Storage Cluster daemons (i.e., ``ceph-mon``, - ``ceph-osd``, ``ceph-mds`` and ``ceph-mgr``) must authenticate with +:Description: If this configuration setting is enabled, the Ceph Storage + Cluster daemons (that is, ``ceph-mon``, ``ceph-osd``, + ``ceph-mds``, and ``ceph-mgr``) are required to authenticate with each other. Valid settings are ``cephx`` or ``none``. :Type: String @@ -157,9 +160,9 @@ ``auth_service_required`` -:Description: If enabled, the Ceph Storage Cluster daemons require Ceph Clients - to authenticate with the Ceph Storage Cluster in order to access - Ceph services. Valid settings are ``cephx`` or ``none``. +:Description: If this configuration setting is enabled, then Ceph clients can + access Ceph services only if those clients authenticate with the + Ceph Storage Cluster. Valid settings are ``cephx`` or ``none``. :Type: String :Required: No @@ -168,9 +171,11 @@ ``auth_client_required`` -:Description: If enabled, the Ceph Client requires the Ceph Storage Cluster to - authenticate with the Ceph Client. Valid settings are ``cephx`` - or ``none``. 
+:Description: If this configuration setting is enabled, then communication + between the Ceph client and Ceph Storage Cluster can be + established only if the Ceph Storage Cluster authenticates + against the Ceph client. Valid settings are ``cephx`` or + ``none``. :Type: String :Required: No @@ -182,30 +187,108 @@ Keys ---- -When you run Ceph with authentication enabled, ``ceph`` administrative commands -and Ceph Clients require authentication keys to access the Ceph Storage Cluster. - -The most common way to provide these keys to the ``ceph`` administrative -commands and clients is to include a Ceph keyring under the ``/etc/ceph`` -directory. For Octopus and later releases using ``cephadm``, the filename -is usually ``ceph.client.admin.keyring`` (or ``$cluster.client.admin.keyring``). -If you include the keyring under the ``/etc/ceph`` directory, you don't need to -specify a ``keyring`` entry in your Ceph configuration file. - -We recommend copying the Ceph Storage Cluster's keyring file to nodes where you -will run administrative commands, because it contains the ``client.admin`` key. +When Ceph is run with authentication enabled, ``ceph`` administrative commands +and Ceph clients can access the Ceph Storage Cluster only if they use +authentication keys. + +The most common way to make these keys available to ``ceph`` administrative +commands and Ceph clients is to include a Ceph keyring under the ``/etc/ceph`` +directory. For Octopus and later releases that use ``cephadm``, the filename is +usually ``ceph.client.admin.keyring``. If the keyring is included in the +``/etc/ceph`` directory, then it is unnecessary to specify a ``keyring`` entry +in the Ceph configuration file. + +Because the Ceph Storage Cluster's keyring file contains the ``client.admin`` +key, we recommend copying the keyring file to nodes from which you run +administrative commands. -To perform this step manually, execute the following: +To perform this step manually, run the following command: .. prompt:: bash $ sudo scp {user}@{ceph-cluster-host}:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring -.. tip:: Ensure the ``ceph.keyring`` file has appropriate permissions set - (e.g., ``chmod 644``) on your client machine. +.. tip:: Make sure that the ``ceph.keyring`` file has appropriate permissions + (for example, ``chmod 644``) set on your client machine. + +You can specify the key itself by using the ``key`` setting in the Ceph +configuration file (this approach is not recommended), or instead specify a +path to a keyfile by using the ``keyfile`` setting in the Ceph configuration +file. + +``keyring`` + +:Description: The path to the keyring file. +:Type: String +:Required: No +:Default: ``/etc/ceph/$cluster.$name.keyring,/etc/ceph/$cluster.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin`` + + +``keyfile`` + +:Description: The path to a keyfile (that is, a file containing only the key). +:Type: String +:Required: No +:Default: None + + +``key`` + +:Description: The key (that is, the text string of the key itself). We do not + recommend that you use this setting unless you know what you're + doing. +:Type: String +:Required: No +:Default: None + + +Daemon Keyrings +--------------- + +Administrative users or deployment tools (for example, ``cephadm``) generate +daemon keyrings in the same way that they generate user keyrings. By default, +Ceph stores the keyring of a daemon inside that daemon's data directory. 
The +default keyring locations and the capabilities that are necessary for the +daemon to function are shown below. + +``ceph-mon`` + +:Location: ``$mon_data/keyring`` +:Capabilities: ``mon 'allow *'`` + +``ceph-osd`` + +:Location: ``$osd_data/keyring`` +:Capabilities: ``mgr 'allow profile osd' mon 'allow profile osd' osd 'allow *'`` + +``ceph-mds`` + +:Location: ``$mds_data/keyring`` +:Capabilities: ``mds 'allow' mgr 'allow profile mds' mon 'allow profile mds' osd 'allow rwx'`` + +``ceph-mgr`` -You may specify the key itself in the Ceph configuration file using the ``key`` -setting (not recommended), or a path to a keyfile using the ``keyfile`` setting. +:Location: ``$mgr_data/keyring`` +:Capabilities: ``mon 'allow profile mgr' mds 'allow *' osd 'allow *'`` + +``radosgw`` + +:Location: ``$rgw_data/keyring`` +:Capabilities: ``mon 'allow rwx' osd 'allow rwx'`` + + +.. note:: The monitor keyring (that is, ``mon.``) contains a key but no + capabilities, and this keyring is not part of the cluster ``auth`` database. + +The daemon's data-directory locations default to directories of the form:: + + /var/lib/ceph/$type/$cluster-$id + +For example, ``osd.12`` would have the following data directory:: + + /var/lib/ceph/osd/ceph-12 + +It is possible to override these locations, but it is not recommended. ``keyring`` @@ -286,16 +369,66 @@ Signatures ---------- -Ceph performs a signature check that provides some limited protection -against messages being tampered with in flight (e.g., by a "man in the -middle" attack). - -Like other parts of Ceph authentication, Ceph provides fine-grained control so -you can enable/disable signatures for service messages between clients and -Ceph, and so you can enable/disable signatures for messages between Ceph daemons. +Ceph performs a signature check that provides some limited protection against +messages being tampered with in flight (for example, by a "man in the middle" +attack). + +As with other parts of Ceph authentication, signatures admit of fine-grained +control. You can enable or disable signatures for service messages between +clients and Ceph, and for messages between Ceph daemons. + +Note that even when signatures are enabled data is not encrypted in flight. -Note that even with signatures enabled data is not encrypted in -flight. +``cephx_require_signatures`` + +:Description: If this configuration setting is set to ``true``, Ceph requires + signatures on all message traffic between the Ceph client and the + Ceph Storage Cluster, and between daemons within the Ceph Storage + Cluster. + +.. note:: + **ANTIQUATED NOTE:** + + Neither Ceph Argonaut nor Linux kernel versions prior to 3.19 + support signatures; if one of these clients is in use, ``cephx_require_signatures`` + can be disabled in order to allow the client to connect. + + +:Type: Boolean +:Required: No +:Default: ``false`` + + +``cephx_cluster_require_signatures`` + +:Description: If this configuration setting is set to ``true``, Ceph requires + signatures on all message traffic between Ceph daemons within the + Ceph Storage Cluster. + +:Type: Boolean +:Required: No +:Default: ``false`` + + +``cephx_service_require_signatures`` + +:Description: If this configuration setting is set to ``true``, Ceph requires + signatures on all message traffic between Ceph clients and the + Ceph Storage Cluster. 
+ +:Type: Boolean +:Required: No +:Default: ``false`` + + +``cephx_sign_messages`` + +:Description: If this configuration setting is set to ``true``, and if the Ceph + version supports message signing, then Ceph will sign all + messages so that they are more difficult to spoof. + +:Type: Boolean +:Default: ``true`` ``cephx_require_signatures`` @@ -346,9 +479,9 @@ ``auth_service_ticket_ttl`` -:Description: When the Ceph Storage Cluster sends a Ceph Client a ticket for - authentication, the Ceph Storage Cluster assigns the ticket a - time to live. +:Description: When the Ceph Storage Cluster sends a ticket for authentication + to a Ceph client, the Ceph Storage Cluster assigns that ticket a + Time To Live (TTL). :Type: Double :Default: ``60*60`` diff -Nru ceph-16.2.11+ds/doc/rados/configuration/bluestore-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/bluestore-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/bluestore-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/bluestore-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,84 +1,95 @@ -========================== -BlueStore Config Reference -========================== +================================== + BlueStore Configuration Reference +================================== Devices ======= -BlueStore manages either one, two, or (in certain cases) three storage -devices. - -In the simplest case, BlueStore consumes a single (primary) storage device. -The storage device is normally used as a whole, occupying the full device that -is managed directly by BlueStore. This *primary device* is normally identified -by a ``block`` symlink in the data directory. - -The data directory is a ``tmpfs`` mount which gets populated (at boot time, or -when ``ceph-volume`` activates it) with all the common OSD files that hold -information about the OSD, like: its identifier, which cluster it belongs to, -and its private keyring. - -It is also possible to deploy BlueStore across one or two additional devices: - -* A *write-ahead log (WAL) device* (identified as ``block.wal`` in the data directory) can be - used for BlueStore's internal journal or write-ahead log. It is only useful - to use a WAL device if the device is faster than the primary device (e.g., - when it is on an SSD and the primary device is an HDD). +BlueStore manages either one, two, or in certain cases three storage devices. +These *devices* are "devices" in the Linux/Unix sense. This means that they are +assets listed under ``/dev`` or ``/devices``. Each of these devices may be an +entire storage drive, or a partition of a storage drive, or a logical volume. +BlueStore does not create or mount a conventional file system on devices that +it uses; BlueStore reads and writes to the devices directly in a "raw" fashion. + +In the simplest case, BlueStore consumes all of a single storage device. This +device is known as the *primary device*. The primary device is identified by +the ``block`` symlink in the data directory. + +The data directory is a ``tmpfs`` mount. When this data directory is booted or +activated by ``ceph-volume``, it is populated with metadata files and links +that hold information about the OSD: for example, the OSD's identifier, the +name of the cluster that the OSD belongs to, and the OSD's private keyring. 
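As a quick illustration, the primary device behind an existing BlueStore OSD can be identified by following the ``block`` symlink in its data directory (this sketch assumes an OSD with the id ``0`` at the default data-directory location; adjust the path to match your deployment):

.. prompt:: bash #

   ls -l /var/lib/ceph/osd/ceph-0/
   readlink -f /var/lib/ceph/osd/ceph-0/block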
+ +In more complicated cases, BlueStore is deployed across one or two additional +devices: + +* A *write-ahead log (WAL) device* (identified as ``block.wal`` in the data + directory) can be used to separate out BlueStore's internal journal or + write-ahead log. Using a WAL device is advantageous only if the WAL device + is faster than the primary device (for example, if the WAL device is an SSD + and the primary device is an HDD). * A *DB device* (identified as ``block.db`` in the data directory) can be used - for storing BlueStore's internal metadata. BlueStore (or rather, the - embedded RocksDB) will put as much metadata as it can on the DB device to - improve performance. If the DB device fills up, metadata will spill back - onto the primary device (where it would have been otherwise). Again, it is - only helpful to provision a DB device if it is faster than the primary - device. - -If there is only a small amount of fast storage available (e.g., less -than a gigabyte), we recommend using it as a WAL device. If there is -more, provisioning a DB device makes more sense. The BlueStore -journal will always be placed on the fastest device available, so -using a DB device will provide the same benefit that the WAL device -would while *also* allowing additional metadata to be stored there (if -it will fit). This means that if a DB device is specified but an explicit -WAL device is not, the WAL will be implicitly colocated with the DB on the faster -device. + to store BlueStore's internal metadata. BlueStore (or more precisely, the + embedded RocksDB) will put as much metadata as it can on the DB device in + order to improve performance. If the DB device becomes full, metadata will + spill back onto the primary device (where it would have been located in the + absence of the DB device). Again, it is advantageous to provision a DB device + only if it is faster than the primary device. + +If there is only a small amount of fast storage available (for example, less +than a gigabyte), we recommend using the available space as a WAL device. But +if more fast storage is available, it makes more sense to provision a DB +device. Because the BlueStore journal is always placed on the fastest device +available, using a DB device provides the same benefit that using a WAL device +would, while *also* allowing additional metadata to be stored off the primary +device (provided that it fits). DB devices make this possible because whenever +a DB device is specified but an explicit WAL device is not, the WAL will be +implicitly colocated with the DB on the faster device. -A single-device (colocated) BlueStore OSD can be provisioned with: +To provision a single-device (colocated) BlueStore OSD, run the following +command: .. prompt:: bash $ ceph-volume lvm prepare --bluestore --data -To specify a WAL device and/or DB device: +To specify a WAL device or DB device, run the following command: .. prompt:: bash $ ceph-volume lvm prepare --bluestore --data --block.wal --block.db -.. note:: ``--data`` can be a Logical Volume using *vg/lv* notation. Other - devices can be existing logical volumes or GPT partitions. +.. note:: The option ``--data`` can take as its argument any of the the + following devices: logical volumes specified using *vg/lv* notation, + existing logical volumes, and GPT partitions. 
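For instance, a minimal sketch of the second form shown above, assuming pre-created logical volumes (the volume-group and logical-volume names used here are placeholders, not defaults):

.. prompt:: bash $

   ceph-volume lvm prepare --bluestore --data ceph-vg/block-lv --block.wal ceph-wal-vg/wal-lv --block.db ceph-db-vg/db-lv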
+ + Provisioning strategies ----------------------- -Although there are multiple ways to deploy a BlueStore OSD (unlike Filestore -which had just one), there are two common arrangements that should help clarify -the deployment strategy: + +BlueStore differs from Filestore in that there are several ways to deploy a +BlueStore OSD. However, the overall deployment strategy for BlueStore can be +clarified by examining just these two common arrangements: .. _bluestore-single-type-device-config: **block (data) only** ^^^^^^^^^^^^^^^^^^^^^ -If all devices are the same type, for example all rotational drives, and -there are no fast devices to use for metadata, it makes sense to specify the -block device only and to not separate ``block.db`` or ``block.wal``. The -:ref:`ceph-volume-lvm` command for a single ``/dev/sda`` device looks like: +If all devices are of the same type (for example, they are all HDDs), and if +there are no fast devices available for the storage of metadata, then it makes +sense to specify the block device only and to leave ``block.db`` and +``block.wal`` unseparated. The :ref:`ceph-volume-lvm` command for a single +``/dev/sda`` device is as follows: .. prompt:: bash $ ceph-volume lvm create --bluestore --data /dev/sda -If logical volumes have already been created for each device, (a single LV -using 100% of the device), then the :ref:`ceph-volume-lvm` call for an LV named -``ceph-vg/block-lv`` would look like: +If the devices to be used for a BlueStore OSD are pre-created logical volumes, +then the :ref:`ceph-volume-lvm` call for a logical volume named +``ceph-vg/block-lv`` is as follows: .. prompt:: bash $ @@ -88,15 +99,18 @@ **block and block.db** ^^^^^^^^^^^^^^^^^^^^^^ -If you have a mix of fast and slow devices (SSD / NVMe and rotational), -it is recommended to place ``block.db`` on the faster device while ``block`` -(data) lives on the slower (spinning drive). -You must create these volume groups and logical volumes manually as -the ``ceph-volume`` tool is currently not able to do so automatically. - -For the below example, let us assume four rotational (``sda``, ``sdb``, ``sdc``, and ``sdd``) -and one (fast) solid state drive (``sdx``). First create the volume groups: +If you have a mix of fast and slow devices (for example, SSD or HDD), then we +recommend placing ``block.db`` on the faster device while ``block`` (that is, +the data) is stored on the slower device (that is, the rotational drive). + +You must create these volume groups and logical volumes manually, as the +``ceph-volume`` tool is currently unable to create them automatically. + +The following procedure illustrates the manual creation of volume groups and +logical volumes. For this example, we shall assume four rotational drives +(``sda``, ``sdb``, ``sdc``, and ``sdd``) and one (fast) SSD (``sdx``). First, +to create the volume groups, run the following commands: .. prompt:: bash $ @@ -105,7 +119,7 @@ vgcreate ceph-block-2 /dev/sdc vgcreate ceph-block-3 /dev/sdd -Now create the logical volumes for ``block``: +Next, to create the logical volumes for ``block``, run the following commands: .. prompt:: bash $ @@ -114,8 +128,9 @@ lvcreate -l 100%FREE -n block-2 ceph-block-2 lvcreate -l 100%FREE -n block-3 ceph-block-3 -We are creating 4 OSDs for the four slow spinning devices, so assuming a 200GB -SSD in ``/dev/sdx`` we will create 4 logical volumes, each of 50GB: +Because there are four HDDs, there will be four OSDs. 
Supposing that there is a +200GB SSD in ``/dev/sdx``, we can create four 50GB logical volumes by running +the following commands: .. prompt:: bash $ @@ -125,7 +140,7 @@ lvcreate -L 50GB -n db-2 ceph-db-0 lvcreate -L 50GB -n db-3 ceph-db-0 -Finally, create the 4 OSDs with ``ceph-volume``: +Finally, to create the four OSDs, run the following commands: .. prompt:: bash $ @@ -134,149 +149,153 @@ ceph-volume lvm create --bluestore --data ceph-block-2/block-2 --block.db ceph-db-0/db-2 ceph-volume lvm create --bluestore --data ceph-block-3/block-3 --block.db ceph-db-0/db-3 -These operations should end up creating four OSDs, with ``block`` on the slower -rotational drives with a 50 GB logical volume (DB) for each on the solid state -drive. +After this procedure is finished, there should be four OSDs, ``block`` should +be on the four HDDs, and each HDD should have a 50GB logical volume +(specifically, a DB device) on the shared SSD. Sizing ====== -When using a :ref:`mixed spinning and solid drive setup -` it is important to make a large enough -``block.db`` logical volume for BlueStore. Generally, ``block.db`` should have -*as large as possible* logical volumes. - -The general recommendation is to have ``block.db`` size in between 1% to 4% -of ``block`` size. For RGW workloads, it is recommended that the ``block.db`` -size isn't smaller than 4% of ``block``, because RGW heavily uses it to store -metadata (omap keys). For example, if the ``block`` size is 1TB, then ``block.db`` shouldn't -be less than 40GB. For RBD workloads, 1% to 2% of ``block`` size is usually enough. - -In older releases, internal level sizes mean that the DB can fully utilize only -specific partition / LV sizes that correspond to sums of L0, L0+L1, L1+L2, -etc. sizes, which with default settings means roughly 3 GB, 30 GB, 300 GB, and -so forth. Most deployments will not substantially benefit from sizing to -accommodate L3 and higher, though DB compaction can be facilitated by doubling -these figures to 6GB, 60GB, and 600GB. - -Improvements in releases beginning with Nautilus 14.2.12 and Octopus 15.2.6 -enable better utilization of arbitrary DB device sizes, and the Pacific -release brings experimental dynamic level support. Users of older releases may -thus wish to plan ahead by provisioning larger DB devices today so that their -benefits may be realized with future upgrades. - -When *not* using a mix of fast and slow devices, it isn't required to create -separate logical volumes for ``block.db`` (or ``block.wal``). BlueStore will -automatically colocate these within the space of ``block``. - +When using a :ref:`mixed spinning-and-solid-drive setup +`, it is important to make a large enough +``block.db`` logical volume for BlueStore. The logical volumes associated with +``block.db`` should have logical volumes that are *as large as possible*. + +It is generally recommended that the size of ``block.db`` be somewhere between +1% and 4% of the size of ``block``. For RGW workloads, it is recommended that +the ``block.db`` be at least 4% of the ``block`` size, because RGW makes heavy +use of ``block.db`` to store metadata (in particular, omap keys). For example, +if the ``block`` size is 1TB, then ``block.db`` should have a size of at least +40GB. For RBD workloads, however, ``block.db`` usually needs no more than 1% to +2% of the ``block`` size. 
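As a concrete sketch of this guideline (the device and volume-group names are placeholders): for an RGW-oriented OSD whose ``block`` device is a 4TB HDD, the 4% guideline calls for a ``block.db`` of at least 160GB, which could be provisioned on a fast device as follows:

.. prompt:: bash $

   vgcreate ceph-db-0 /dev/nvme0n1
   lvcreate -L 160G -n db-0 ceph-db-0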
+ +In older releases, internal level sizes are such that the DB can fully utilize +only those specific partition / logical volume sizes that correspond to sums of +L0, L0+L1, L1+L2, and so on--that is, given default settings, sizes of roughly +3GB, 30GB, 300GB, and so on. Most deployments do not substantially benefit from +sizing that accommodates L3 and higher, though DB compaction can be facilitated +by doubling these figures to 6GB, 60GB, and 600GB. + +Improvements in Nautilus 14.2.12, Octopus 15.2.6, and subsequent releases allow +for better utilization of arbitrarily-sized DB devices. Moreover, the Pacific +release brings experimental dynamic-level support. Because of these advances, +users of older releases might want to plan ahead by provisioning larger DB +devices today so that the benefits of scale can be realized when upgrades are +made in the future. + +When *not* using a mix of fast and slow devices, there is no requirement to +create separate logical volumes for ``block.db`` or ``block.wal``. BlueStore +will automatically colocate these devices within the space of ``block``. Automatic Cache Sizing ====================== -BlueStore can be configured to automatically resize its caches when TCMalloc -is configured as the memory allocator and the ``bluestore_cache_autotune`` -setting is enabled. This option is currently enabled by default. BlueStore -will attempt to keep OSD heap memory usage under a designated target size via -the ``osd_memory_target`` configuration option. This is a best effort -algorithm and caches will not shrink smaller than the amount specified by -``osd_memory_cache_min``. Cache ratios will be chosen based on a hierarchy -of priorities. If priority information is not available, the -``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio`` options are -used as fallbacks. +BlueStore can be configured to automatically resize its caches, provided that +certain conditions are met: TCMalloc must be configured as the memory allocator +and the ``bluestore_cache_autotune`` configuration option must be enabled (note +that it is currently enabled by default). When automatic cache sizing is in +effect, BlueStore attempts to keep OSD heap-memory usage under a certain target +size (as determined by ``osd_memory_target``). This approach makes use of a +best-effort algorithm and caches do not shrink smaller than the size defined by +the value of ``osd_memory_cache_min``. Cache ratios are selected in accordance +with a hierarchy of priorities. But if priority information is not available, +the values specified in the ``bluestore_cache_meta_ratio`` and +``bluestore_cache_kv_ratio`` options are used as fallback cache ratios. + Manual Cache Sizing =================== -The amount of memory consumed by each OSD for BlueStore caches is -determined by the ``bluestore_cache_size`` configuration option. If -that config option is not set (i.e., remains at 0), there is a -different default value that is used depending on whether an HDD or -SSD is used for the primary device (set by the -``bluestore_cache_size_ssd`` and ``bluestore_cache_size_hdd`` config -options). - -BlueStore and the rest of the Ceph OSD daemon do the best they can -to work within this memory budget. Note that on top of the configured -cache size, there is also memory consumed by the OSD itself, and -some additional utilization due to memory fragmentation and other -allocator overhead. 
+The amount of memory consumed by each OSD to be used for its BlueStore cache is +determined by the ``bluestore_cache_size`` configuration option. If that option +has not been specified (that is, if it remains at 0), then Ceph uses a +different configuration option to determine the default memory budget: +``bluestore_cache_size_hdd`` if the primary device is an HDD, or +``bluestore_cache_size_ssd`` if the primary device is an SSD. + +BlueStore and the rest of the Ceph OSD daemon make every effort to work within +this memory budget. Note that in addition to the configured cache size, there +is also memory consumed by the OSD itself. There is additional utilization due +to memory fragmentation and other allocator overhead. -The configured cache memory budget can be used in a few different ways: +The configured cache-memory budget can be used to store the following types of +things: -* Key/Value metadata (i.e., RocksDB's internal cache) +* Key/Value metadata (that is, RocksDB's internal cache) * BlueStore metadata -* BlueStore data (i.e., recently read or written object data) +* BlueStore data (that is, recently read or recently written object data) -Cache memory usage is governed by the following options: -``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio``. -The fraction of the cache devoted to data -is governed by the effective bluestore cache size (depending on -``bluestore_cache_size[_ssd|_hdd]`` settings and the device class of the primary -device) as well as the meta and kv ratios. -The data fraction can be calculated by -`` * (1 - bluestore_cache_meta_ratio - bluestore_cache_kv_ratio)`` +Cache memory usage is governed by the configuration options +``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio``. The fraction +of the cache that is reserved for data is governed by both the effective +BlueStore cache size (which depends on the relevant +``bluestore_cache_size[_ssd|_hdd]`` option and the device class of the primary +device) and the "meta" and "kv" ratios. This data fraction can be calculated +with the following formula: `` * (1 - +bluestore_cache_meta_ratio - bluestore_cache_kv_ratio)``. Checksums ========= -BlueStore checksums all metadata and data written to disk. Metadata -checksumming is handled by RocksDB and uses `crc32c`. Data -checksumming is done by BlueStore and can make use of `crc32c`, -`xxhash32`, or `xxhash64`. The default is `crc32c` and should be -suitable for most purposes. - -Full data checksumming does increase the amount of metadata that -BlueStore must store and manage. When possible, e.g., when clients -hint that data is written and read sequentially, BlueStore will -checksum larger blocks, but in many cases it must store a checksum -value (usually 4 bytes) for every 4 kilobyte block of data. - -It is possible to use a smaller checksum value by truncating the -checksum to two or one byte, reducing the metadata overhead. The -trade-off is that the probability that a random error will not be -detected is higher with a smaller checksum, going from about one in -four billion with a 32-bit (4 byte) checksum to one in 65,536 for a -16-bit (2 byte) checksum or one in 256 for an 8-bit (1 byte) checksum. -The smaller checksum values can be used by selecting `crc32c_16` or -`crc32c_8` as the checksum algorithm. +BlueStore checksums all metadata and all data written to disk. Metadata +checksumming is handled by RocksDB and uses the `crc32c` algorithm. 
By +contrast, data checksumming is handled by BlueStore and can use either +`crc32c`, `xxhash32`, or `xxhash64`. Nonetheless, `crc32c` is the default +checksum algorithm and it is suitable for most purposes. + +Full data checksumming increases the amount of metadata that BlueStore must +store and manage. Whenever possible (for example, when clients hint that data +is written and read sequentially), BlueStore will checksum larger blocks. In +many cases, however, it must store a checksum value (usually 4 bytes) for every +4 KB block of data. + +It is possible to obtain a smaller checksum value by truncating the checksum to +one or two bytes and reducing the metadata overhead. A drawback of this +approach is that it increases the probability of a random error going +undetected: about one in four billion given a 32-bit (4 byte) checksum, 1 in +65,536 given a 16-bit (2 byte) checksum, and 1 in 256 given an 8-bit (1 byte) +checksum. To use the smaller checksum values, select `crc32c_16` or `crc32c_8` +as the checksum algorithm. -The *checksum algorithm* can be set either via a per-pool -``csum_type`` property or the global config option. For example: +The *checksum algorithm* can be specified either via a per-pool ``csum_type`` +configuration option or via the global configuration option. For example: .. prompt:: bash $ ceph osd pool set csum_type + Inline Compression ================== -BlueStore supports inline compression using `snappy`, `zlib`, or -`lz4`. Please note that the `lz4` compression plugin is not -distributed in the official release. - -Whether data in BlueStore is compressed is determined by a combination -of the *compression mode* and any hints associated with a write -operation. The modes are: +BlueStore supports inline compression using `snappy`, `zlib`, `lz4`, or `zstd`. + +Whether data in BlueStore is compressed is determined by two factors: (1) the +*compression mode* and (2) any client hints associated with a write operation. +The compression modes are as follows: * **none**: Never compress data. * **passive**: Do not compress data unless the write operation has a *compressible* hint set. -* **aggressive**: Compress data unless the write operation has an +* **aggressive**: Do compress data unless the write operation has an *incompressible* hint set. * **force**: Try to compress data no matter what. -For more information about the *compressible* and *incompressible* IO -hints, see :c:func:`rados_set_alloc_hint`. +For more information about the *compressible* and *incompressible* I/O hints, +see :c:func:`rados_set_alloc_hint`. -Note that regardless of the mode, if the size of the data chunk is not -reduced sufficiently it will not be used and the original -(uncompressed) data will be stored. For example, if the ``bluestore -compression required ratio`` is set to ``.7`` then the compressed data -must be 70% of the size of the original (or smaller). - -The *compression mode*, *compression algorithm*, *compression required -ratio*, *min blob size*, and *max blob size* can be set either via a -per-pool property or a global config option. Pool properties can be -set with: +Note that data in Bluestore will be compressed only if the data chunk will be +sufficiently reduced in size (as determined by the ``bluestore compression +required ratio`` setting). No matter which compression modes have been used, if +the data chunk is too big, then it will be discarded and the original +(uncompressed) data will be stored instead. 
For example, if ``bluestore +compression required ratio`` is set to ``.7``, then data compression will take +place only if the size of the compressed data is no more than 70% of the size +of the original data. + +The *compression mode*, *compression algorithm*, *compression required ratio*, +*min blob size*, and *max blob size* settings can be specified either via a +per-pool property or via a global config option. To specify pool properties, +run the following commands: .. prompt:: bash $ @@ -291,192 +310,202 @@ RocksDB Sharding ================ -Internally BlueStore uses multiple types of key-value data, -stored in RocksDB. Each data type in BlueStore is assigned a -unique prefix. Until Pacific all key-value data was stored in -single RocksDB column family: 'default'. Since Pacific, -BlueStore can divide this data into multiple RocksDB column -families. When keys have similar access frequency, modification -frequency and lifetime, BlueStore benefits from better caching -and more precise compaction. This improves performance, and also -requires less disk space during compaction, since each column -family is smaller and can compact independent of others. - -OSDs deployed in Pacific or later use RocksDB sharding by default. -If Ceph is upgraded to Pacific from a previous version, sharding is off. +BlueStore maintains several types of internal key-value data, all of which are +stored in RocksDB. Each data type in BlueStore is assigned a unique prefix. +Prior to the Pacific release, all key-value data was stored in a single RocksDB +column family: 'default'. In Pacific and later releases, however, BlueStore can +divide key-value data into several RocksDB column families. BlueStore achieves +better caching and more precise compaction when keys are similar: specifically, +when keys have similar access frequency, similar modification frequency, and a +similar lifetime. Under such conditions, performance is improved and less disk +space is required during compaction (because each column family is smaller and +is able to compact independently of the others). + +OSDs deployed in Pacific or later releases use RocksDB sharding by default. +However, if Ceph has been upgraded to Pacific or a later version from a +previous version, sharding is disabled on any OSDs that were created before +Pacific. -To enable sharding and apply the Pacific defaults, stop an OSD and run +To enable sharding and apply the Pacific defaults to a specific OSD, stop the +OSD and run the following command: .. prompt:: bash # - ceph-bluestore-tool \ + ceph-bluestore-tool \ --path \ --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \ reshard -Throttling -========== - SPDK Usage -================== +========== -If you want to use the SPDK driver for NVMe devices, you must prepare your system. -Refer to `SPDK document`__ for more details. +To use the SPDK driver for NVMe devices, you must first prepare your system. +See `SPDK document`__. .. __: http://www.spdk.io/doc/getting_started.html#getting_started_examples -SPDK offers a script to configure the device automatically. Users can run the -script as root: +SPDK offers a script that will configure the device automatically. Run this +script with root permissions: .. prompt:: bash $ sudo src/spdk/scripts/setup.sh -You will need to specify the subject NVMe device's device selector with -the "spdk:" prefix for ``bluestore_block_path``. +You will need to specify the subject NVMe device's device selector with the +"spdk:" prefix for ``bluestore_block_path``. 
-For example, you can find the device selector of an Intel PCIe SSD with: +In the following example, you first find the device selector of an Intel NVMe +SSD by running the following command: .. prompt:: bash $ - lspci -mm -n -D -d 8086:0953 - -The device selector always has the form of ``DDDD:BB:DD.FF`` or ``DDDD.BB.DD.FF``. + lspci -mm -n -d -d 8086:0953 -and then set:: +The form of the device selector is either ``DDDD:BB:DD.FF`` or +``DDDD.BB.DD.FF``. - bluestore_block_path = "spdk:trtype:PCIe traddr:0000:01:00.0" +Next, supposing that ``0000:01:00.0`` is the device selector found in the +output of the ``lspci`` command, you can specify the device selector by running +the following command:: -Where ``0000:01:00.0`` is the device selector found in the output of ``lspci`` -command above. + bluestore_block_path = "spdk:trtype:pcie traddr:0000:01:00.0" -You may also specify a remote NVMeoF target over the TCP transport as in the +You may also specify a remote NVMeoF target over the TCP transport, as in the following example:: - bluestore_block_path = "spdk:trtype:TCP traddr:10.67.110.197 trsvcid:4420 subnqn:nqn.2019-02.io.spdk:cnode1" + bluestore_block_path = "spdk:trtype:tcp traddr:10.67.110.197 trsvcid:4420 subnqn:nqn.2019-02.io.spdk:cnode1" -To run multiple SPDK instances per node, you must specify the -amount of dpdk memory in MB that each instance will use, to make sure each -instance uses its own DPDK memory. +To run multiple SPDK instances per node, you must make sure each instance uses +its own DPDK memory by specifying for each instance the amount of DPDK memory +(in MB) that the instance will use. -In most cases, a single device can be used for data, DB, and WAL. We describe +In most cases, a single device can be used for data, DB, and WAL. We describe this strategy as *colocating* these components. Be sure to enter the below -settings to ensure that all IOs are issued through SPDK.:: +settings to ensure that all I/Os are issued through SPDK:: bluestore_block_db_path = "" bluestore_block_db_size = 0 bluestore_block_wal_path = "" bluestore_block_wal_size = 0 -Otherwise, the current implementation will populate the SPDK map files with -kernel file system symbols and will use the kernel driver to issue DB/WAL IO. +If these settings are not entered, then the current implementation will +populate the SPDK map files with kernel file system symbols and will use the +kernel driver to issue DB/WAL I/Os. Minimum Allocation Size -======================== +======================= -There is a configured minimum amount of storage that BlueStore will allocate on -an OSD. In practice, this is the least amount of capacity that a RADOS object -can consume. The value of `bluestore_min_alloc_size` is derived from the -value of `bluestore_min_alloc_size_hdd` or `bluestore_min_alloc_size_ssd` -depending on the OSD's ``rotational`` attribute. This means that when an OSD -is created on an HDD, BlueStore will be initialized with the current value -of `bluestore_min_alloc_size_hdd`, and SSD OSDs (including NVMe devices) -with the value of `bluestore_min_alloc_size_ssd`. - -Through the Mimic release, the default values were 64KB and 16KB for rotational -(HDD) and non-rotational (SSD) media respectively. Octopus changed the default -for SSD (non-rotational) media to 4KB, and Pacific changed the default for HDD -(rotational) media to 4KB as well. +There is a configured minimum amount of storage that BlueStore allocates on an +underlying storage device. 
In practice, this is the least amount of capacity +that even a tiny RADOS object can consume on each OSD's primary device. The +configuration option in question-- ``bluestore_min_alloc_size`` --derives +its value from the value of either ``bluestore_min_alloc_size_hdd`` or +``bluestore_min_alloc_size_ssd``, depending on the OSD's ``rotational`` +attribute. Thus if an OSD is created on an HDD, BlueStore is initialized with +the current value of ``bluestore_min_alloc_size_hdd``; but with SSD OSDs +(including NVMe devices), Bluestore is initialized with the current value of +``bluestore_min_alloc_size_ssd``. + +In Mimic and earlier releases, the default values were 64KB for rotational +media (HDD) and 16KB for non-rotational media (SSD). The Octopus release +changed the the default value for non-rotational media (SSD) to 4KB, and the +Pacific release changed the default value for rotational media (HDD) to 4KB. -These changes were driven by space amplification experienced by Ceph RADOS -GateWay (RGW) deployments that host large numbers of small files +These changes were driven by space amplification that was experienced by Ceph +RADOS GateWay (RGW) deployments that hosted large numbers of small files (S3/Swift objects). -For example, when an RGW client stores a 1KB S3 object, it is written to a -single RADOS object. With the default `min_alloc_size` value, 4KB of -underlying drive space is allocated. This means that roughly -(4KB - 1KB) == 3KB is allocated but never used, which corresponds to 300% -overhead or 25% efficiency. Similarly, a 5KB user object will be stored -as one 4KB and one 1KB RADOS object, again stranding 4KB of device capcity, -though in this case the overhead is a much smaller percentage. Think of this -in terms of the remainder from a modulus operation. The overhead *percentage* -thus decreases rapidly as user object size increases. - -An easily missed additional subtlety is that this -takes place for *each* replica. So when using the default three copies of -data (3R), a 1KB S3 object actually consumes roughly 9KB of storage device -capacity. If erasure coding (EC) is used instead of replication, the -amplification may be even higher: for a ``k=4,m=2`` pool, our 1KB S3 object -will allocate (6 * 4KB) = 24KB of device capacity. +For example, when an RGW client stores a 1 KB S3 object, that object is written +to a single RADOS object. In accordance with the default +``min_alloc_size`` value, 4 KB of underlying drive space is allocated. +This means that roughly 3 KB (that is, 4 KB minus 1 KB) is allocated but never +used: this corresponds to 300% overhead or 25% efficiency. Similarly, a 5 KB +user object will be stored as two RADOS objects, a 4 KB RADOS object and a 1 KB +RADOS object, with the result that 4KB of device capacity is stranded. In this +case, however, the overhead percentage is much smaller. Think of this in terms +of the remainder from a modulus operation. The overhead *percentage* thus +decreases rapidly as object size increases. + +There is an additional subtlety that is easily missed: the amplification +phenomenon just described takes place for *each* replica. For example, when +using the default of three copies of data (3R), a 1 KB S3 object actually +strands roughly 9 KB of storage device capacity. If erasure coding (EC) is used +instead of replication, the amplification might be even higher: for a ``k=4, +m=2`` pool, our 1 KB S3 object allocates 24 KB (that is, 4 KB multiplied by 6) +of device capacity. 
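The arithmetic behind these figures can be reproduced with a rough back-of-the-envelope sketch (it holds only for small objects, where each replica or erasure-code shard allocates at least one full allocation unit; the ``alloc`` helper below is ours, not a Ceph tool):

.. prompt:: bash $

   # allocated bytes ~= copies * ceil(object_size / min_alloc_size) * min_alloc_size
   alloc() { echo $(( $3 * ( ($1 + $2 - 1) / $2 ) * $2 )); }
   alloc 1024 4096 3   # 1 KB object, 4 KB min_alloc_size, 3 replicas: 12288 bytes (~9 KB stranded)
   alloc 1024 4096 6   # same object in a k=4,m=2 EC pool (6 shards): 24576 bytes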
When an RGW bucket pool contains many relatively large user objects, the effect -of this phenomenon is often negligible, but should be considered for deployments -that expect a signficiant fraction of relatively small objects. +of this phenomenon is often negligible. However, with deployments that can +expect a significant fraction of relatively small user objects, the effect +should be taken into consideration. + +The 4KB default value aligns well with conventional HDD and SSD devices. +However, certain novel coarse-IU (Indirection Unit) QLC SSDs perform and wear +best when ``bluestore_min_alloc_size_ssd`` is specified at OSD creation +to match the device's IU: this might be 8KB, 16KB, or even 64KB. These novel +storage drives can achieve read performance that is competitive with that of +conventional TLC SSDs and write performance that is faster than that of HDDs, +with higher density and lower cost than TLC SSDs. + +Note that when creating OSDs on these novel devices, one must be careful to +apply the non-default value only to appropriate devices, and not to +conventional HDD and SSD devices. Error can be avoided through careful ordering +of OSD creation, with custom OSD device classes, and especially by the use of +central configuration *masks*. + +In Quincy and later releases, you can use the +``bluestore_use_optimal_io_size_for_min_alloc_size`` option to allow +automatic discovery of the correct value as each OSD is created. Note that the +use of ``bcache``, ``OpenCAS``, ``dmcrypt``, ``ATA over Ethernet``, `iSCSI`, or +other device-layering and abstraction technologies might confound the +determination of correct values. Moreover, OSDs deployed on top of VMware +storage have sometimes been found to report a ``rotational`` attribute that +does not match the underlying hardware. + +We suggest inspecting such OSDs at startup via logs and admin sockets in order +to ensure that their behavior is correct. Be aware that this kind of inspection +might not work as expected with older kernels. To check for this issue, +examine the presence and value of ``/sys/block//queue/optimal_io_size``. -The 4KB default value aligns well with conventional HDD and SSD devices. Some -new coarse-IU (Indirection Unit) QLC SSDs however perform and wear best -when `bluestore_min_alloc_size_ssd` -is set at OSD creation to match the device's IU:. 8KB, 16KB, or even 64KB. -These novel storage drives allow one to achieve read performance competitive -with conventional TLC SSDs and write performance faster than HDDs, with -high density and lower cost than TLC SSDs. - -Note that when creating OSDs on these devices, one must carefully apply the -non-default value only to appropriate devices, and not to conventional SSD and -HDD devices. This may be done through careful ordering of OSD creation, custom -OSD device classes, and especially by the use of central configuration _masks_. - -Quincy and later releases add -the `bluestore_use_optimal_io_size_for_min_alloc_size` -option that enables automatic discovery of the appropriate value as each OSD is -created. Note that the use of ``bcache``, ``OpenCAS``, ``dmcrypt``, -``ATA over Ethernet``, `iSCSI`, or other device layering / abstraction -technologies may confound the determination of appropriate values. OSDs -deployed on top of VMware storage have been reported to also -sometimes report a ``rotational`` attribute that does not match the underlying -hardware. - -We suggest inspecting such OSDs at startup via logs and admin sockets to ensure that -behavior is appropriate. 
Note that this also may not work as desired with -older kernels. You can check for this by examining the presence and value -of ``/sys/block//queue/optimal_io_size``. +.. note:: When running Reef or a later Ceph release, the ``min_alloc_size`` + baked into each OSD is conveniently reported by ``ceph osd metadata``. -You may also inspect a given OSD: +To inspect a specific OSD, run the following command: .. prompt:: bash # - ceph osd metadata osd.1701 | grep rotational + ceph osd metadata osd.1701 | egrep rotational\|alloc -This space amplification may manifest as an unusually high ratio of raw to -stored data reported by ``ceph df``. ``ceph osd df`` may also report -anomalously high ``%USE`` / ``VAR`` values when -compared to other, ostensibly identical OSDs. A pool using OSDs with -mismatched ``min_alloc_size`` values may experience unexpected balancer -behavior as well. - -Note that this BlueStore attribute takes effect *only* at OSD creation; if -changed later, a given OSD's behavior will not change unless / until it is -destroyed and redeployed with the appropriate option value(s). Upgrading -to a later Ceph release will *not* change the value used by OSDs deployed -under older releases or with other settings. +This space amplification might manifest as an unusually high ratio of raw to +stored data as reported by ``ceph df``. There might also be ``%USE`` / ``VAR`` +values reported by ``ceph osd df`` that are unusually high in comparison to +other, ostensibly identical, OSDs. Finally, there might be unexpected balancer +behavior in pools that use OSDs that have mismatched ``min_alloc_size`` values. + +This BlueStore attribute takes effect *only* at OSD creation; if the attribute +is changed later, a specific OSD's behavior will not change unless and until +the OSD is destroyed and redeployed with the appropriate option value(s). +Upgrading to a later Ceph release will *not* change the value used by OSDs that +were deployed under older releases or with other settings. -DSA (Data Streaming Accelerator Usage) +DSA (Data Streaming Accelerator) Usage ====================================== -If you want to use the DML library to drive DSA device for offloading -read/write operations on Persist memory in Bluestore. You need to install -`DML`_ and `idxd-config`_ library in your machine with SPR (Sapphire Rapids) CPU. +If you want to use the DML library to drive the DSA device for offloading +read/write operations on persistent memory (PMEM) in BlueStore, you need to +install `DML`_ and the `idxd-config`_ library. This will work only on machines +that have a SPR (Sapphire Rapids) CPU. -.. _DML: https://github.com/intel/DML +.. _dml: https://github.com/intel/dml .. _idxd-config: https://github.com/intel/idxd-config -After installing the DML software, you need to configure the shared -work queues (WQs) with the following WQ configuration example via accel-config tool: +After installing the DML software, configure the shared work queues (WQs) with +reference to the following WQ configuration example: .. 
prompt:: bash $ - accel-config config-wq --group-id=1 --mode=shared --wq-size=16 --threshold=15 --type=user --name="MyApp1" --priority=10 --block-on-fault=1 dsa0/wq0.1 + accel-config config-wq --group-id=1 --mode=shared --wq-size=16 --threshold=15 --type=user --name="myapp1" --priority=10 --block-on-fault=1 dsa0/wq0.1 accel-config config-engine dsa0/engine0.1 --group-id=1 accel-config enable-device dsa0 accel-config enable-wq dsa0/wq0.1 diff -Nru ceph-16.2.11+ds/doc/rados/configuration/ceph-conf.rst ceph-16.2.15+ds/doc/rados/configuration/ceph-conf.rst --- ceph-16.2.11+ds/doc/rados/configuration/ceph-conf.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/ceph-conf.rst 2024-02-26 19:21:09.000000000 +0000 @@ -549,33 +549,35 @@ Runtime Changes =============== -In most cases, Ceph allows you to make changes to the configuration of -a daemon at runtime. This capability is quite useful for -increasing/decreasing logging output, enabling/disabling debug -settings, and even for runtime optimization. +In most cases, Ceph permits changes to the configuration of a daemon at +runtime. This can be used for increasing or decreasing the amount of logging +output, for enabling or disabling debug settings, and for runtime optimization. -Generally speaking, configuration options can be updated in the usual -way via the ``ceph config set`` command. For example, do enable the debug log level on a specific OSD: +Configuration options can be updated via the ``ceph config set`` command. For +example, to enable the debug log level on a specific OSD, run a command of this form: .. prompt:: bash $ ceph config set osd.123 debug_ms 20 -Note that if the same option is also customized in a local -configuration file, the monitor setting will be ignored (it has a -lower priority than the local config file). +.. note:: If an option has been customized in a local configuration file, the + `central config + `_ + setting will be ignored (it has a lower priority than the local + configuration file). Override values --------------- -You can also temporarily set an option using the `tell` or `daemon` -interfaces on the Ceph CLI. These *override* values are ephemeral in -that they only affect the running process and are discarded/lost if -the daemon or process restarts. +Options can be set temporarily by using the `tell` or `daemon` interfaces on +the Ceph CLI. These *override* values are ephemeral, which means that they +affect only the current instance of the daemon and revert to persistently +configured values when the daemon restarts. Override values can be set in two ways: -#. From any host, we can send a message to a daemon over the network with: +#. From any host, send a message to a daemon with a command of the following + form: .. prompt:: bash $ @@ -587,16 +589,16 @@ ceph tell osd.123 config set debug_osd 20 - The `tell` command can also accept a wildcard for the daemon - identifier. For example, to adjust the debug level on all OSD - daemons: + The ``tell`` command can also accept a wildcard as the daemon identifier. + For example, to adjust the debug level on all OSD daemons, run a command of + this form: .. prompt:: bash $ ceph tell osd.* config set debug_osd 20 -#. From the host the process is running on, we can connect directly to - the process via a socket in ``/var/run/ceph`` with: +#. On the host where the daemon is running, connect to the daemon via a socket + in ``/var/run/ceph`` by running a command of this form: .. 
prompt:: bash $ @@ -608,8 +610,8 @@ ceph daemon osd.4 config set debug_osd 20 -Note that in the ``ceph config show`` command output these temporary -values will be shown with a source of ``override``. +.. note:: In the output of the ``ceph config show`` command, these temporary + values are shown with a source of ``override``. Viewing runtime settings diff -Nru ceph-16.2.11+ds/doc/rados/configuration/common.rst ceph-16.2.15+ds/doc/rados/configuration/common.rst --- ceph-16.2.11+ds/doc/rados/configuration/common.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/common.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,4 +1,3 @@ - .. _ceph-conf-common-settings: Common Settings @@ -7,30 +6,33 @@ The `Hardware Recommendations`_ section provides some hardware guidelines for configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph Node` to run multiple daemons. For example, a single node with multiple drives -may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a -particular type of process. For example, some nodes may run ``ceph-osd`` -daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may -run ``ceph-mon`` daemons. - -Each node has a name identified by the ``host`` setting. Monitors also specify -a network address and port (i.e., domain name or IP address) identified by the -``addr`` setting. A basic configuration file will typically specify only -minimal settings for each instance of monitor daemons. For example: +ususally runs one ``ceph-osd`` for each drive. Ideally, each node will be +assigned to a particular type of process. For example, some nodes might run +``ceph-osd`` daemons, other nodes might run ``ceph-mds`` daemons, and still +other nodes might run ``ceph-mon`` daemons. + +Each node has a name. The name of a node can be found in its ``host`` setting. +Monitors also specify a network address and port (that is, a domain name or IP +address) that can be found in the ``addr`` setting. A basic configuration file +typically specifies only minimal settings for each instance of monitor daemons. +For example: -.. code-block:: ini - [global] - mon_initial_members = ceph1 - mon_host = 10.0.0.1 -.. important:: The ``host`` setting is the short name of the node (i.e., not - an fqdn). It is **NOT** an IP address either. Enter ``hostname -s`` on - the command line to retrieve the name of the node. Do not use ``host`` - settings for anything other than initial monitors unless you are deploying - Ceph manually. You **MUST NOT** specify ``host`` under individual daemons - when using deployment tools like ``chef`` or ``cephadm``, as those tools - will enter the appropriate values for you in the cluster map. +.. code-block:: ini + + [global] + mon_initial_members = ceph1 + mon_host = 10.0.0.1 + +.. important:: The ``host`` setting's value is the short name of the node. It + is not an FQDN. It is **NOT** an IP address. To retrieve the name of the + node, enter ``hostname -s`` on the command line. Unless you are deploying + Ceph manually, do not use ``host`` settings for anything other than initial + monitor setup. **DO NOT** specify the ``host`` setting under individual + daemons when using deployment tools like ``chef`` or ``cephadm``. Such tools + are designed to enter the appropriate values for you in the cluster map. .. _ceph-network-config: @@ -38,34 +40,35 @@ Networks ======== -See the `Network Configuration Reference`_ for a detailed discussion about -configuring a network for use with Ceph. 
+For more about configuring a network for use with Ceph, see the `Network +Configuration Reference`_ . Monitors ======== -Production Ceph clusters typically provision a minimum of three :term:`Ceph Monitor` -daemons to ensure availability should a monitor instance crash. A minimum of -three ensures that the Paxos algorithm can determine which version -of the :term:`Ceph Cluster Map` is the most recent from a majority of Ceph +Ceph production clusters typically provision at least three :term:`Ceph +Monitor` daemons to ensure availability in the event of a monitor instance +crash. A minimum of three :term:`Ceph Monitor` daemons ensures that the Paxos +algorithm is able to determine which version of the :term:`Ceph Cluster Map` is +the most recent. It makes this determination by consulting a majority of Ceph Monitors in the quorum. .. note:: You may deploy Ceph with a single monitor, but if the instance fails, - the lack of other monitors may interrupt data service availability. + the lack of other monitors might interrupt data-service availability. -Ceph Monitors normally listen on port ``3300`` for the new v2 protocol, and ``6789`` for the old v1 protocol. +Ceph Monitors normally listen on port ``3300`` for the new v2 protocol, and on +port ``6789`` for the old v1 protocol. -By default, Ceph expects to store monitor data under the -following path:: +By default, Ceph expects to store monitor data on the following path:: - /var/lib/ceph/mon/$cluster-$id + /var/lib/ceph/mon/$cluster-$id -You or a deployment tool (e.g., ``cephadm``) must create the corresponding -directory. With metavariables fully expressed and a cluster named "ceph", the -foregoing directory would evaluate to:: +You or a deployment tool (for example, ``cephadm``) must create the +corresponding directory. With metavariables fully expressed and a cluster named +"ceph", the path specified in the above example evaluates to:: - /var/lib/ceph/mon/ceph-a + /var/lib/ceph/mon/ceph-a For additional details, see the `Monitor Config Reference`_. @@ -74,22 +77,22 @@ .. _ceph-osd-config: - Authentication ============== .. versionadded:: Bobtail 0.56 -For Bobtail (v 0.56) and beyond, you should expressly enable or disable -authentication in the ``[global]`` section of your Ceph configuration file. +Authentication is explicitly enabled or disabled in the ``[global]`` section of +the Ceph configuration file, as shown here: .. code-block:: ini - auth_cluster_required = cephx - auth_service_required = cephx - auth_client_required = cephx + auth_cluster_required = cephx + auth_service_required = cephx + auth_client_required = cephx -Additionally, you should enable message signing. See `Cephx Config Reference`_ for details. +In addition, you should enable message signing. For details, see `Cephx Config +Reference`_. .. _Cephx Config Reference: ../auth-config-ref @@ -100,65 +103,68 @@ OSDs ==== -Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one node -has one OSD daemon running a Filestore on one storage device. The BlueStore back -end is now default, but when using Filestore you specify a journal size. For example: +When Ceph production clusters deploy :term:`Ceph OSD Daemons`, the typical +arrangement is that one node has one OSD daemon running Filestore on one +storage device. BlueStore is now the default back end, but when using Filestore +you must specify a journal size. For example: .. 
code-block:: ini - [osd] - osd_journal_size = 10000 + [osd] + osd_journal_size = 10000 - [osd.0] - host = {hostname} #manual deployments only. + [osd.0] + host = {hostname} #manual deployments only. -By default, Ceph expects to store a Ceph OSD Daemon's data at the -following path:: +By default, Ceph expects to store a Ceph OSD Daemon's data on the following +path:: - /var/lib/ceph/osd/$cluster-$id + /var/lib/ceph/osd/$cluster-$id -You or a deployment tool (e.g., ``cephadm``) must create the corresponding -directory. With metavariables fully expressed and a cluster named "ceph", this -example would evaluate to:: +You or a deployment tool (for example, ``cephadm``) must create the +corresponding directory. With metavariables fully expressed and a cluster named +"ceph", the path specified in the above example evaluates to:: - /var/lib/ceph/osd/ceph-0 + /var/lib/ceph/osd/ceph-0 -You may override this path using the ``osd_data`` setting. We recommend not -changing the default location. Create the default directory on your OSD host. +You can override this path using the ``osd_data`` setting. We recommend that +you do not change the default location. To create the default directory on your +OSD host, run the following commands: .. prompt:: bash $ - ssh {osd-host} - sudo mkdir /var/lib/ceph/osd/ceph-{osd-number} + ssh {osd-host} + sudo mkdir /var/lib/ceph/osd/ceph-{osd-number} -The ``osd_data`` path ideally leads to a mount point with a device that is -separate from the device that contains the operating system and -daemons. If an OSD is to use a device other than the OS device, prepare it for -use with Ceph, and mount it to the directory you just created +The ``osd_data`` path ought to lead to a mount point that has mounted on it a +device that is distinct from the device that contains the operating system and +the daemons. To use a device distinct from the device that contains the +operating system and the daemons, prepare it for use with Ceph and mount it on +the directory you just created by running the following commands: .. prompt:: bash $ - ssh {new-osd-host} - sudo mkfs -t {fstype} /dev/{disk} - sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number} - -We recommend using the ``xfs`` file system when running -:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and are no -longer tested.) + ssh {new-osd-host} + sudo mkfs -t {fstype} /dev/{disk} + sudo mount -o user_xattr /dev/{disk} /var/lib/ceph/osd/ceph-{osd-number} + +We recommend using the ``xfs`` file system when running :command:`mkfs`. (The +``btrfs`` and ``ext4`` file systems are not recommended and are no longer +tested.) -See the `OSD Config Reference`_ for additional configuration details. +For additional configuration details, see `OSD Config Reference`_. Heartbeats ========== During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons -and report their findings to the Ceph Monitor. You do not have to provide any -settings. However, if you have network latency issues, you may wish to modify -the settings. +and report their findings to the Ceph Monitor. This process does not require +you to provide any settings. However, if you have network latency issues, you +might want to modify the default settings. -See `Configuring Monitor/OSD Interaction`_ for additional details. +For additional details, see `Configuring Monitor/OSD Interaction`_. .. 
_ceph-logging-and-debugging: @@ -166,9 +172,9 @@ Logs / Debugging ================ -Sometimes you may encounter issues with Ceph that require -modifying logging output and using Ceph's debugging. See `Debugging and -Logging`_ for details on log rotation. +You might sometimes encounter issues with Ceph that require you to use Ceph's +logging and debugging features. For details on log rotation, see `Debugging and +Logging`_. .. _Debugging and Logging: ../../troubleshooting/log-and-debug @@ -186,32 +192,29 @@ Running Multiple Clusters (DEPRECATED) ====================================== -Each Ceph cluster has an internal name that is used as part of configuration -and log file names as well as directory and mountpoint names. This name -defaults to "ceph". Previous releases of Ceph allowed one to specify a custom -name instead, for example "ceph2". This was intended to faciliate running -multiple logical clusters on the same physical hardware, but in practice this -was rarely exploited and should no longer be attempted. Prior documentation -could also be misinterpreted as requiring unique cluster names in order to -use ``rbd-mirror``. +Each Ceph cluster has an internal name. This internal name is used as part of +configuration, and as part of "log file" names as well as part of directory +names and as part of mountpoint names. This name defaults to "ceph". Previous +releases of Ceph allowed one to specify a custom name instead, for example +"ceph2". This option was intended to facilitate the running of multiple logical +clusters on the same physical hardware, but in practice it was rarely +exploited. Custom cluster names should no longer be attempted. Old +documentation might lead readers to wrongly think that unique cluster names are +required to use ``rbd-mirror``. They are not required. Custom cluster names are now considered deprecated and the ability to deploy -them has already been removed from some tools, though existing custom name -deployments continue to operate. The ability to run and manage clusters with -custom names may be progressively removed by future Ceph releases, so it is -strongly recommended to deploy all new clusters with the default name "ceph". - -Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option. This -option is present purely for backward compatibility and need not be accomodated -by new tools and deployments. +them has already been removed from some tools, although existing custom-name +deployments continue to operate. The ability to run and manage clusters with +custom names might be progressively removed by future Ceph releases, so **it is +strongly recommended to deploy all new clusters with the default name "ceph"**. + +Some Ceph CLI commands accept a ``--cluster`` (cluster name) option. This +option is present only for the sake of backward compatibility. New tools and +deployments cannot be relied upon to accommodate this option. -If you do need to allow multiple clusters to exist on the same host, please use +If you need to allow multiple clusters to exist on the same host, use :ref:`cephadm`, which uses containers to fully isolate each cluster. - - - - .. _Hardware Recommendations: ../../../start/hardware-recommendations .. _Network Configuration Reference: ../network-config-ref .. 
_OSD Config Reference: ../osd-config-ref diff -Nru ceph-16.2.11+ds/doc/rados/configuration/filestore-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/filestore-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/filestore-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/filestore-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,8 +2,14 @@ Filestore Config Reference ============================ -The Filestore back end is no longer the default when creating new OSDs, -though Filestore OSDs are still supported. +.. note:: Since the Luminous release of Ceph, Filestore has not been Ceph's + default storage back end. Since the Luminous release of Ceph, BlueStore has + been Ceph's default storage back end. However, Filestore OSDs are still + supported. See :ref:`OSD Back Ends + `. See :ref:`BlueStore Migration + ` for instructions explaining how to + replace an existing Filestore back end with a BlueStore back end. + ``filestore debug omap check`` @@ -18,26 +24,31 @@ Extended Attributes =================== -Extended Attributes (XATTRs) are important for Filestore OSDs. -Some file systems have limits on the number of bytes that can be stored in XATTRs. -Additionally, in some cases, the file system may not be as fast as an alternative -method of storing XATTRs. The following settings may help improve performance -by using a method of storing XATTRs that is extrinsic to the underlying file system. - -Ceph XATTRs are stored as ``inline xattr``, using the XATTRs provided -by the underlying file system, if it does not impose a size limit. If -there is a size limit (4KB total on ext4, for instance), some Ceph -XATTRs will be stored in a key/value database when either the +Extended Attributes (XATTRs) are important for Filestore OSDs. However, Certain +disadvantages can occur when the underlying file system is used for the storage +of XATTRs: some file systems have limits on the number of bytes that can be +stored in XATTRs, and your file system might in some cases therefore run slower +than would an alternative method of storing XATTRs. For this reason, a method +of storing XATTRs extrinsic to the underlying file system might improve +performance. To implement such an extrinsic method, refer to the following +settings. + +If the underlying file system has no size limit, then Ceph XATTRs are stored as +``inline xattr``, using the XATTRs provided by the file system. But if there is +a size limit (for example, ext4 imposes a limit of 4 KB total), then some Ceph +XATTRs will be stored in a key/value database when the limit is reached. More +precisely, this begins to occur when either the ``filestore_max_inline_xattr_size`` or ``filestore_max_inline_xattrs`` threshold is reached. ``filestore_max_inline_xattr_size`` -:Description: The maximum size of an XATTR stored in the file system (i.e., XFS, - Btrfs, EXT4, etc.) per object. Should not be larger than the - file system can handle. Default value of 0 means to use the value - specific to the underlying file system. +:Description: Defines the maximum size per object of an XATTR that can be + stored in the file system (for example, XFS, Btrfs, ext4). The + specified size should not be larger than the file system can + handle. Using the default value of 0 instructs Filestore to use + the value specific to the file system. 
:Type: Unsigned 32-bit Integer :Required: No :Default: ``0`` @@ -45,8 +56,9 @@ ``filestore_max_inline_xattr_size_xfs`` -:Description: The maximum size of an XATTR stored in the XFS file system. - Only used if ``filestore_max_inline_xattr_size`` == 0. +:Description: Defines the maximum size of an XATTR that can be stored in the + XFS file system. This setting is used only if + ``filestore_max_inline_xattr_size`` == 0. :Type: Unsigned 32-bit Integer :Required: No :Default: ``65536`` @@ -54,8 +66,9 @@ ``filestore_max_inline_xattr_size_btrfs`` -:Description: The maximum size of an XATTR stored in the Btrfs file system. - Only used if ``filestore_max_inline_xattr_size`` == 0. +:Description: Defines the maximum size of an XATTR that can be stored in the + Btrfs file system. This setting is used only if + ``filestore_max_inline_xattr_size`` == 0. :Type: Unsigned 32-bit Integer :Required: No :Default: ``2048`` @@ -63,8 +76,8 @@ ``filestore_max_inline_xattr_size_other`` -:Description: The maximum size of an XATTR stored in other file systems. - Only used if ``filestore_max_inline_xattr_size`` == 0. +:Description: Defines the maximum size of an XATTR that can be stored in other file systems. + This setting is used only if ``filestore_max_inline_xattr_size`` == 0. :Type: Unsigned 32-bit Integer :Required: No :Default: ``512`` @@ -72,9 +85,8 @@ ``filestore_max_inline_xattrs`` -:Description: The maximum number of XATTRs stored in the file system per object. - Default value of 0 means to use the value specific to the - underlying file system. +:Description: Defines the maximum number of XATTRs per object that can be stored in the file system. + Using the default value of 0 instructs Filestore to use the value specific to the file system. :Type: 32-bit Integer :Required: No :Default: ``0`` @@ -82,8 +94,8 @@ ``filestore_max_inline_xattrs_xfs`` -:Description: The maximum number of XATTRs stored in the XFS file system per object. - Only used if ``filestore_max_inline_xattrs`` == 0. +:Description: Defines the maximum number of XATTRs per object that can be stored in the XFS file system. + This setting is used only if ``filestore_max_inline_xattrs`` == 0. :Type: 32-bit Integer :Required: No :Default: ``10`` @@ -91,8 +103,8 @@ ``filestore_max_inline_xattrs_btrfs`` -:Description: The maximum number of XATTRs stored in the Btrfs file system per object. - Only used if ``filestore_max_inline_xattrs`` == 0. +:Description: Defines the maximum number of XATTRs per object that can be stored in the Btrfs file system. + This setting is used only if ``filestore_max_inline_xattrs`` == 0. :Type: 32-bit Integer :Required: No :Default: ``10`` @@ -100,8 +112,8 @@ ``filestore_max_inline_xattrs_other`` -:Description: The maximum number of XATTRs stored in other file systems per object. - Only used if ``filestore_max_inline_xattrs`` == 0. +:Description: Defines the maximum number of XATTRs per object that can be stored in other file systems. + This setting is used only if ``filestore_max_inline_xattrs`` == 0. :Type: 32-bit Integer :Required: No :Default: ``2`` @@ -111,18 +123,19 @@ Synchronization Intervals ========================= -Filestore needs to periodically quiesce writes and synchronize the -file system, which creates a consistent commit point. It can then free journal -entries up to the commit point. Synchronizing more frequently tends to reduce -the time required to perform synchronization, and reduces the amount of data -that needs to remain in the journal. 
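For example, if you decide to pin these inline-XATTR thresholds explicitly rather than rely on the file-system-specific defaults, one way to do so is through the centralized configuration database (the values shown here are illustrative only; the same options can equally be placed in the ``[osd]`` section of ``ceph.conf``):

.. prompt:: bash $

   ceph config set osd filestore_max_inline_xattr_size 65536   # illustrative value
   ceph config set osd filestore_max_inline_xattrs 10          # illustrative value
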
Less frequent synchronization allows the -backing file system to coalesce small writes and metadata updates more -optimally, potentially resulting in more efficient synchronization at the -expense of potentially increasing tail latency. +Filestore must periodically quiesce writes and synchronize the file system. +Each synchronization creates a consistent commit point. When the commit point +is created, Filestore is able to free all journal entries up to that point. +More-frequent synchronization tends to reduce both synchronization time and +the amount of data that needs to remain in the journal. Less-frequent +synchronization allows the backing file system to coalesce small writes and +metadata updates, potentially increasing synchronization +efficiency but also potentially increasing tail latency. + ``filestore_max_sync_interval`` -:Description: The maximum interval in seconds for synchronizing Filestore. +:Description: Defines the maximum interval (in seconds) for synchronizing Filestore. :Type: Double :Required: No :Default: ``5`` @@ -130,7 +143,7 @@ ``filestore_min_sync_interval`` -:Description: The minimum interval in seconds for synchronizing Filestore. +:Description: Defines the minimum interval (in seconds) for synchronizing Filestore. :Type: Double :Required: No :Default: ``.01`` @@ -142,14 +155,14 @@ ======= The Filestore flusher forces data from large writes to be written out using -``sync_file_range`` before the sync in order to (hopefully) reduce the cost of -the eventual sync. In practice, disabling 'filestore_flusher' seems to improve -performance in some cases. +``sync_file_range`` prior to the synchronization. +Ideally, this action reduces the cost of the eventual synchronization. In practice, however, disabling +'filestore_flusher' seems in some cases to improve performance. ``filestore_flusher`` -:Description: Enables the filestore flusher. +:Description: Enables the Filestore flusher. :Type: Boolean :Required: No :Default: ``false`` @@ -158,7 +171,7 @@ ``filestore_flusher_max_fds`` -:Description: Sets the maximum number of file descriptors for the flusher. +:Description: Defines the maximum number of file descriptors for the flusher. :Type: Integer :Required: No :Default: ``512`` @@ -176,7 +189,7 @@ ``filestore_fsync_flushes_journal_data`` -:Description: Flush journal data during file system synchronization. +:Description: Flushes journal data during file-system synchronization. :Type: Boolean :Required: No :Default: ``false`` @@ -187,11 +200,11 @@ Queue ===== -The following settings provide limits on the size of the Filestore queue. +The following settings define limits on the size of the Filestore queue: ``filestore_queue_max_ops`` -:Description: Defines the maximum number of in progress operations the file store accepts before blocking on queuing new operations. +:Description: Defines the maximum number of in-progress operations that Filestore accepts before it blocks the queueing of any new operations. :Type: Integer :Required: No. Minimal impact on performance. :Default: ``50`` @@ -199,23 +212,20 @@ ``filestore_queue_max_bytes`` -:Description: The maximum number of bytes for an operation. +:Description: Defines the maximum number of bytes permitted per operation. :Type: Integer :Required: No :Default: ``100 << 20`` - - .. index:: filestore; timeouts Timeouts ======== - ``filestore_op_threads`` -:Description: The number of file system operation threads that execute in parallel. 
+:Description: Defines the number of file-system operation threads that execute in parallel. :Type: Integer :Required: No :Default: ``2`` @@ -223,7 +233,7 @@ ``filestore_op_thread_timeout`` -:Description: The timeout for a file system operation thread (in seconds). +:Description: Defines the timeout (in seconds) for a file-system operation thread. :Type: Integer :Required: No :Default: ``60`` @@ -231,7 +241,7 @@ ``filestore_op_thread_suicide_timeout`` -:Description: The timeout for a commit operation before cancelling the commit (in seconds). +:Description: Defines the timeout (in seconds) for a commit operation before the commit is cancelled. :Type: Integer :Required: No :Default: ``180`` @@ -245,17 +255,17 @@ ``filestore_btrfs_snap`` -:Description: Enable snapshots for a ``btrfs`` filestore. +:Description: Enables snapshots for a ``btrfs`` Filestore. :Type: Boolean -:Required: No. Only used for ``btrfs``. +:Required: No. Used only for ``btrfs``. :Default: ``true`` ``filestore_btrfs_clone_range`` -:Description: Enable cloning ranges for a ``btrfs`` filestore. +:Description: Enables cloning ranges for a ``btrfs`` Filestore. :Type: Boolean -:Required: No. Only used for ``btrfs``. +:Required: No. Used only for ``btrfs``. :Default: ``true`` @@ -267,7 +277,7 @@ ``filestore_journal_parallel`` -:Description: Enables parallel journaling, default for Btrfs. +:Description: Enables parallel journaling, default for ``btrfs``. :Type: Boolean :Required: No :Default: ``false`` @@ -275,7 +285,7 @@ ``filestore_journal_writeahead`` -:Description: Enables writeahead journaling, default for XFS. +:Description: Enables write-ahead journaling, default for XFS. :Type: Boolean :Required: No :Default: ``false`` @@ -283,7 +293,7 @@ ``filestore_journal_trailing`` -:Description: Deprecated, never use. +:Description: Deprecated. **Never use.** :Type: Boolean :Required: No :Default: ``false`` @@ -295,8 +305,8 @@ ``filestore_merge_threshold`` -:Description: Min number of files in a subdir before merging into parent - NOTE: A negative value means to disable subdir merging +:Description: Defines the minimum number of files permitted in a subdirectory before the subdirectory is merged into its parent directory. + NOTE: A negative value means that subdirectory merging is disabled. :Type: Integer :Required: No :Default: ``-10`` @@ -305,8 +315,8 @@ ``filestore_split_multiple`` :Description: ``(filestore_split_multiple * abs(filestore_merge_threshold) + (rand() % filestore_split_rand_factor)) * 16`` - is the maximum number of files in a subdirectory before - splitting into child directories. + is the maximum number of files permitted in a subdirectory + before the subdirectory is split into child directories. :Type: Integer :Required: No @@ -316,10 +326,10 @@ ``filestore_split_rand_factor`` :Description: A random factor added to the split threshold to avoid - too many (expensive) Filestore splits occurring at once. See - ``filestore_split_multiple`` for details. - This can only be changed offline for an existing OSD, - via the ``ceph-objectstore-tool apply-layout-settings`` command. + too many (expensive) Filestore splits occurring at the same time. + For details, see ``filestore_split_multiple``. + To change this setting for an existing OSD, it is necessary to take the OSD + offline before running the ``ceph-objectstore-tool apply-layout-settings`` command. :Type: Unsigned 32-bit Integer :Required: No @@ -328,7 +338,7 @@ ``filestore_update_to`` -:Description: Limits Filestore auto upgrade to specified version. 
+:Description: Limits automatic upgrades to a specified version of Filestore. Useful in cases in which you want to avoid upgrading to a specific version. :Type: Integer :Required: No :Default: ``1000`` @@ -336,7 +346,7 @@ ``filestore_blackhole`` -:Description: Drop any new transactions on the floor. +:Description: Drops any new transactions on the floor, similar to redirecting to NULL. :Type: Boolean :Required: No :Default: ``false`` @@ -344,7 +354,7 @@ ``filestore_dump_file`` -:Description: File onto which store transaction dumps. +:Description: Defines the file that transaction dumps are stored on. :Type: Boolean :Required: No :Default: ``false`` @@ -352,7 +362,7 @@ ``filestore_kill_at`` -:Description: inject a failure at the n'th opportunity +:Description: Injects a failure at the *n*\th opportunity. :Type: String :Required: No :Default: ``false`` @@ -360,8 +370,7 @@ ``filestore_fail_eio`` -:Description: Fail/Crash on eio. +:Description: Fail/Crash on EIO. :Type: Boolean :Required: No :Default: ``true`` - diff -Nru ceph-16.2.11+ds/doc/rados/configuration/mon-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/mon-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/mon-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/mon-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -16,24 +16,29 @@ Background ========== -Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`, which means a -:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD -Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and +Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`. + +The maintenance by Ceph Monitors of a :term:`Cluster Map` makes it possible for +a :term:`Ceph Client` to determine the location of all Ceph Monitors, Ceph OSD +Daemons, and Ceph Metadata Servers by connecting to one Ceph Monitor and retrieving a current cluster map. Before Ceph Clients can read from or write to -Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor -first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph -Client can compute the location for any object. The ability to compute object -locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a -very important aspect of Ceph's high scalability and performance. See -`Scalability and High Availability`_ for additional details. - -The primary role of the Ceph Monitor is to maintain a master copy of the cluster -map. Ceph Monitors also provide authentication and logging services. Ceph -Monitors write all changes in the monitor services to a single Paxos instance, -and Paxos writes the changes to a key/value store for strong consistency. Ceph -Monitors can query the most recent version of the cluster map during sync -operations. Ceph Monitors leverage the key/value store's snapshots and iterators -(using leveldb) to perform store-wide synchronization. +Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor. +When a Ceph client has a current copy of the cluster map and the CRUSH +algorithm, it can compute the location for any RADOS object within the +cluster. This ability to compute the locations of objects makes it possible for +Ceph Clients to talk directly to Ceph OSD Daemons. 
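A quick way to observe this client-side placement computation is the ``ceph osd map`` command, which reports the placement group and the set of OSDs that CRUSH selects for a named object (the pool name ``rbd`` and the object name ``my-object`` below are purely illustrative):

.. prompt:: bash $

   ceph osd map rbd my-object   # pool and object names are illustrative
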
This direct communication +with Ceph OSD Daemons represents an improvement upon traditional storage +architectures in which clients were required to communicate with a central +component, and that improvement contributes to Ceph's high scalability and +performance. See `Scalability and High Availability`_ for additional details. + +The Ceph Monitor's primary function is to maintain a master copy of the cluster +map. Monitors also provide authentication and logging services. All changes in +the monitor services are written by the Ceph Monitor to a single Paxos +instance, and Paxos writes the changes to a key/value store for strong +consistency. Ceph Monitors are able to query the most recent version of the +cluster map during sync operations, and they use the key/value store's +snapshots and iterators (using leveldb) to perform store-wide synchronization. .. ditaa:: /-------------\ /-------------\ @@ -56,12 +61,6 @@ | cCCC |*---------------------+ \-------------/ - -.. deprecated:: version 0.58 - -In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for -each service and store the map as a file. - .. index:: Ceph Monitor; cluster map Cluster Maps diff -Nru ceph-16.2.11+ds/doc/rados/configuration/mon-lookup-dns.rst ceph-16.2.15+ds/doc/rados/configuration/mon-lookup-dns.rst --- ceph-16.2.11+ds/doc/rados/configuration/mon-lookup-dns.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/mon-lookup-dns.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,15 +2,19 @@ Looking up Monitors through DNS =============================== -Since version 11.0.0 RADOS supports looking up Monitors through DNS. +Since Ceph version 11.0.0 (Kraken), RADOS has supported looking up monitors +through DNS. -This way daemons and clients do not require a *mon host* configuration directive in their ceph.conf configuration file. +The addition of the ability to look up monitors through DNS means that daemons +and clients do not require a *mon host* configuration directive in their +``ceph.conf`` configuration file. + +With a DNS update, clients and daemons can be made aware of changes +in the monitor topology. To be more precise and technical, clients look up the +monitors by using ``DNS SRV TCP`` records. -Using DNS SRV TCP records clients are able to look up the monitors. - -This allows for less configuration on clients and monitors. Using a DNS update clients and daemons can be made aware of changes in the monitor topology. - -By default clients and daemons will look for the TCP service called *ceph-mon* which is configured by the *mon_dns_srv_name* configuration directive. +By default, clients and daemons look for the TCP service called *ceph-mon*, +which is configured by the *mon_dns_srv_name* configuration directive. ``mon dns srv name`` diff -Nru ceph-16.2.11+ds/doc/rados/configuration/ms-ref.rst ceph-16.2.15+ds/doc/rados/configuration/ms-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/ms-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/ms-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -109,17 +109,6 @@ :Default: ``3`` -``ms_async_max_op_threads`` - -:Description: Maximum number of worker threads used by each Async Messenger instance. - Set to lower values when your machine has limited CPU count, and increase - when your CPUs are underutilized (i. e. one or more of CPUs are - constantly on 100% load during I/O operations). 
-:Type: 64-bit Unsigned Integer -:Required: No -:Default: ``5`` - - ``ms_async_send_inline`` :Description: Send messages directly from the thread that generated them instead of @@ -129,5 +118,3 @@ :Type: Boolean :Required: No :Default: ``false`` - - diff -Nru ceph-16.2.11+ds/doc/rados/configuration/msgr2.rst ceph-16.2.15+ds/doc/rados/configuration/msgr2.rst --- ceph-16.2.11+ds/doc/rados/configuration/msgr2.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/msgr2.rst 2024-02-26 19:21:09.000000000 +0000 @@ -92,8 +92,7 @@ .. note:: The ability to bind to multiple ports has paved the way for dual-stack IPv4 and IPv6 support. That said, dual-stack support is - not yet tested as of Nautilus v14.2.0 and likely needs some - additional code changes to work correctly. + not yet supported as of Quincy v17.2.0. Connection modes ---------------- diff -Nru ceph-16.2.11+ds/doc/rados/configuration/osd-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/osd-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/osd-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/osd-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -196,6 +196,8 @@ .. index:: OSD; scrubbing +.. _rados_config_scrubbing: + Scrubbing ========= diff -Nru ceph-16.2.11+ds/doc/rados/configuration/pool-pg-config-ref.rst ceph-16.2.15+ds/doc/rados/configuration/pool-pg-config-ref.rst --- ceph-16.2.11+ds/doc/rados/configuration/pool-pg-config-ref.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/pool-pg-config-ref.rst 2024-02-26 19:21:09.000000000 +0000 @@ -4,13 +4,41 @@ .. index:: pools; configuration -When you create pools and set the number of placement groups (PGs) for each, Ceph -uses default values when you don't specifically override the defaults. **We -recommend** overriding some of the defaults. Specifically, we recommend setting -a pool's replica size and overriding the default number of placement groups. You -can specifically set these values when running `pool`_ commands. You can also -override the defaults by adding new ones in the ``[global]`` section of your -Ceph configuration file. +The number of placement groups that the CRUSH algorithm assigns to each pool is +determined by the values of variables in the centralized configuration database +in the monitor cluster. + +Both containerized deployments of Ceph (deployments made using ``cephadm`` or +Rook) and non-containerized deployments of Ceph rely on the values in the +central configuration database in the monitor cluster to assign placement +groups to pools. + +Example Commands +---------------- + +To see the value of the variable that governs the number of placement groups in a given pool, run a command of the following form: + +.. prompt:: bash + + ceph config get osd osd_pool_default_pg_num + +To set the value of the variable that governs the number of placement groups in a given pool, run a command of the following form: + +.. prompt:: bash + + ceph config set osd osd_pool_default_pg_num + +Manual Tuning +------------- +In some cases, it might be advisable to override some of the defaults. For +example, you might determine that it is wise to set a pool's replica size and +to override the default number of placement groups in the pool. You can set +these values when running `pool`_ commands. + +See Also +-------- + +See :ref:`pg-autoscaler`. .. 
literalinclude:: pool-pg.conf diff -Nru ceph-16.2.11+ds/doc/rados/configuration/storage-devices.rst ceph-16.2.15+ds/doc/rados/configuration/storage-devices.rst --- ceph-16.2.11+ds/doc/rados/configuration/storage-devices.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/configuration/storage-devices.rst 2024-02-26 19:21:09.000000000 +0000 @@ -25,6 +25,7 @@ additional monitoring and providing interfaces to external monitoring and management systems. +.. _rados_config_storage_devices_osd_backends: OSD Back Ends ============= diff -Nru ceph-16.2.11+ds/doc/rados/operations/balancer.rst ceph-16.2.15+ds/doc/rados/operations/balancer.rst --- ceph-16.2.11+ds/doc/rados/operations/balancer.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/balancer.rst 2024-02-26 19:21:09.000000000 +0000 @@ -3,14 +3,15 @@ Balancer ======== -The *balancer* can optimize the placement of PGs across OSDs in -order to achieve a balanced distribution, either automatically or in a -supervised fashion. +The *balancer* can optimize the allocation of placement groups (PGs) across +OSDs in order to achieve a balanced distribution. The balancer can operate +either automatically or in a supervised fashion. + Status ------ -The current status of the balancer can be checked at any time with: +To check the current status of the balancer, run the following command: .. prompt:: bash $ @@ -20,70 +21,78 @@ Automatic balancing ------------------- -The automatic balancing feature is enabled by default in ``upmap`` -mode. Please refer to :ref:`upmap` for more details. The balancer can be -turned off with: +When the balancer is in ``upmap`` mode, the automatic balancing feature is +enabled by default. For more details, see :ref:`upmap`. To disable the +balancer, run the following command: .. prompt:: bash $ ceph balancer off -The balancer mode can be changed to ``crush-compat`` mode, which is -backward compatible with older clients, and will make small changes to -the data distribution over time to ensure that OSDs are equally utilized. +The balancer mode can be changed from ``upmap`` mode to ``crush-compat`` mode. +``crush-compat`` mode is backward compatible with older clients. In +``crush-compat`` mode, the balancer automatically makes small changes to the +data distribution in order to ensure that OSDs are utilized equally. Throttling ---------- -No adjustments will be made to the PG distribution if the cluster is -degraded (e.g., because an OSD has failed and the system has not yet -healed itself). - -When the cluster is healthy, the balancer will throttle its changes -such that the percentage of PGs that are misplaced (i.e., that need to -be moved) is below a threshold of (by default) 5%. The -``target_max_misplaced_ratio`` threshold can be adjusted with: +If the cluster is degraded (that is, if an OSD has failed and the system hasn't +healed itself yet), then the balancer will not make any adjustments to the PG +distribution. + +When the cluster is healthy, the balancer will incrementally move a small +fraction of unbalanced PGs in order to improve distribution. This fraction +will not exceed a certain threshold that defaults to 5%. To adjust this +``target_max_misplaced_ratio`` threshold setting, run the following command: .. prompt:: bash $ ceph config set mgr target_max_misplaced_ratio .07 # 7% -Set the number of seconds to sleep in between runs of the automatic balancer: +The balancer sleeps between runs. 
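To confirm the threshold that is currently in effect, the value can be read back through the same ``ceph config`` machinery (shown here as an illustrative check):

.. prompt:: bash $

   ceph config get mgr target_max_misplaced_ratio
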
To set the number of seconds for this +interval of sleep, run the following command: .. prompt:: bash $ ceph config set mgr mgr/balancer/sleep_interval 60 -Set the time of day to begin automatic balancing in HHMM format: +To set the time of day (in HHMM format) at which automatic balancing begins, +run the following command: .. prompt:: bash $ ceph config set mgr mgr/balancer/begin_time 0000 -Set the time of day to finish automatic balancing in HHMM format: +To set the time of day (in HHMM format) at which automatic balancing ends, run +the following command: .. prompt:: bash $ ceph config set mgr mgr/balancer/end_time 2359 -Restrict automatic balancing to this day of the week or later. -Uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: +Automatic balancing can be restricted to certain days of the week. To restrict +it to a specific day of the week or later (as with crontab, ``0`` is Sunday, +``1`` is Monday, and so on), run the following command: .. prompt:: bash $ ceph config set mgr mgr/balancer/begin_weekday 0 -Restrict automatic balancing to this day of the week or earlier. -Uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: +To restrict automatic balancing to a specific day of the week or earlier +(again, ``0`` is Sunday, ``1`` is Monday, and so on), run the following +command: .. prompt:: bash $ ceph config set mgr mgr/balancer/end_weekday 6 -Pool IDs to which the automatic balancing will be limited. -The default for this is an empty string, meaning all pools will be balanced. -The numeric pool IDs can be gotten with the :command:`ceph osd pool ls detail` command: +Automatic balancing can be restricted to certain pools. By default, the value +of this setting is an empty string, so that all pools are automatically +balanced. To restrict automatic balancing to specific pools, retrieve their +numeric pool IDs (by running the :command:`ceph osd pool ls detail` command), +and then run the following command: .. prompt:: bash $ @@ -93,43 +102,41 @@ Modes ----- -There are currently two supported balancer modes: +There are two supported balancer modes: -#. **crush-compat**. The CRUSH compat mode uses the compat weight-set - feature (introduced in Luminous) to manage an alternative set of - weights for devices in the CRUSH hierarchy. The normal weights - should remain set to the size of the device to reflect the target - amount of data that we want to store on the device. The balancer - then optimizes the weight-set values, adjusting them up or down in - small increments, in order to achieve a distribution that matches - the target distribution as closely as possible. (Because PG - placement is a pseudorandom process, there is a natural amount of - variation in the placement; by optimizing the weights we - counter-act that natural variation.) - - Notably, this mode is *fully backwards compatible* with older - clients: when an OSDMap and CRUSH map is shared with older clients, - we present the optimized weights as the "real" weights. - - The primary restriction of this mode is that the balancer cannot - handle multiple CRUSH hierarchies with different placement rules if - the subtrees of the hierarchy share any OSDs. (This is normally - not the case, and is generally not a recommended configuration - because it is hard to manage the space utilization on the shared - OSDs.) - -#. **upmap**. Starting with Luminous, the OSDMap can store explicit - mappings for individual OSDs as exceptions to the normal CRUSH - placement calculation. 
These `upmap` entries provide fine-grained - control over the PG mapping. This CRUSH mode will optimize the - placement of individual PGs in order to achieve a balanced - distribution. In most cases, this distribution is "perfect," which - an equal number of PGs on each OSD (+/-1 PG, since they might not - divide evenly). +#. **crush-compat**. This mode uses the compat weight-set feature (introduced + in Luminous) to manage an alternative set of weights for devices in the + CRUSH hierarchy. When the balancer is operating in this mode, the normal + weights should remain set to the size of the device in order to reflect the + target amount of data intended to be stored on the device. The balancer will + then optimize the weight-set values, adjusting them up or down in small + increments, in order to achieve a distribution that matches the target + distribution as closely as possible. (Because PG placement is a pseudorandom + process, it is subject to a natural amount of variation; optimizing the + weights serves to counteract that natural variation.) + + Note that this mode is *fully backward compatible* with older clients: when + an OSD Map and CRUSH map are shared with older clients, Ceph presents the + optimized weights as the "real" weights. + + The primary limitation of this mode is that the balancer cannot handle + multiple CRUSH hierarchies with different placement rules if the subtrees of + the hierarchy share any OSDs. (Such sharing of OSDs is not typical and, + because of the difficulty of managing the space utilization on the shared + OSDs, is generally not recommended.) + +#. **upmap**. In Luminous and later releases, the OSDMap can store explicit + mappings for individual OSDs as exceptions to the normal CRUSH placement + calculation. These ``upmap`` entries provide fine-grained control over the + PG mapping. This balancer mode optimizes the placement of individual PGs in + order to achieve a balanced distribution. In most cases, the resulting + distribution is nearly perfect: that is, there is an equal number of PGs on + each OSD (±1 PG, since the total number might not divide evenly). - Note that using upmap requires that all clients be Luminous or newer. + To use``upmap``, all clients must be Luminous or newer. -The default mode is ``upmap``. The mode can be adjusted with: +The default mode is ``upmap``. The mode can be changed to ``crush-compat`` by +running the following command: .. prompt:: bash $ @@ -138,69 +145,77 @@ Supervised optimization ----------------------- -The balancer operation is broken into a few distinct phases: +Supervised use of the balancer can be understood in terms of three distinct +phases: -#. building a *plan* -#. evaluating the quality of the data distribution, either for the current PG distribution, or the PG distribution that would result after executing a *plan* -#. executing the *plan* +#. building a plan +#. evaluating the quality of the data distribution, either for the current PG + distribution or for the PG distribution that would result after executing a + plan +#. executing the plan -To evaluate and score the current distribution: +To evaluate the current distribution, run the following command: .. prompt:: bash $ ceph balancer eval -You can also evaluate the distribution for a single pool with: +To evaluate the distribution for a single pool, run the following command: .. prompt:: bash $ ceph balancer eval -Greater detail for the evaluation can be seen with: +To see the evaluation in greater detail, run the following command: .. 
prompt:: bash $ ceph balancer eval-verbose ... - -The balancer can generate a plan, using the currently configured mode, with: + +To instruct the balancer to generate a plan (using the currently configured +mode), make up a name (any useful identifying string) for the plan, and run the +following command: .. prompt:: bash $ ceph balancer optimize -The name is provided by the user and can be any useful identifying string. The contents of a plan can be seen with: +To see the contents of a plan, run the following command: .. prompt:: bash $ ceph balancer show -All plans can be shown with: +To display all plans, run the following command: .. prompt:: bash $ ceph balancer ls -Old plans can be discarded with: +To discard an old plan, run the following command: .. prompt:: bash $ ceph balancer rm -Currently recorded plans are shown as part of the status command: +To see currently recorded plans, examine the output of the following status +command: .. prompt:: bash $ ceph balancer status -The quality of the distribution that would result after executing a plan can be calculated with: +To evaluate the distribution that would result from executing a specific plan, +run the following command: .. prompt:: bash $ ceph balancer eval -Assuming the plan is expected to improve the distribution (i.e., it has a lower score than the current cluster state), the user can execute that plan with: +If a plan is expected to improve the distribution (that is, the plan's score is +lower than the current cluster state's score), you can execute that plan by +running the following command: .. prompt:: bash $ ceph balancer execute - diff -Nru ceph-16.2.11+ds/doc/rados/operations/bluestore-migration.rst ceph-16.2.15+ds/doc/rados/operations/bluestore-migration.rst --- ceph-16.2.11+ds/doc/rados/operations/bluestore-migration.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/bluestore-migration.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,65 +1,68 @@ +.. _rados_operations_bluestore_migration: + ===================== BlueStore Migration ===================== -Each OSD can run either BlueStore or FileStore, and a single Ceph -cluster can contain a mix of both. Users who have previously deployed -FileStore are likely to want to transition to BlueStore in order to -take advantage of the improved performance and robustness. There are -several strategies for making such a transition. - -An individual OSD cannot be converted in place in isolation, however: -BlueStore and FileStore are simply too different for that to be -practical. "Conversion" will rely either on the cluster's normal -replication and healing support or tools and strategies that copy OSD -content from an old (FileStore) device to a new (BlueStore) one. - - -Deploy new OSDs with BlueStore -============================== - -Any new OSDs (e.g., when the cluster is expanded) can be deployed -using BlueStore. This is the default behavior so no specific change -is needed. - -Similarly, any OSDs that are reprovisioned after replacing a failed drive -can use BlueStore. - -Convert existing OSDs -===================== - -Mark out and replace --------------------- - -The simplest approach is to mark out each device in turn, wait for the -data to replicate across the cluster, reprovision the OSD, and mark -it back in again. It is simple and easy to automate. However, it requires -more data migration than should be necessary, so it is not optimal. +Each OSD must be formatted as either Filestore or BlueStore. 
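To see at a glance which back end each of your OSDs is currently using, the OSD metadata commands that also appear in the replacement procedure below can be run at any time (the OSD ID ``0`` is illustrative):

.. prompt:: bash $

   ceph osd count-metadata osd_objectstore
   ceph osd metadata 0 | grep osd_objectstore   # "0" is an illustrative OSD ID
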
However, a Ceph +cluster can operate with a mixture of both Filestore OSDs and BlueStore OSDs. +Because BlueStore is superior to Filestore in performance and robustness, and +because Filestore is not supported by Ceph releases beginning with Reef, users +deploying Filestore OSDs should transition to BlueStore. There are several +strategies for making the transition to BlueStore. + +BlueStore is so different from Filestore that an individual OSD cannot be +converted in place. Instead, the conversion process must use either (1) the +cluster's normal replication and healing support, or (2) tools and strategies +that copy OSD content from an old (Filestore) device to a new (BlueStore) one. + +Deploying new OSDs with BlueStore +================================= + +Use BlueStore when deploying new OSDs (for example, when the cluster is +expanded). Because this is the default behavior, no specific change is +needed. + +Similarly, use BlueStore for any OSDs that have been reprovisioned after +a failed drive was replaced. + +Converting existing OSDs +======================== + +"Mark-``out``" replacement +-------------------------- + +The simplest approach is to verify that the cluster is healthy and +then follow these steps for each Filestore OSD in succession: mark the OSD +``out``, wait for the data to replicate across the cluster, reprovision the OSD, +mark the OSD back ``in``, and wait for recovery to complete before proceeding +to the next OSD. This approach is easy to automate, but it entails unnecessary +data migration that carries costs in time and SSD wear. -#. Identify a FileStore OSD to replace:: +#. Identify a Filestore OSD to replace:: ID= DEVICE= - You can tell whether a given OSD is FileStore or BlueStore with: + #. Determine whether a given OSD is Filestore or BlueStore: - .. prompt:: bash $ + .. prompt:: bash $ - ceph osd metadata $ID | grep osd_objectstore + ceph osd metadata $ID | grep osd_objectstore - You can get a current count of filestore vs bluestore with: + #. Get a current count of Filestore and BlueStore OSDs: - .. prompt:: bash $ + .. prompt:: bash $ - ceph osd count-metadata osd_objectstore + ceph osd count-metadata osd_objectstore -#. Mark the filestore OSD out: +#. Mark a Filestore OSD ``out``: .. prompt:: bash $ ceph osd out $ID -#. Wait for the data to migrate off the OSD in question: +#. Wait for the data to migrate off this OSD: .. prompt:: bash $ @@ -71,7 +74,9 @@ systemctl kill ceph-osd@$ID -#. Make note of which device this OSD is using: + .. _osd_id_retrieval: + +#. Note which device the OSD is using: .. prompt:: bash $ @@ -83,24 +88,27 @@ umount /var/lib/ceph/osd/ceph-$ID -#. Destroy the OSD data. Be *EXTREMELY CAREFUL* as this will destroy - the contents of the device; be certain the data on the device is - not needed (i.e., that the cluster is healthy) before proceeding: +#. Destroy the OSD's data. Be *EXTREMELY CAREFUL*! These commands will destroy + the contents of the device; you must be certain that the data on the device is + not needed (in other words, that the cluster is healthy) before proceeding: .. prompt:: bash $ ceph-volume lvm zap $DEVICE -#. Tell the cluster the OSD has been destroyed (and a new OSD can be - reprovisioned with the same ID): +#. Tell the cluster that the OSD has been destroyed (and that a new OSD can be + reprovisioned with the same OSD ID): .. prompt:: bash $ ceph osd destroy $ID --yes-i-really-mean-it -#. Reprovision a BlueStore OSD in its place with the same OSD ID. 
- This requires you do identify which device to wipe based on what you saw - mounted above. BE CAREFUL! : +#. Provision a BlueStore OSD in place by using the same OSD ID. This requires + you to identify which device to wipe, and to make certain that you target + the correct and intended device, using the information that was retrieved in + the :ref:`"Note which device the OSD is using" ` step. BE + CAREFUL! Note that you may need to modify these commands when dealing with + hybrid OSDs: .. prompt:: bash $ @@ -108,12 +116,15 @@ #. Repeat. -You can allow the refilling of the replacement OSD to happen -concurrently with the draining of the next OSD, or follow the same -procedure for multiple OSDs in parallel, as long as you ensure the -cluster is fully clean (all data has all replicas) before destroying -any OSDs. Failure to do so will reduce the redundancy of your data -and increase the risk of (or potentially even cause) data loss. +You may opt to (1) have the balancing of the replacement BlueStore OSD take +place concurrently with the draining of the next Filestore OSD, or instead +(2) follow the same procedure for multiple OSDs in parallel. In either case, +however, you must ensure that the cluster is fully clean (in other words, that +all data has all replicas) before destroying any OSDs. If you opt to reprovision +multiple OSDs in parallel, be **very** careful to destroy OSDs only within a +single CRUSH failure domain (for example, ``host`` or ``rack``). Failure to +satisfy this requirement will reduce the redundancy and availability of your +data and increase the risk of data loss (or even guarantee data loss). Advantages: @@ -123,55 +134,53 @@ Disadvantages: -* Data is copied over the network twice: once to some other OSD in the - cluster (to maintain the desired number of replicas), and then again - back to the reprovisioned BlueStore OSD. - - -Whole host replacement ----------------------- - -If you have a spare host in the cluster, or have sufficient free space -to evacuate an entire host in order to use it as a spare, then the -conversion can be done on a host-by-host basis with each stored copy of -the data migrating only once. +* Data is copied over the network twice: once to another OSD in the cluster (to + maintain the specified number of replicas), and again back to the + reprovisioned BlueStore OSD. + +"Whole host" replacement +------------------------ + +If you have a spare host in the cluster, or sufficient free space to evacuate +an entire host for use as a spare, then the conversion can be done on a +host-by-host basis so that each stored copy of the data is migrated only once. + +To use this approach, you need an empty host that has no OSDs provisioned. +There are two ways to do this: either by using a new, empty host that is not +yet part of the cluster, or by offloading data from an existing host that is +already part of the cluster. + +Using a new, empty host +^^^^^^^^^^^^^^^^^^^^^^^ + +Ideally the host will have roughly the same capacity as each of the other hosts +you will be converting. Add the host to the CRUSH hierarchy, but do not attach +it to the root: -First, you need have empty host that has no data. There are two ways to do this: either by starting with a new, empty host that isn't yet part of the cluster, or by offloading data from an existing host that in the cluster. - -Use a new, empty host -^^^^^^^^^^^^^^^^^^^^^ - -Ideally the host should have roughly the -same capacity as other hosts you will be converting (although it -doesn't strictly matter). 
:: - - NEWHOST= - -Add the host to the CRUSH hierarchy, but do not attach it to the root: .. prompt:: bash $ + NEWHOST= ceph osd crush add-bucket $NEWHOST host -Make sure the ceph packages are installed. +Make sure that Ceph packages are installed on the new host. -Use an existing host -^^^^^^^^^^^^^^^^^^^^ +Using an existing host +^^^^^^^^^^^^^^^^^^^^^^ -If you would like to use an existing host -that is already part of the cluster, and there is sufficient free -space on that host so that all of its data can be migrated off, -then you can instead do:: +If you would like to use an existing host that is already part of the cluster, +and if there is sufficient free space on that host so that all of its data can +be migrated off to other cluster hosts, you can do the following (instead of +using a new, empty host): - OLDHOST= +.. prompt:: bash $ -.. prompt:: bash $ - + OLDHOST= ceph osd crush unlink $OLDHOST default where "default" is the immediate ancestor in the CRUSH map. (For smaller clusters with unmodified configurations this will normally -be "default", but it might also be a rack name.) You should now +be "default", but it might instead be a rack name.) You should now see the host at the top of the OSD tree output with no parent: .. prompt:: bash $ @@ -192,15 +201,18 @@ 2 ssd 1.00000 osd.2 up 1.00000 1.00000 ... -If everything looks good, jump directly to the "Wait for data -migration to complete" step below and proceed from there to clean up -the old OSDs. +If everything looks good, jump directly to the :ref:`"Wait for the data +migration to complete" ` step below and proceed +from there to clean up the old OSDs. Migration process ^^^^^^^^^^^^^^^^^ -If you're using a new host, start at step #1. For an existing host, -jump to step #5 below. +If you're using a new host, start at :ref:`the first step +`. If you're using an existing host, +jump to :ref:`this step `. + +.. _bluestore_migration_process_first_step: #. Provision new BlueStore OSDs for all devices: @@ -208,14 +220,14 @@ ceph-volume lvm create --bluestore --data /dev/$DEVICE -#. Verify OSDs join the cluster with: +#. Verify that the new OSDs have joined the cluster: .. prompt:: bash $ ceph osd tree You should see the new host ``$NEWHOST`` with all of the OSDs beneath - it, but the host should *not* be nested beneath any other node in + it, but the host should *not* be nested beneath any other node in the hierarchy (like ``root default``). For example, if ``newhost`` is the empty host, you might see something like:: @@ -244,13 +256,16 @@ ceph osd crush swap-bucket $NEWHOST $OLDHOST - At this point all data on ``$OLDHOST`` will start migrating to OSDs - on ``$NEWHOST``. If there is a difference in the total capacity of - the old and new hosts you may also see some data migrate to or from - other nodes in the cluster, but as long as the hosts are similarly - sized this will be a relatively small amount of data. + At this point all data on ``$OLDHOST`` will begin migrating to the OSDs on + ``$NEWHOST``. If there is a difference between the total capacity of the + old hosts and the total capacity of the new hosts, you may also see some + data migrate to or from other nodes in the cluster. Provided that the hosts + are similarly sized, however, this will be a relatively small amount of + data. -#. Wait for data migration to complete: + .. _bluestore_data_migration_step: + +#. Wait for the data migration to complete: .. prompt:: bash $ @@ -261,8 +276,8 @@ .. 
prompt:: bash $ ssh $OLDHOST - systemctl kill ceph-osd.target - umount /var/lib/ceph/osd/ceph-* + systemctl kill ceph-osd.target + umount /var/lib/ceph/osd/ceph-* #. Destroy and purge the old OSDs: @@ -270,69 +285,71 @@ for osd in `ceph osd ls-tree $OLDHOST`; do ceph osd purge $osd --yes-i-really-mean-it - done + done -#. Wipe the old OSD devices. This requires you do identify which - devices are to be wiped manually (BE CAREFUL!). For each device: +#. Wipe the old OSDs. This requires you to identify which devices are to be + wiped manually. BE CAREFUL! For each device: .. prompt:: bash $ ceph-volume lvm zap $DEVICE -#. Use the now-empty host as the new host, and repeat:: +#. Use the now-empty host as the new host, and repeat: + + .. prompt:: bash $ - NEWHOST=$OLDHOST + NEWHOST=$OLDHOST Advantages: * Data is copied over the network only once. -* Converts an entire host's OSDs at once. -* Can parallelize to converting multiple hosts at a time. -* No spare devices are required on each host. +* An entire host's OSDs are converted at once. +* Can be parallelized, to make possible the conversion of multiple hosts at the same time. +* No host involved in this process needs to have a spare device. Disadvantages: * A spare host is required. -* An entire host's worth of OSDs will be migrating data at a time. This - is like likely to impact overall cluster performance. +* An entire host's worth of OSDs will be migrating data at a time. This + is likely to impact overall cluster performance. * All migrated data still makes one full hop over the network. - Per-OSD device copy ------------------- - A single logical OSD can be converted by using the ``copy`` function -of ``ceph-objectstore-tool``. This requires that the host have a free -device (or devices) to provision a new, empty BlueStore OSD. For -example, if each host in your cluster has 12 OSDs, then you'd need a -13th available device so that each OSD can be converted in turn before the -old device is reclaimed to convert the next OSD. +included in ``ceph-objectstore-tool``. This requires that the host have one or more free +devices to provision a new, empty BlueStore OSD. For +example, if each host in your cluster has twelve OSDs, then you need a +thirteenth unused OSD so that each OSD can be converted before the +previous OSD is reclaimed to convert the next OSD. Caveats: -* This strategy requires that a blank BlueStore OSD be prepared - without allocating a new OSD ID, something that the ``ceph-volume`` - tool doesn't support. More importantly, the setup of *dmcrypt* is - closely tied to the OSD identity, which means that this approach - does not work with encrypted OSDs. +* This approach requires that we prepare an empty BlueStore OSD but that we do not allocate + a new OSD ID to it. The ``ceph-volume`` tool does not support such an operation. **IMPORTANT:** + because the setup of *dmcrypt* is closely tied to the identity of the OSD, this approach does not + work with encrypted OSDs. * The device must be manually partitioned. -* Tooling not implemented! - -* Not documented! +* An unsupported user-contributed script that demonstrates this process may be found here: + https://github.com/ceph/ceph/blob/master/src/script/contrib/ceph-migrate-bluestore.bash Advantages: -* Little or no data migrates over the network during the conversion. 
+* Provided that the 'noout' or the 'norecover'/'norebalance' flags are set on the OSD or the + cluster while the conversion process is underway, little or no data migrates over the + network during the conversion. Disadvantages: -* Tooling not fully implemented. -* Process not documented. -* Each host must have a spare or empty device. -* The OSD is offline during the conversion, which means new writes will - be written to only a subset of the OSDs. This increases the risk of data - loss due to a subsequent failure. (However, if there is a failure before - conversion is complete, the original FileStore OSD can be started to provide - access to its original data.) +* Tooling is not fully implemented, supported, or documented. + +* Each host must have an appropriate spare or empty device for staging. + +* The OSD is offline during the conversion, which means new writes to PGs + with the OSD in their acting set may not be ideally redundant until the + subject OSD comes up and recovers. This increases the risk of data + loss due to an overlapping failure. However, if another OSD fails before + conversion and startup have completed, the original Filestore OSD can be + started to provide access to its original data. diff -Nru ceph-16.2.11+ds/doc/rados/operations/cache-tiering.rst ceph-16.2.15+ds/doc/rados/operations/cache-tiering.rst --- ceph-16.2.11+ds/doc/rados/operations/cache-tiering.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/cache-tiering.rst 2024-02-26 19:21:09.000000000 +0000 @@ -1,6 +1,10 @@ =============== Cache Tiering =============== +.. warning:: Cache tiering has been deprecated in the Reef release as it + has lacked a maintainer for a very long time. This does not mean + it will be certainly removed, but we may choose to remove it + without much further notice. A cache tier provides Ceph Clients with better I/O performance for a subset of the data stored in a backing storage tier. Cache tiering involves creating a diff -Nru ceph-16.2.11+ds/doc/rados/operations/control.rst ceph-16.2.15+ds/doc/rados/operations/control.rst --- ceph-16.2.11+ds/doc/rados/operations/control.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/control.rst 2024-02-26 19:21:09.000000000 +0000 @@ -584,11 +584,11 @@ A dump of the monitor state: - .. prompt:: bash $ +.. prompt:: bash $ - ceph mon dump + ceph mon dump - :: +:: dumped monmap epoch 2 epoch 2 diff -Nru ceph-16.2.11+ds/doc/rados/operations/crush-map.rst ceph-16.2.15+ds/doc/rados/operations/crush-map.rst --- ceph-16.2.11+ds/doc/rados/operations/crush-map.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/crush-map.rst 2024-02-26 19:21:09.000000000 +0000 @@ -315,7 +315,7 @@ .. prompt:: bash $ - ceph osd tree + ceph osd crush tree When both *compat* and *per-pool* weight sets are in use, data placement for a particular pool will use its own per-pool weight set diff -Nru ceph-16.2.11+ds/doc/rados/operations/data-placement.rst ceph-16.2.15+ds/doc/rados/operations/data-placement.rst --- ceph-16.2.11+ds/doc/rados/operations/data-placement.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/data-placement.rst 2024-02-26 19:21:09.000000000 +0000 @@ -2,40 +2,44 @@ Data Placement Overview ========================= -Ceph stores, replicates and rebalances data objects across a RADOS cluster -dynamically. 
With many different users storing objects in different pools for -different purposes on countless OSDs, Ceph operations require some data -placement planning. The main data placement planning concepts in Ceph include: - -- **Pools:** Ceph stores data within pools, which are logical groups for storing - objects. Pools manage the number of placement groups, the number of replicas, - and the CRUSH rule for the pool. To store data in a pool, you must have - an authenticated user with permissions for the pool. Ceph can snapshot pools. - See `Pools`_ for additional details. - -- **Placement Groups:** Ceph maps objects to placement groups (PGs). - Placement groups (PGs) are shards or fragments of a logical object pool - that place objects as a group into OSDs. Placement groups reduce the amount - of per-object metadata when Ceph stores the data in OSDs. A larger number of - placement groups (e.g., 100 per OSD) leads to better balancing. See - `Placement Groups`_ for additional details. - -- **CRUSH Maps:** CRUSH is a big part of what allows Ceph to scale without - performance bottlenecks, without limitations to scalability, and without a - single point of failure. CRUSH maps provide the physical topology of the - cluster to the CRUSH algorithm to determine where the data for an object - and its replicas should be stored, and how to do so across failure domains - for added data safety among other things. See `CRUSH Maps`_ for additional - details. - -- **Balancer:** The balancer is a feature that will automatically optimize the - distribution of PGs across devices to achieve a balanced data distribution, - maximizing the amount of data that can be stored in the cluster and evenly - distributing the workload across OSDs. - -When you initially set up a test cluster, you can use the default values. Once -you begin planning for a large Ceph cluster, refer to pools, placement groups -and CRUSH for data placement operations. +Ceph stores, replicates, and rebalances data objects across a RADOS cluster +dynamically. Because different users store objects in different pools for +different purposes on many OSDs, Ceph operations require a certain amount of +data- placement planning. The main data-placement planning concepts in Ceph +include: + +- **Pools:** Ceph stores data within pools, which are logical groups used for + storing objects. Pools manage the number of placement groups, the number of + replicas, and the CRUSH rule for the pool. To store data in a pool, it is + necessary to be an authenticated user with permissions for the pool. Ceph is + able to make snapshots of pools. For additional details, see `Pools`_. + +- **Placement Groups:** Ceph maps objects to placement groups. Placement + groups (PGs) are shards or fragments of a logical object pool that place + objects as a group into OSDs. Placement groups reduce the amount of + per-object metadata that is necessary for Ceph to store the data in OSDs. A + greater number of placement groups (for example, 100 PGs per OSD as compared + with 50 PGs per OSD) leads to better balancing. + +- **CRUSH Maps:** CRUSH plays a major role in allowing Ceph to scale while + avoiding certain pitfalls, such as performance bottlenecks, limitations to + scalability, and single points of failure. CRUSH maps provide the physical + topology of the cluster to the CRUSH algorithm, so that it can determine both + (1) where the data for an object and its replicas should be stored and (2) + how to store that data across failure domains so as to improve data safety. 
+ For additional details, see `CRUSH Maps`_. + +- **Balancer:** The balancer is a feature that automatically optimizes the + distribution of placement groups across devices in order to achieve a + balanced data distribution, in order to maximize the amount of data that can + be stored in the cluster, and in order to evenly distribute the workload + across OSDs. + +It is possible to use the default values for each of the above components. +Default values are recommended for a test cluster's initial setup. However, +when planning a large Ceph cluster, values should be customized for +data-placement operations with reference to the different roles played by +pools, placement groups, and CRUSH. .. _Pools: ../pools .. _Placement Groups: ../placement-groups diff -Nru ceph-16.2.11+ds/doc/rados/operations/devices.rst ceph-16.2.15+ds/doc/rados/operations/devices.rst --- ceph-16.2.11+ds/doc/rados/operations/devices.rst 2023-01-24 20:43:13.000000000 +0000 +++ ceph-16.2.15+ds/doc/rados/operations/devices.rst 2024-02-26 19:21:09.000000000 +0000 @@ -3,28 +3,32 @@ Device Management ================= -Ceph tracks which hardware storage devices (e.g., HDDs, SSDs) are consumed by -which daemons, and collects health metrics about those devices in order to -provide tools to predict and/or automatically respond to hardware failure. +Device management allows Ceph to address hardware failure. Ceph tracks hardware +storage devices (HDDs, SSDs) to see which devices are managed by which daemons. +Ceph also collects health metrics about these devices. By doing so, Ceph can +provide tools that predict hardware failure and can automatically respond to +hardware failure. Device tracking --------------- -You can query which storage devices are in use with: +To see a list of the storage devices that are in use, run the following +command: .. prompt:: bash $ ceph device ls -You can also list devices by daemon or by host: +Alternatively, to list devices by daemon or by host, run a command of one of +the following forms: .. prompt:: bash $ ceph device ls-by-daemon ceph device ls-by-host -For any individual device, you can query information about its -location and how it is being consumed with: +To see information about the location of an specific device and about how the +device is being consumed, run a command of the following form: .. prompt:: bash $ @@ -33,103 +37,107 @@ Identifying physical devices ---------------------------- -You can blink the drive LEDs on hardware enclosures to make the replacement of -failed disks easy and less error-prone. Use the following command:: +To make the replacement of failed disks easier and less error-prone, you can +(in some cases) "blink" the drive's LEDs on hardware enclosures by running a +command of the following form:: device light on|off [ident|fault] [--force] -The ```` parameter is the device identification. You can obtain this -information using the following command: +.. note:: Using this command to blink the lights might not work. Whether it + works will depend upon such factors as your kernel revision, your SES + firmware, or the setup of your HBA. + +The ```` parameter is the device identification. To retrieve this +information, run the following command: .. prompt:: bash $ ceph device ls -The ``[ident|fault]`` parameter is used to set the kind of light to blink. -By default, the `identification` light is used. +The ``[ident|fault]`` parameter determines which kind of light will blink. By +default, the `identification` light is used. -.. 
note:: - This command needs the Cephadm or the Rook `orchestrator `_ module enabled. - The orchestrator module enabled is shown by executing the following command: +.. note:: This command works only if the Cephadm or the Rook `orchestrator + `_ + module is enabled. To see which orchestrator module is enabled, run the + following command: .. prompt:: bash $ ceph orch status -The command behind the scene to blink the drive LEDs is `lsmcli`. If you need -to customize this command you can configure this via a Jinja2 template:: +The command that makes the drive's LEDs blink is `lsmcli`. To customize this +command, configure it via a Jinja2 template by running commands of the +following forms:: ceph config-key set mgr/cephadm/blink_device_light_cmd "
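
A brief sketch of the flag handling that the bluestore-migration.rst hunk above relies on ("noout"/"norecover"/"norebalance" around a per-OSD copy). The flag names come from that hunk; the surrounding sequence is illustrative only, not part of the upstream text:

.. prompt:: bash $

   # keep the cluster from marking the offline OSD "out" and from starting
   # recovery or rebalancing while the copy is in progress
   # (noout can also be scoped to a single OSD with: ceph osd add-noout osd.N)
   ceph osd set noout
   ceph osd set norecover
   ceph osd set norebalance

   # ... perform the per-OSD copy and bring the converted OSD back up ...

   # clear the flags once the converted OSD is up and has rejoined
   ceph osd unset norebalance
   ceph osd unset norecover
   ceph osd unset noout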
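
Likewise, a minimal end-to-end illustration of the device-identification workflow that the devices.rst hunk above describes; the device ID shown is a placeholder and should be replaced with an ID reported by ``ceph device ls``:

.. prompt:: bash $

   # list tracked devices and note the ID of the drive to be located
   ceph device ls

   # confirm that the cephadm or rook orchestrator module is enabled
   ceph orch status

   # blink the identification LED of that device (placeholder ID)
   ceph device light on SEAGATE_ST12000NM0027_ZJV0XXXX ident

   # turn the light off again once the drive has been found
   ceph device light off SEAGATE_ST12000NM0027_ZJV0XXXX ident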