# ODF Declarative Plan
This is the holding area for the ODF manifests we want to apply once that path is ready. They are staged for review, not applied by default.
Files:

- `generated/odf/localvolumediscovery-auto-discover-devices.yaml`
- `generated/odf/localvolumeset-ceph-osd.yaml`
- `generated/odf/storagecluster-ocs-storagecluster.yaml`
Current assumptions in these manifests:

- storage nodes are `ocp-infra-01`, `ocp-infra-02`, and `ocp-infra-03`
- those nodes carry the label `cluster.ocs.openshift.io/openshift-storage=`
- those nodes keep the `node-role.kubernetes.io/infra=` label, but the only required ODF taint is `node.ocs.openshift.io/storage=true:NoSchedule`
- LSO discovery currently reports one usable ODF disk per infra node: `/dev/sda`, roughly 1 TiB
- the root/OS disk is not selected
What each manifest does:

- `LocalVolumeDiscovery`
  - runs only on infra nodes
  - tolerates the ODF storage taint
  - is the known-good discovery config for this lab
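The staged discovery manifest is not reproduced in this doc, but based on the description above it should look roughly like the sketch below. The exact selector key and metadata are assumptions (the name is inferred from the staged filename); the saved file in `generated/odf/` is authoritative.

```yaml
# Sketch only — reconstructed from the description above, not the saved manifest.
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices          # inferred from the staged filename
  namespace: openshift-local-storage
spec:
  # run discovery only on the ODF-labeled infra nodes
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  # allow the discovery daemon onto the tainted storage nodes
  tolerations:
    - key: node.ocs.openshift.io/storage
      value: "true"
      effect: NoSchedule
```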
- `LocalVolumeSet`
  - creates a local block storage class named `ceph-osd`
  - targets only nodes labeled for ODF storage
  - tolerates the ODF storage taint so the provisioner runs on the storage nodes
  - constrains disk size to the expected 1 TiB data devices
  - uses `maxDeviceCount: 1` so only the extra ODF disk is claimed per infra node
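Those constraints map onto a `LocalVolumeSet` roughly as follows. The size window (`minSize`/`maxSize`) is an assumed bracket around the ~1 TiB data disk, chosen here only to illustrate how the size constraint excludes the OS disk; check the staged file for the real bounds.

```yaml
# Sketch only — field values are illustrative assumptions, not the saved manifest.
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: ceph-osd
  namespace: openshift-local-storage
spec:
  storageClassName: ceph-osd           # the local block storage class ODF consumes
  volumeMode: Block
  maxDeviceCount: 1                    # claim only the one extra ODF disk per node
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  tolerations:
    - key: node.ocs.openshift.io/storage
      value: "true"
      effect: NoSchedule
  deviceInclusionSpec:
    deviceTypes:
      - disk
    minSize: 900Gi                     # assumed lower bound: excludes smaller disks
    maxSize: 1100Gi                    # assumed upper bound around the ~1 TiB device
```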
- `StorageCluster`
  - builds ODF on top of `ceph-osd`
  - targets only ODF-labeled infra nodes
  - tolerates the ODF storage taint
  - uses `portable: false`
  - uses `replica: 3` for the three infra nodes
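For orientation, the `StorageCluster` device set described above would be shaped roughly like this. The device-set name, PVC size, and `count` are assumptions for illustration; `replica: 3`, `portable: false`, and the `ceph-osd` storage class come from the description above.

```yaml
# Sketch only — reconstructed from the bullets above, not the saved manifest.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset              # assumed name
      count: 1                         # assumed: one device per replica
      replica: 3                       # one OSD per infra node
      portable: false                  # local disks, so OSDs are pinned to nodes
      dataPVCTemplate:
        spec:
          storageClassName: ceph-osd   # consume the LocalVolumeSet's storage class
          accessModes:
            - ReadWriteOnce
          volumeMode: Block
          resources:
            requests:
              storage: 1Ti             # assumed to match the ~1 TiB data disks
```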
**Warning:** Confirm all three items below before applying these manifests. If discovery results have changed since the manifests were staged, the ODF deployment will fail or claim the wrong devices.
Before apply, confirm:

- the label `cluster.ocs.openshift.io/openshift-storage=` is present on all three infra nodes
- the `LocalVolumeDiscoveryResult` for each infra node still shows `/dev/sda` as `Available`
- the live `LocalVolumeSet` named `ceph-osd` matches the saved manifest (patch it if it has drifted)