The following steps will walk you through the process of replacing a failed Ceph OSD:
- Let's first verify the cluster health; since this cluster does not have any failed disks, the status should be HEALTH_OK:
# ceph status
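Before simulating the failure, it can also help to record the current OSD layout so you have something to compare against once the disk is gone. These are standard Ceph CLI commands (not part of the original recipe) and can be run from any node with admin access:
# ceph osd tree
# ceph osd stat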
- Since we are demonstrating this exercise on virtual machines, we need to simulate a disk failure by bringing ceph-node1 down, detaching a disk from it, and powering the VM back up. Execute the following commands from your HOST machine:
# VBoxManage controlvm ceph-node1 poweroff
# VBoxManage storageattach ceph-node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium none
# VBoxManage startvm ceph-node1
The following screenshot ...
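If you want to confirm the failure from the command line as well as from the screenshot, the standard commands below will show the cluster in a degraded state and the affected OSD marked as down; which OSD ID appears depends on which disk was detached:
# ceph status
# ceph osd tree | grep -i down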