How to do it...

These steps will walk you through the process of properly replacing a failed Ceph OSD:

  1. Let's verify the cluster's health; since the cluster does not yet have any failed disks, the status should be HEALTH_OK: 
# ceph status
  2. Since we are demonstrating this exercise on virtual machines, we need to fail a disk forcefully by powering off ceph-node1, detaching a disk, and powering the VM back up. Execute the following commands from your HOST machine (a quick verification sketch follows this list):
# VBoxManage controlvm ceph-node1 poweroff
# VBoxManage storageattach ceph-node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium none
# VBoxManage startvm ceph-node1
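
Once ceph-node1 comes back up, it is worth confirming that Ceph has noticed the missing disk. The following is a minimal sketch of the checks, assuming the detached disk backed one of the OSDs hosted on ceph-node1 (the exact OSD ID, for example osd.0, will vary in your environment). Re-run the status command and list any OSDs that are marked down:

# ceph status
# ceph osd tree | grep -i down

The cluster status should move away from HEALTH_OK, and the OSD backed by the detached disk should eventually be reported as down; that OSD ID is the one you will replace in the remaining steps of this recipe.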

The following screenshot ...
