One of the key features of Ceph is its self-repairing, self-healing quality. Ceph achieves this by keeping multiple copies of each placement group across different OSDs, which ensures a very high probability that you will not lose your data. In rare cases, however, multiple OSDs may fail, and if one or more replicas of a PG were stored on the failed OSDs, the PG state becomes incomplete, which leads to errors in the cluster health. For granular recovery, Ceph provides a low-level PG and object data recovery tool known as ceph-objectstore-tool. Using ceph-objectstore-tool can be a risky operation, and the command needs to be run either as root or with
sudo. Do not attempt this on a production cluster without engaging the ...
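As a rough sketch of how the tool is typically invoked, the following commands examine and export a PG from a stopped OSD. The OSD ID (0), PG ID (17.2), and file paths here are illustrative placeholders, not values from your cluster; always confirm the correct IDs with ceph pg dump and ceph osd tree first.

```shell
# The tool requires exclusive access to the OSD's data store,
# so stop the OSD daemon before running it.
sudo systemctl stop ceph-osd@0

# List all placement groups held on this OSD.
sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op list-pgs

# Show metadata for one PG (PG ID 17.2 is an example).
sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 17.2 --op info

# Export the PG to a file, which can later be imported on a
# healthy OSD with --op import to complete the recovery.
sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 17.2 --op export --file /tmp/pg.17.2.export
```

Because the tool operates directly on the OSD's underlying object store, an incorrect --pgid or --data-path can make the damage worse; exporting a copy before any destructive operation is the safer order of work.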