How to remove a Ceph OSD in OpenStack:
#systemctl stop ceph-osd@xx     # stop the OSD daemon (xx is the OSD id)
#ceph osd out osd.xx            # mark the OSD out of the cluster
#ceph osd crush remove osd.xx   # remove the OSD from the CRUSH map
#ceph osd rm osd.xx             # remove the OSD from the OSD map
#ceph auth del osd.xx           # delete the OSD's cephx key
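A minimal worked run, assuming the OSD being removed has id 3 (the id is purely illustrative). Note that the upstream Ceph documentation usually marks the OSD out first and waits for data migration to finish before stopping the daemon; the checks before and after simply confirm the OSD is gone and let you watch recovery:
#ceph osd tree                  # confirm osd.3 exists and note its host
#systemctl stop ceph-osd@3
#ceph osd out osd.3
#ceph osd crush remove osd.3
#ceph osd rm osd.3
#ceph auth del osd.3
#ceph osd tree                  # osd.3 should no longer be listed
#ceph -s                        # watch backfill/recovery progress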
How to delete a Ceph OSD pool in OpenStack:
#rados rmpool rbd rbd --yes-i-really-really-mean-it   # the pool name must be entered twice as a safeguard
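A short sketch of the same deletion via the ceph CLI, assuming a pool named images (the name is illustrative). On Luminous and later releases the monitors refuse pool deletion unless mon_allow_pool_delete is set to true:
#rados lspools                                                     # list existing pools before deleting anything
#ceph osd pool delete images images --yes-i-really-really-mean-it  # equivalent ceph CLI form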
How to create a Ceph OSD with ceph-disk in OpenStack:
#ceph-disk zap /dev/sdd   # wipe the partition table on the target disk
#ceph-disk prepare --cluster {cluster-name} --cluster-uuid {fsid} --fs-type xfs {data-path} {journal-path}   # prepare the OSD
Example:
ceph-disk prepare --cluster ceph --cluster-uuid 7a32022b-8c49-455a-93c9-3410f860aa2d --fs-type xfs /dev/sdd /dev/sdb1   # /dev/sdd holds the data, /dev/sdb1 is the journal partition
#ceph-disk activate {data-path}   # activate the OSD
Example:
ceph-disk activate /dev/sdd1   # the data partition created by ceph-disk prepare
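A quick verification sketch after activation; the commands only read cluster state and assume the new OSD was created on /dev/sdd as in the example above:
#ceph-disk list                # /dev/sdd1 should show as ceph data, active
#ceph osd tree                 # the new OSD should appear as up and in
#ceph -s                       # overall cluster health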
Note:
If no separate journal disk is used:
#ceph-disk prepare --cluster {cluster-name} --cluster-uuid {fsid} --fs-type xfs {data-path}   # prepare the OSD, journal co-located on the data disk
Example:
ceph-disk prepare --cluster ceph --cluster-uuid 7a32022b-8c49-455a-93c9-3410f860aa2d --fs-type xfs /dev/sdd
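A brief follow-up sketch, on the assumption that when no journal path is given ceph-disk carves both a data partition and a journal partition out of the same disk (for filestore, typically /dev/sdd1 for data and /dev/sdd2 for the journal); activation is still run against the data partition:
ceph-disk activate /dev/sdd1   # activate against the data partition, as before
ceph osd tree                  # the new OSD should appear as up and in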