1.2 Disk
a) List the disks on a cluster node
   # ceph-deploy disk list <node hostname>
b) Query a node's disk information locally on that node
   # sudo /usr/sbin/ceph-disk list
c) View block device information
   # lsblk

1.3 Pushing configuration
a) Copy ceph.conf to/from remote host(s)
   # ceph-deploy config [-h] {push,pull}
   # ceph-deploy config push node1 node2 node3
b) Push the configuration file and the admin key
   # ceph-deploy admin admnode node1 node2 node3

(2) OSD
2.1 OSD information
a) View the status of all OSDs (a few additional status checks are sketched below)
   # ceph osd tree
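As noted under 2.1, a minimal sketch of a few additional status checks; these are standard ceph CLI subcommands, although the exact output format depends on the Ceph release:
   Overall cluster status
   # ceph -s
   Detailed health information
   # ceph health detail
   Summary of how many OSDs exist and how many are up/in
   # ceph osd stat
   Full OSD map, including per-OSD state and weight
   # ceph osd dump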
2.2 OSD configuration

2.3 Creating OSDs
a) List the disks on a node
   # ceph-deploy disk list {node-name [node-name]...}
b) Prepare the OSDs and deploy them to the OSD node(s)
   # ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
   # ceph-deploy osd prepare osdserver1:sdb:/dev/ssd
c) Once you have prepared an OSD, you may activate it
   # ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
   # ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
d) The create command is a convenience method that runs prepare and activate sequentially
   # ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
   # ceph-deploy osd create osdserver1:sdb:/dev/ssd1

2.4 Removing an OSD (the full sequence is collected in the sketch at the end of this section)
a) Take the OSD out of the cluster
   # ceph osd out {osd-num}
b) Stop the OSD before removing it from the configuration. Once you stop the OSD, it is down.
   # ssh {osd-host}
   # sudo /etc/init.d/ceph stop osd.{osd-num}
c) Remove the OSD
c.1) Remove the OSD from the CRUSH map
   # ceph osd crush remove {full OSD name, e.g. osd.0}
c.2) Remove the OSD authentication key
   # ceph auth del osd.{osd-num}
c.3) Remove the OSD
   # ceph osd rm {osd-num}
   # ceph osd rm 1
c.4) If ceph.conf on the admin node contains a configuration section for this OSD (e.g. an [osd.1] section), delete it.
   # ssh {admin-host}
   # vim /etc/ceph/ceph.conf
c.5) After step c.4, push the updated ceph.conf to all of the other nodes so that their copies are updated as well.

2.5 Starting the OSD service
   # sudo /etc/init.d/ceph -a start osd.{osd-num}
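Putting the steps of 2.4 together, a minimal sketch of the full removal sequence for one OSD, run from the admin node; {osd-num} and {osd-host} are placeholders as above, and node1 node2 node3 stand in for the remaining cluster nodes as in 1.3:
   # ceph osd out {osd-num}
   # ssh {osd-host} sudo /etc/init.d/ceph stop osd.{osd-num}
   # ceph osd crush remove osd.{osd-num}
   # ceph auth del osd.{osd-num}
   # ceph osd rm {osd-num}
   If ceph.conf on the admin node has an [osd.{osd-num}] section, delete it and push the updated file:
   # vim /etc/ceph/ceph.conf
   # ceph-deploy config push node1 node2 node3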
(3) Monitor
3.1 Configuration file: see the Monitor Config Reference.
3.2 Adding a Monitor

(4) Logs
4.1 View the Ceph logs
   # sudo vim /var/log/ceph/ceph*.log

(5) Starting services

(6) Block Device Image
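For (6), a minimal sketch of the basic RBD image workflow, assuming the default rbd pool and a hypothetical image named foo (rbd create takes the size in MB):
   Create a 4 GB image
   # rbd create foo --size 4096
   List images and show the image details
   # rbd ls
   # rbd info foo
   Map the image to a kernel block device and put a filesystem on it
   # sudo rbd map foo
   # sudo mkfs.ext4 /dev/rbd/rbd/foo
   Unmap and delete the image when it is no longer needed
   # sudo rbd unmap /dev/rbd/rbd/foo
   # rbd rm foo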