
Deploying the Ceph Distributed File System on CentOS 7


This deployment is based on CentOS 7 and uses Ceph 13.2.2 (Mimic).

When deploying persistent services on Kubernetes you need persistent storage volumes, and Ceph is a solid solution for this.

Official site: https://ceph.com/
Official installation docs: http://docs.ceph.com/docs/master/install/
Downloads: https://download.ceph.com/
RPM downloads: https://download.ceph.com/rpm-mimic/el7/x86_64/

Prepare the yum repositories

Ceph depends on a number of third-party packages.
These packages come from the EPEL repository, so add the EPEL repository first.

Packages pulled from EPEL: leveldb libbabeltrace liboath lttng-ust python-pecan python-repoze-lru python-routes python-simplegeneric python-singledispatch python2-bcrypt python2-six userspace-rcu

To use the priority attribute, install yum-plugin-priorities and confirm that /etc/yum/pluginconf.d/priorities.conf contains enabled = 1.

Packages pulled from the Ceph repository: ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd ceph-selinux libcephfs2 librados2 libradosstriper1 librbd1 librgw2 python-cephfs python-rados python-rbd python-rgw
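
A minimal sketch of the repository setup, assuming the mimic RPM URL listed above; the repo file name, GPG key URL, and priority values are assumptions and should be adjusted to your environment:

# install EPEL and the priorities plugin
yum install -y epel-release yum-plugin-priorities
# add the Ceph mimic repository (file layout is an assumption)
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
EOF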

yum clean all

Install Ceph
The ceph-deploy method

The ceph-deploy tool can set up and tear down Ceph clusters and is intended for development, testing, and proof-of-concept projects. It can deploy Ceph to multiple machines with a single command.

yum install ceph-deploy
# this package has no additional dependencies

For the detailed procedure see: http://docs.ceph.com/docs/master/start/
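
For orientation, a typical ceph-deploy 2.x workflow looks roughly like the sketch below (hostnames reuse node1/node2/node3 from this article; the disk path /dev/sdb is an assumption):

mkdir my-cluster && cd my-cluster
ceph-deploy new node1                          # write an initial ceph.conf and keyring
ceph-deploy install node1 node2 node3          # install ceph packages on all nodes
ceph-deploy mon create-initial                 # deploy the initial monitor and gather keys
ceph-deploy admin node1 node2 node3            # push ceph.conf and the admin keyring
ceph-deploy mgr create node1                   # deploy a manager daemon
ceph-deploy osd create --data /dev/sdb node2   # create an OSD from a raw disk
ceph-deploy osd create --data /dev/sdb node3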

Manual installation

http://docs.ceph.com/docs/master/install/install-storage-cluster/

Install the dependencies
yum install snappy leveldb gdisk python-argparse gperftools-libs
Install Ceph
yum install ceph

Deploy the cluster manually

http://docs.ceph.com/docs/master/install/manual-deployment/

A Ceph cluster needs at least one monitor. Deploying the monitor is the first step in bringing up the cluster, because the monitor settings establish important criteria for the whole cluster, such as the number of replicas for pools, the number of placement groups per OSD, heartbeat intervals, and whether authentication is required.

In this deployment, node1 is the monitor node and node2/node3 are the OSD nodes.

Preliminary work

Set the hostnames
echo "node1" > /etc/hostname   #192.168.10.110
echo "node2" > /etc/hostname   #192.168.10.120
echo "node3" > /etc/hostname   #192.168.10.130

Update the hosts file
echo "192.168.10.110 node1" >> /etc/hosts
echo "192.168.10.120 node2" >> /etc/hosts
echo "192.168.10.130 node3" >> /etc/hosts

Reboot

Deploy the monitor role (node1)

1. SSH into the node1 node

2. Make sure the /etc/ceph directory exists
ls /etc/ceph

3. Create the configuration file /etc/ceph/ceph.conf
# the configuration file is named after the cluster (cluster "ceph" -> ceph.conf)
Configuration file template:

[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
public network = {network}[, {network}]
cluster network = {network}[, {network}]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = {n}
osd pool default size = {n} # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}
osd crush chooseleaf type = {n}

3.1. Generate a UUID
uuidgen
3.2. Edit the ceph.conf configuration file

cat /etc/ceph/ceph.conf
[global]
fsid = f252cfce-d397-4885-89f7-307bc299a0f4
mon initial members = node1
mon host = 192.168.10.110
public network = 192.168.10.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

Where:
# the unique cluster identifier (fsid)
fsid = {UUID}
# the initial monitor(s)
mon initial members = {hostname}[,{hostname}]
# the monitor IP address(es)
mon host = {ip-address}[,{ip-address}]

4. Create the keys
4.1. Create the monitor keyring and generate a monitor key
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
4.2. Create the administrator keyring and add the client.admin user
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
4.3. Create the bootstrap-osd keyring and add the client.bootstrap-osd user
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
4.4. Import the generated keys into ceph.mon.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
4.5. Generate the monitor map from the hostname, IP address, and fsid
# monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
monmaptool --create --add node1 192.168.10.110 --fsid f252cfce-d397-4885-89f7-307bc299a0f4 /tmp/monmap

5. Create a data directory on the monitor host, owned by the ceph user
# mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1

6. Populate the monitor data under /var/lib/ceph/mon/ceph-node1
# sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

If step 6 fails with a permission error, steps 5 and 6 can also be run as root; just fix the ownership afterwards:
cd /var/lib/ceph/mon && chown -R ceph:ceph ceph-node1/

7. Start the monitor
The monitor listens on port 6789.
systemctl start ceph-mon@node1
systemctl enable ceph-mon@node1
systemctl status ceph-mon@node1
# systemctl reset-failed ceph-mon@node1.service  # reset the unit's failed state if needed
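
If firewalld is enabled on the node, the monitor port also needs to be opened; a short sketch:
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --reload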

Troubleshooting:
Startup error: unable to read magic from mon data
Fix: chown ceph:ceph /tmp/monmap

8. Check that the cluster is healthy
ceph -s

A mgr (manager) daemon is normally deployed on the same machines that run monitor daemons.

Deploy the mgr role (node1)
Create an authentication key:
ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node1]
key = AQDfd79bVZ3sLRAABpDYe+WS2OUmsmsStyTgpQ==
Write the key into a keyring file
mkdir -p /var/lib/ceph/mgr/ceph-node1
# save the [mgr.node1] section returned above into the keyring file, then verify it:
cat /var/lib/ceph/mgr/ceph-node1/keyring
[mgr.node1]
key = AQDfd79bVZ3sLRAABpDYe+WS2OUmsmsStyTgpQ==
Fix the ownership and start mgr manually to test it:
chown -R ceph:ceph /var/lib/ceph/mgr
ceph-mgr -i node1
Or start it via systemd:
systemctl start ceph-mgr@node1
systemctl enable ceph-mgr@node1
systemctl status ceph-mgr@node1

Useful commands:

# list modules (enabled / disabled)
ceph mgr module ls
# enable a module
ceph mgr module enable <module>
# disable a module
ceph mgr module disable <module>
# show module endpoints (modules that expose an HTTP service)
ceph mgr services
# enable the dashboard module
ceph mgr module enable dashboard
ceph mgr services
# ceph config-key put mgr/dashboard/server_addr 192.168.10.110
# ceph config-key put mgr/dashboard/server_port 7000
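
On mimic the dashboard module also needs a certificate and a login user before it can be reached; a hedged sketch (the user name and password below are placeholders), after which ceph mgr services reports the URL to open:

ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin MySecretPassword   # placeholders, choose your own
ceph mgr services   # shows the dashboard URL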
Deploy OSDs (node2/node3)

Ceph supports several storage back ends, managed as pluggable object stores, including filestore, bluestore, kvstore, and memstore.
filestore:
data is journaled before it is written out, so there is roughly a 2x write amplification; it was designed for spinning SATA/SAS disks
bluestore:
no such write amplification; it manages the raw device directly, avoiding the overhead of a file-system layer, and is optimized for SSDs
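
Once OSDs exist, the back end a given OSD uses can be confirmed from its metadata; a small sketch, assuming osd id 1:
ceph osd metadata 1 | grep osd_objectstore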

The bluestore back end

Copy these configuration files from node1 to node2/node3:
/etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.conf
/var/lib/ceph/bootstrap-osd/ceph.keyring
Create the OSD:
ceph-volume lvm create --data /dev/sdb
# This can also be done in two steps:
# 1. prepare:  ceph-volume lvm prepare --data {data-path}
#    then check: ceph-volume lvm list   ## the 1 in osd.1 is the ID used in the next step
# 2. activate: ceph-volume lvm activate {ID} {FSID}
Output from the run, showing what the command actually does:
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f16212df-4cfb-46ad-897e-ea5fd98ff4bf
Running command: /usr/sbin/vgcreate --force --yes ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7 /dev/sdb
stdout: Physical volume "/dev/sdb" successfully created.
stdout: Volume group "ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7" successfully created
Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7
stdout: Logical volume "osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-1
Running command: /bin/chown -h ceph:ceph /dev/ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7/osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/ln -s /dev/ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7/osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf /var/lib/ceph/osd/ceph-1/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQC6j79bUNjxEBAAAlq2wK8YI5A/I9zCXwfRaQ==
stdout: creating /var/lib/ceph/osd/ceph-1/keyring
stdout: added entity osd.1 auth auth(auid = 18446744073709551615 key=AQC6j79bUNjxEBAAAlq2wK8YI5A/I9zCXwfRaQ== with 0 caps)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f16212df-4cfb-46ad-897e-ea5fd98ff4bf --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdb
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7/osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-e23f1803-b403-4519-8c24-8ccc8e3281d7/osd-block-f16212df-4cfb-46ad-897e-ea5fd98ff4bf /var/lib/ceph/osd/ceph-1/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: /bin/systemctl enable ceph-volume@lvm-1-f16212df-4cfb-46ad-897e-ea5fd98ff4bf
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-f16212df-4cfb-46ad-897e-ea5fd98ff4bf.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@1
stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@1
--> ceph-volume lvm activate successful for osd ID: 1
--> ceph-volume lvm create successful for: /dev/sdb

The filestore back end

Copy these configuration files from node1 to node2/node3:
/etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.conf
/var/lib/ceph/bootstrap-osd/ceph.keyring
Create the OSD:
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc
# This can also be done in two steps:
# 1. prepare:  ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
#    then check: ceph-volume lvm list   ## the 1 in osd.1 is the ID used in the next step
# 2. activate: ceph-volume lvm activate --filestore {ID} {FSID}
Deploy MDS (the metadata server)

The metadata server (MDS) stores metadata for the Ceph file system; Ceph block devices and object storage do not use MDS.

{id} is an arbitrary name, e.g. the hostname
# create the mds data directory
mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
# create the keyring and key
ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
# grant capabilities
ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
# add to ceph.conf:
[mds.{id}]
host = {id}
# start the daemon
Manually:
ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
Or via the init system, driven by ceph.conf:
systemctl start ceph #service ceph start
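
The MDS only becomes active once a file system exists. A minimal sketch of creating one; the pool names cephfs_data/cephfs_metadata and the pg counts are assumptions sized for this small cluster:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat    # the mds should now report as active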

Deploy RGW (the object gateway)

To use Ceph object storage, the RGW gateway must be deployed:
http://docs.ceph.com/docs/master/install/install-ceph-gateway/

Checking cluster status

On the node1 node:
ceph -s
ceph -w
ceph osd tree

Using Ceph

Use Ceph to provide PersistentVolume-style network storage for Kubernetes.

Create a pool
A Ceph cluster can hold multiple pools. A pool is a logical isolation unit; different pools can use completely different data-handling settings, such as replica size, placement groups, CRUSH rules, snapshots, and ownership.
# list pools
ceph osd lspools
# create a pool
1. Override the default pg_num; with fewer than 5 OSDs set it to 128 in ceph.conf:
osd pool default pg num = 128
osd pool default pgp num = 128
2. Create the pool
Syntax: ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-ruleset-name] [expected-num-objects]
ceph osd pool create test-pool 128
3. Useful commands
# show pool usage
rados df
# delete a pool
ceph osd pool delete test-pool test-pool --yes-i-really-really-mean-it
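
Note that recent Ceph releases refuse to delete pools by default; if the delete command above is rejected, the monitors must first allow it. A sketch:
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# or persist it in ceph.conf under [mon]:
#   mon allow pool delete = true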

Create the default pool used by rbd, named rbd:
ceph osd pool create rbd 50
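The upstream documentation also recommends initializing a newly created pool for rbd use before creating images; a one-line sketch:
rbd pool init rbd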
Create an image (data volume) named demo_volume_0:
rbd create demo_volume_0 --size 10240 -m 192.168.10.110 --keyring /etc/ceph/ceph.client.admin.keyring
Disable the RBD features Kubernetes does not need (otherwise mapping the volume may fail):
rbd feature disable demo_volume_0 exclusive-lock object-map fast-diff deep-flatten

Initialize/format the image

# list the rbd images that have been created
rbd list
demo_volume_0
# map the image to a block device
rbd map demo_volume_0
/dev/rbd0
# format it
mkfs.ext4 /dev/rbd0
# unmap it from the kernel
rbd unmap demo_volume_0
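
To consume this image from Kubernetes as a PersistentVolume, the cluster needs a secret holding a Ceph key for the rbd volume plugin; a hedged sketch using the admin key (the secret name and namespace are assumptions), which is then referenced from the rbd section of the PersistentVolume definition:

kubectl create secret generic ceph-admin-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  --namespace=kube-system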

