Managing Ceph - cephadm
Introduction
The page https://docs.ceph.com/en/quincy/install/ describes several ways to deploy and manage a Ceph cluster; as of spring 2022, one of the most popular appears to be cephadm.
How it works
The cephadm approach is characterised by the following:
- management is done with the cephadm utility
- Ceph components run in containers (e.g. Docker); the container platform itself is not part of cephadm, but everything on top of it (images, containers, networking, etc.) is managed by it
- the mgr component is an essential part of the system; for example, Ceph resources can be, and even preferably are, managed through its Dashboard web GUI
Preparation
A suitable platform is e.g. the Ubuntu 22.04 operating system, which must have
- docker
# apt-get install docker.io
- TODO
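The TODO above presumably covers further host prerequisites. The cephadm documentation lists, among other things, time synchronisation and LVM tooling; a sketch of how those could be installed on Ubuntu (an assumption, not confirmed by this document):

```shell
# Assumed additional prerequisites for a cephadm host:
# chrony for time synchronisation, lvm2 for OSD device management.
apt-get install -y chrony lvm2
```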
Installation
On Ubuntu 22.04, cephadm can be installed with
# apt-get install cephadm
then
TODO
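The TODO above likely covers bootstrapping the first node. A minimal sketch, assuming 192.168.110.240 (the ca-0 host used in the examples below) is the address of the first monitor:

```shell
# Bootstrap a new cluster: cephadm pulls the container images, starts the
# first mon and mgr daemons on this host, and prints the Dashboard
# URL and admin credentials.
cephadm bootstrap --mon-ip 192.168.110.240
```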
Management
TODO
From the command line
# cephadm shell
after which the ceph command can be used as usual, e.g.
cs # ceph -s
  cluster:
    id:     f2c7bfa6-de94-11ec-9ce3-dd734d1a236b
    health: HEALTH_WARN
            5 daemons have recently crashed

  services:
    mon:     5 daemons, quorum ca-0,ca-1,ca-2,ca-3,ca-4 (age 22h)
    mgr:     ca-0.snatqq(active, since 28h), standbys: ca-1.bhhbmr
    osd:     4 osds: 4 up (since 26h), 4 in (since 27h)
    rgw:     2 daemons active (2 hosts, 1 zones)
    rgw-nfs: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   10 pools, 289 pgs
    objects: 5.65k objects, 14 GiB
    usage:   44 GiB used, 116 GiB / 160 GiB avail
    pgs:     289 active+clean

  io:
    client:  170 B/s rd, 0 op/s rd, 0 op/s wr
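The HEALTH_WARN in the output above comes from crash reports that have not yet been acknowledged. Once they have been reviewed, the warning can be cleared with the standard crash commands (a sketch):

```shell
# List recent daemon crash reports, then acknowledge them all so the
# "daemons have recently crashed" warning disappears from 'ceph -s'.
ceph crash ls
ceph crash archive-all
```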
Listing the nodes
root@ca-0:/# ceph orch host ls
HOST  ADDR             LABELS  STATUS
ca-0  192.168.110.240  _admin
ca-1  192.168.110.241  osd
ca-2  192.168.110.242  osd
ca-3  192.168.110.243  osd
ca-4  192.168.110.244  osd
5 hosts in cluster
Removing a node
TODO
Listing the OSDs
root@ca-0:/# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.15637  root default
-3         0.03909      host ca-1
 0    hdd  0.03909          osd.0      up   1.00000  1.00000
-5         0.03909      host ca-2
 1    hdd  0.03909          osd.1      up   1.00000  1.00000
-7         0.03909      host ca-3
 2    hdd  0.03909          osd.2      up   1.00000  1.00000
-9         0.03909      host ca-4
 3    hdd  0.03909          osd.3      up   1.00000  1.00000
Removing an OSD
root@ca-0:/# ceph orch osd rm 3
Scheduled OSD(s) for removal
root@ca-0:/# ceph orch osd rm status
OSD  HOST  STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
3    ca-4  draining  36   False    False  False  2022-05-29 19:14:10.549604
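Draining can take a while. One way to wait for it from a script (a sketch, assuming the STATE column reads 'draining' until the OSD has been removed from the queue, as in the output above):

```shell
# Poll the removal queue until no OSD is reported as draining.
while ceph orch osd rm status | grep -q draining; do
    sleep 30
done
```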
Once the status no longer shows anything in progress (check the web GUI and 'ceph status' as well), one can say
root@ca-0:/# ceph orch device zap ca-4 /dev/vdb --force
zap successful for /dev/vdb on ca-4
As a result, the ca-4 host no longer has the LVM volume group used for Ceph storage. Next, one can say
root@ca-0:/# ceph orch host rm ca-4
Error EINVAL: Not allowed to remove ca-4 from cluster. The following daemons are running in the host:
type                 id
-------------------- ---------------
crash                ca-4
node-exporter        ca-4
mon                  ca-4
Please run 'ceph orch host drain ca-4' to remove daemons from host
root@ca-0:/# ceph orch host drain ca-4
Scheduled to remove the following daemons from host 'ca-4'
type                 id
-------------------- ---------------
crash                ca-4
node-exporter        ca-4
mon                  ca-4
It may additionally help to run 'ceph orch apply mon "ca-0,ca-1,ca-2,ca-3"', then
root@ca-0:/# ceph orch host rm ca-4
Removed host 'ca-4'
As a result, the container processes are gone from the ca-4 node and the host list is
root@ca-0:/# ceph orch host ls
HOST  ADDR             LABELS  STATUS
ca-0  192.168.110.240  _admin
ca-1  192.168.110.241  osd
ca-2  192.168.110.242  osd
ca-3  192.168.110.243  osd
4 hosts in cluster
Defining the set of monitors, and checking the result with 'ceph -s'
root@ca-0:/# ceph orch apply mon "ca-0,ca-1,ca-2,ca-3"
Scheduled mon update...
Listing the Docker containers, on each host
root@ca-0:~# docker ps
Adding an OSD (and, along the way, a host)
root@ca-0:/# ceph orch host add ca-4
Added host 'ca-4' with addr '192.168.110.244'
root@ca-0:/# ceph orch daemon add osd ca-4:/dev/vdb
Created osd(s) 3 on host 'ca-4'
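Instead of adding OSDs device by device as above, the orchestrator can also be told to consume every unused disk it finds; whether that is desirable depends on the environment:

```shell
# Automatically create an OSD on every available, unused device
# on every managed host.
ceph orch apply osd --all-available-devices
```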
Misc commands
root@ca-0:/# ceph orch device ls
HOST  PATH      TYPE  DEVICE ID  SIZE   AVAILABLE  REJECT REASONS
ca-1  /dev/vdb  hdd              42.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ca-2  /dev/vdb  hdd              42.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ca-3  /dev/vdb  hdd              42.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
ca-4  /dev/vdb  hdd              42.9G             Insufficient space (<10 extents) on vgs, LVM detected, locked
and
root@ca-0:/# ceph orch ls
NAME             PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager     ?:9093,9094      1/1  2m ago     4h   count:1
crash                             5/5  9m ago     4h   *
grafana          ?:3000           1/1  2m ago     4h   count:1
mgr                               2/2  9m ago     4h   count:2
mon                               4/4  9m ago     5m   ca-0;ca-1;ca-2;ca-3
nfs.nfs_service                   1/1  2m ago     1h   ca-0;ca-1;ca-2;count:1
node-exporter    ?:9100           5/5  9m ago     4h   *
osd                                 4  9m ago     -    <unmanaged>
prometheus       ?:9095           1/1  2m ago     4h   count:1
rgw.rgw_essa     ?:80             2/2  9m ago     1h   count:2
Adding a label
root@ca-0:/# ceph orch host label add ca-4 osd
Added label osd to host ca-4
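Labels can also drive daemon placement. For example, the monitor set defined earlier by an explicit host list could instead follow a label (a sketch, assuming the relevant hosts carry a 'mon' label):

```shell
# Place mon daemons on every host labelled 'mon' instead of
# naming the hosts one by one.
ceph orch apply mon --placement="label:mon"
```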
Management - S3
To set up the service, one can say
root@ca-0:/# ceph orch apply rgw rgw_essa
As a result, the daemons appear in the web GUI under
Object Gateway -> Daemons -> ...
and then, in the web GUI,
- create a user
Object Gateway -> Users
- create a bucket
Object Gateway -> Buckets ...
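The user can presumably also be created from the command line with radosgw-admin inside 'cephadm shell'; a sketch, with a hypothetical uid 'essa':

```shell
# Create an S3 user; the output includes the generated access and
# secret keys needed for the /etc/passwd-s3fs file used below.
radosgw-admin user create --uid=essa --display-name="essa"
```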
To use it, on a client machine one can say e.g.
# apt-get install s3fs
# tail -n 1 /etc/passwd-s3fs
bucketessa:COQIQ5AMGYK4RFBFN41C:3f3TLyaelVQxBnS7SGw7xRsRiqFtAJLKGXYSwKef
# s3fs bucketessa /mnt/root -o passwd_file=/etc/passwd-s3fs -o url=http://192.168.110.240/ -o use_path_request_style
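To make the s3fs mount persistent across reboots, an /etc/fstab entry along these lines should work (a sketch; the options mirror the manual mount above):

```shell
# Append a persistent fuse.s3fs mount entry for the same bucket.
echo 'bucketessa /mnt/root fuse.s3fs _netdev,passwd_file=/etc/passwd-s3fs,url=http://192.168.110.240/,use_path_request_style 0 0' >> /etc/fstab
```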
Management - NFS
NFS appears to be suitable for providing read-only access to Object Gateway content; open in the web GUI
NFS -> ...
where
- Cluster - nfs_service
- Storage Backend - Object Gateway
- Bucket - bucketessa
- Pseudo - /bucketessa
- Access Type - RO
- Squash - no_root_squash
- Transport Protocol - TCP, UDP
- Clients - Any client could connect
In the simple case, it can be used with
# mount 192.168.110.240:/bucketessa /mnt/bucketessa/
root@ria-etcd1:~# df -h | tail -n 2
s3fs                         256T     0  256T   0% /mnt/root
192.168.110.240:/bucketessa  160G   44G  117G  28% /mnt/bucketessa
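The NFS mount can likewise be made persistent via /etc/fstab (a sketch, matching the read-only export configured above):

```shell
# Append a read-only NFS mount entry for the exported bucket.
echo '192.168.110.240:/bucketessa /mnt/bucketessa nfs ro,_netdev 0 0' >> /etc/fstab
```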
Misc
TODO
- prometheus - http://192.168.110.240:9095/alerts
- dashboard - https://192.168.110.240:8443/
- grafana - https://192.168.110.240:3000/
Useful further reading
- TODO