Using ZFS on Ubuntu 18.04

Introduction

TODO

How it works

TODO

  • unlike with an mdadm RAID, the mirror is not broken and rebuilt in the background once a month by default; data integrity is checked with an explicitly started scrub instead (see the sketch below)
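
A minimal scrub sketch, assuming the pool name tank used later in this text:

# zpool scrub tank
# zpool status tank    # shows scrub progress and any repaired errors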

Installing the software

# apt-get install zfsutils-linux
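
After installation, one quick way to check that the ZFS kernel module actually got loaded:

# lsmod | grep zfs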

Usage

Creating a pool

# zpool create -o ashift=12 tank /dev/vdb

The result

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  21.9G   468K  21.9G         -     0%     0%  1.00x  ONLINE  -

Behind the scenes, two partitions are created on the disk

# fdisk /dev/sdb -l
Disk /dev/sdb: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 16777216 bytes
Disklabel type: gpt
Disk identifier: 459DC343-EA59-DC4F-AD3F-C2033AB84C2A

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 2147465215 2147463168 1024G Solaris /usr & Apple ZFS
/dev/sdb9  2147465216 2147481599      16384    8M Solaris reserved 1
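
The ashift=12 option pins the pool's allocation granularity to 2^12 = 4096 bytes, matching 4K-sector disks. For redundancy the pool can also be created as a mirror; a sketch, assuming a hypothetical second disk /dev/vdc of the same size:

# zpool create -o ashift=12 tank mirror /dev/vdb /dev/vdc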

Growing the pool

# zpool set autoexpand=on tank

grow the underlying block device and run partprobe

# partprobe /dev/vdb

then check the result

# zpool status -v
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  vdb       ONLINE       0     0     0

errors: No known data errors
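
If the pool does not grow by itself despite autoexpand, the expansion can also be triggered explicitly per device; a sketch using the device from above:

# zpool online -e tank vdb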

Setting the mountpoint, e.g.

# zfs set mountpoint=/srv/vmail tank
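
The setting can be verified with:

# zfs get mountpoint tank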

NFS

On the server

# zfs set sharenfs=on tank/imre-1
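
The sharenfs property also accepts exportfs-style options; a sketch limiting access to a hypothetical client subnet:

# zfs set sharenfs="rw=@192.168.110.0/24" tank/imre-1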

On the client

# mount 192.168.110.89:/srv/imre-yks /mnt
# mount
..
192.168.110.89:/srv/imre-yks on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.110.33,local_lock=none,addr=192.168.110.89)
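
To make the client mount persistent, a line along these lines can be added to /etc/fstab (a sketch reusing the addresses above):

192.168.110.89:/srv/imre-yks  /mnt  nfs4  defaults  0  0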

Notes

  • the ZFS-related systemd units
# systemctl | grep zfs
● zfs-import-cache.service                loaded failed failed    Import ZFS pools by cache file                                               
  zfs-load-module.service                 loaded active exited    Install ZFS kernel module                                                    
  zfs-mount.service                       loaded active exited    Mount ZFS filesystems                                                        
  zfs-share.service                       loaded active exited    ZFS file system shares                                                       
  zfs-zed.service                         loaded active running   ZFS Event Daemon (zed)                                                       
  zfs-import.target                       loaded active active    ZFS pool import target                                                       
  zfs.target                              loaded active active    ZFS startup target     
  • after running 'zpool destroy tank' and rebooting the machine, the situation looks like this
root@portaal-to-maildir-1a:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2019-02-16 20:22:36 EET; 1min 8s ago
     Docs: man:zpool(8)
  Process: 572 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
 Main PID: 572 (code=exited, status=1/FAILURE)

Feb 16 20:22:34 portaal-to-maildir-1a systemd[1]: Starting Import ZFS pools by cache file...
Feb 16 20:22:36 portaal-to-maildir-1a zpool[572]: cannot import 'tank': no such pool or dataset
Feb 16 20:22:36 portaal-to-maildir-1a zpool[572]:         Destroy and re-create the pool from
Feb 16 20:22:36 portaal-to-maildir-1a zpool[572]:         a backup source.
Feb 16 20:22:36 portaal-to-maildir-1a systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 20:22:36 portaal-to-maildir-1a systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Feb 16 20:22:36 portaal-to-maildir-1a systemd[1]: Failed to start Import ZFS pools by cache file.
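
One way to clean this up is to remove the stale cache file and reset the failed unit; a sketch, assuming no other pools are listed in the cache:

# rm /etc/zfs/zpool.cache
# systemctl reset-failed zfs-import-cache.service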

ZFS also supports ACLs

TODO
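
A minimal sketch for enabling POSIX ACLs on Linux, after which the usual setfacl and getfacl tools work on files in the dataset:

# zfs set acltype=posixacl tank
# zfs set xattr=sa tank    # store the ACL xattrs on-disk in a more efficient format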

Moving ZFS disks from one computer to another

Points

  • it is worth keeping an eye on the ZFS versions involved
# zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)

...

The following legacy versions are also supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 ...

 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
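
Running zpool upgrade without arguments lists pools whose on-disk format is older than what the installed tools support, and the feature state of a single pool can be inspected directly; a sketch using the pool name tank from above:

# zpool upgrade
# zpool get all tank | grep feature@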

To see which pools the newly attached disks make available for import

# zpool import
   pool: sn_zfs_ssd
     id: 2639366720711406682
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
	the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:

	sn_zfs_ssd  ONLINE
	  sda       ONLINE
	  sdb       ONLINE

Importing the pool

# zpool import -f sn_zfs_ssd
# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
sn_zfs_ssd  1.81T   988G   868G        -         -    23%    53%  1.00x    ONLINE  -
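
The 'last accessed by another system' warning, and thus the need for -f, can be avoided by exporting the pool on the old machine before moving the disks:

# zpool export sn_zfs_ssd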

Mounting at an alternative location

# mkdir /mnt/bpool
# zpool import -R /mnt/bpool bpool

Using checkpoints

TODO
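
A minimal sketch of the commands involved (the feature is available from OpenZFS 0.8 on): a checkpoint records the state of the whole pool, and an import with rewind rolls the pool back to it, discarding everything written since:

# zpool checkpoint tank
# zpool checkpoint -d tank                    # discard the checkpoint again
# zpool export tank
# zpool import --rewind-to-checkpoint tank    # roll the pool back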

Replacing the disks of a RAIDZ1 setup with larger ones

The starting point is

TODO

The target state is

root@pm60-trt:~# zpool status zpool_wdc
  pool: zpool_wdc
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 2.61T in 10:45:50 with 0 errors on Sat Dec 18 22:25:31 2021
config:

	NAME                                  STATE     READ WRITE CKSUM
	zpool_wdc                             ONLINE       0     0     0
	  raidz1-0                            ONLINE       0     0     0
	    ata-WDC_WUS721010ALE6L4_VCHERZ5P  ONLINE      12     0     0
	    ata-WDC_WUS721010ALE6L4_VCHG26ZP  ONLINE       0     0     0
	    ata-WDC_WUS721010ALE6L4_VCHG127P  ONLINE       0     0     0
	    ata-WDC_WUS721010ALE6L4_VCHG0KBP  ONLINE       0     0     0

errors: No known data errors

To get from the starting point to the target state, run (once for each disk being replaced)

# zpool replace zpool_wdc /dev/sdf /dev/disk/by-id/ata-WDC_WUS721010ALE6L4_VCHG0KBP

after which the situation looks like this

root@pm60-trt:~# zpool status zpool_wdc
  pool: zpool_wdc
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 18 11:39:41 2021
	9.92T scanned at 312M/s, 9.02T issued at 284M/s, 10.4T total
	2.26T resilvered, 86.56% done, 01:26:19 to go
config:

	NAME                                    STATE     READ WRITE CKSUM
	zpool_wdc                               ONLINE       0     0     0
	  raidz1-0                              ONLINE       0     0     0
	    ata-WDC_WUS721010ALE6L4_VCHERZ5P    ONLINE      12     0     0  (resilvering)
	    ata-WDC_WUS721010ALE6L4_VCHG26ZP    ONLINE       0     0     0
	    ata-WDC_WUS721010ALE6L4_VCHG127P    ONLINE       0     0     0
	    replacing-3                         ONLINE       0     0     0
	      sdf                               ONLINE       0     0     0
	      ata-WDC_WUS721010ALE6L4_VCHG0KBP  ONLINE       0     0     0  (resilvering)

errors: No known data errors
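
Once all disks have been replaced this way, the read error counter can be cleared and the added capacity taken into use; a sketch (run zpool online -e once for each disk):

# zpool clear zpool_wdc
# zpool set autoexpand=on zpool_wdc
# zpool online -e zpool_wdc ata-WDC_WUS721010ALE6L4_VCHG0KBP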

Misc

Monitoring activity

root@pbs:~# zpool iostat -vyl 1 2
                                            capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool                                      alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
zpool_wdc                                 31.4T  4.94T  5.50K      0   874M      0    1ms      -    1ms      -      -      -      -      -  495us      -
  raidz1-0                                31.4T  4.94T  5.50K      0   874M      0    1ms      -    1ms      -      -      -      -      -  495us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi0      -      -    237      0   219M      0   24ms      -   12ms      -      -      -      -      -    8ms      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2      -      -  1.76K      0   218M      0  794us      -  699us      -      -      -      -      -  144us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi1      -      -  1.75K      0   219M      0  789us      -  699us      -      -      -      -      -  135us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi3      -      -  1.75K      0   218M      0  839us      -  736us      -      -      -      -      -  147us      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
                                            capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool                                      alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
zpool_wdc                                 31.4T  4.94T  4.52K    203   793M  1.50M    2ms    3ms    1ms  894us  512ns  408ns      -    2ms  618us      -
  raidz1-0                                31.4T  4.94T  4.52K    203   793M  1.50M    2ms    3ms    1ms  894us  512ns  408ns      -    2ms  618us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi0      -      -    228     48   199M   391K   23ms    3ms   11ms  842us  384ns  480ns      -    2ms    7ms      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2      -      -  1.50K     44   197M   359K    1ms    3ms  840us    1ms  768ns  384ns      -    1ms  207us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi1      -      -  1.47K     53   199M   395K    1ms    3ms  856us  793us      -  384ns      -    2ms  207us      -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi3      -      -  1.32K     55   198M   391K    1ms    3ms    1ms  849us  384ns  384ns      -    3ms  373us      -

where

  • -v - show statistics for each vdev, not just the pool as a whole
  • -y - skip the first block of output, which contains statistics accumulated since boot
  • -l - additionally show latency statistics
  • first number - reporting interval in seconds
  • second number - how many reports to print

At the same time, the output of the ordinary iostat looks something like this

root@pbs:~# iostat -xy 1
Linux 5.15.74-1-pve (pbs) 	01/09/2023 	_x86_64_	(4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.01    0.00   18.34    0.00    0.00   80.65

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
sda            227.00 230412.00     0.00   0.00   13.18  1015.03    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    2.99 100.00
sdb           1826.00 228952.00     0.00   0.00    0.62   125.38    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.14 100.00
sdc           1441.00 231720.00     0.00   0.00    1.10   160.80    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.58 100.00
sdd           1595.00 230336.00     0.00   0.00    0.87   144.41    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.39 100.00
sr0              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
vda              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
vdb              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
vdc              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
vdd              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
zd0              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00

where

  • the values in the %util column are very high
  • the read speed matches the read speed seen in the previous output, i.e. > 200 MB/s

Useful additional material