[Gluster-users] [EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2

Guillaume Pavese guillaume.pavese at interactiv-group.com
Fri Jun 2 10:54:22 UTC 2023


On oVirt / Red Hat Virtualization, the following Gluster volume settings are
recommended (preferably applied when the volume is created). These settings
are important for data reliability (note that Replica 3 or Replica 2+1 is
expected). A sketch of applying them follows the list.

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on
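
For reference, a minimal sketch of applying these options to an existing
volume with the standard "gluster volume set" CLI. The volume name gfs_vms
below is only an example taken from the thread; substitute your own:

  VOL=gfs_vms
  for opt in \
      performance.quick-read=off performance.read-ahead=off \
      performance.io-cache=off performance.low-prio-threads=32 \
      network.remote-dio=enable cluster.eager-lock=enable \
      cluster.quorum-type=auto cluster.server-quorum-type=server \
      cluster.data-self-heal-algorithm=full cluster.locking-scheme=granular \
      cluster.shd-max-threads=8 cluster.shd-wait-qlength=10000 \
      features.shard=on user.cifs=off cluster.choose-local=off \
      client.event-threads=4 server.event-threads=4 \
      performance.client-io-threads=on
  do
      # split each key=value pair and apply it to the volume
      gluster volume set "$VOL" "${opt%%=*}" "${opt#*=}"
  done

The currently active values can then be checked with
"gluster volume get <VOLNAME> all". On installations that ship the virt
option group (/var/lib/glusterd/groups/virt), a similar set of options can
also be applied in one step with "gluster volume set <VOLNAME> group virt".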




Guillaume Pavese
Systems and Network Engineer
Interactiv-Group


On Fri, Jun 2, 2023 at 5:33 AM W Kern <wkmail at bneit.com> wrote:

> We use qcow2 with libvirt-based KVM on many small clusters and have
> found it to be extremely reliable, though maybe not the fastest; part of
> that is that most of our storage is SATA SSDs in a software RAID1
> config for each brick.
>
> What problems are you running into?
>
> You just mention 'problems'.
>
> -wk
>
> On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> > Hi,
> >
> > we'd like to use glusterfs for Proxmox and virtual machines with qcow2
> > disk images. We have a three-node glusterfs setup with one volume, and
> > Proxmox is attached and VMs are created, but after some time, and I think
> > after much I/O has happened in a VM, the data inside the virtual machine
> > gets corrupted. When I copy files from or to our glusterfs directly,
> > everything is OK; I've checked the files with md5sum (a sketch of the
> > check follows below). So in general our glusterfs setup seems to be OK, I
> > think..., but with the VMs and the self-growing qcow2 images there are
> > problems. If I use raw images for the VMs, tests look better, but I need
> > to do more testing to be sure; the problem is a bit hard to reproduce :-(.
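> >
> > A minimal sketch of that check (the paths are made-up examples;
> > /mnt/gfs_vms stands in for wherever the volume is mounted):
> >
> >   md5sum /root/testfile                 # hash of the source file
> >   cp /root/testfile /mnt/gfs_vms/
> >   md5sum /mnt/gfs_vms/testfile          # should print the identical hash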
> >
> > I've also asked on a Proxmox mailing list, but got no helpful response so
> > far :-(. So maybe you have a hint about what might be wrong with our
> > setup and what needs to be configured to use glusterfs as a storage
> > backend for virtual machines with self-growing disk images. Any helpful
> > tip would be great, because I am absolutely no glusterfs expert, and also
> > no expert on virtualization and what has to be done to make all the
> > components play well together... Thanks for your support!
> >
> > Here is some info about our glusterfs setup; please let me know if you
> > need more. We are using Ubuntu 22.04 as the operating system:
> >
> > root@gluster1:~# gluster --version
> > glusterfs 10.1
> > Repository revision: git://git.gluster.org/glusterfs.git
> > Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
> > GlusterFS comes with ABSOLUTELY NO WARRANTY.
> > It is licensed to you under your choice of the GNU Lesser
> > General Public License, version 3 or any later version (LGPLv3
> > or later), or the GNU General Public License, version 2 (GPLv2),
> > in all cases as published by the Free Software Foundation.
> > root@gluster1:~#
> >
> > root@gluster1:~# gluster v status gfs_vms
> >
> > Status of volume: gfs_vms
> > Gluster process                                     TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick gluster1.linova.de:/glusterfs/sde1enc/brick   58448     0          Y       1062218
> > Brick gluster2.linova.de:/glusterfs/sdc1enc/brick   50254     0          Y       20596
> > Brick gluster3.linova.de:/glusterfs/sdc1enc/brick   52840     0          Y       1627513
> > Brick gluster1.linova.de:/glusterfs/sdf1enc/brick   49832     0          Y       1062227
> > Brick gluster2.linova.de:/glusterfs/sdd1enc/brick   56095     0          Y       20612
> > Brick gluster3.linova.de:/glusterfs/sdd1enc/brick   51252     0          Y       1627521
> > Brick gluster1.linova.de:/glusterfs/sdg1enc/brick   54991     0          Y       1062230
> > Brick gluster2.linova.de:/glusterfs/sde1enc/brick   60812     0          Y       20628
> > Brick gluster3.linova.de:/glusterfs/sde1enc/brick   59254     0          Y       1627522
> > Self-heal Daemon on localhost                       N/A       N/A        Y       1062249
> > Bitrot Daemon on localhost                          N/A       N/A        Y       3591335
> > Scrubber Daemon on localhost                        N/A       N/A        Y       3591346
> > Self-heal Daemon on gluster2.linova.de              N/A       N/A        Y       20645
> > Bitrot Daemon on gluster2.linova.de                 N/A       N/A        Y       987517
> > Scrubber Daemon on gluster2.linova.de               N/A       N/A        Y       987588
> > Self-heal Daemon on gluster3.linova.de              N/A       N/A        Y       1627568
> > Bitrot Daemon on gluster3.linova.de                 N/A       N/A        Y       1627543
> > Scrubber Daemon on gluster3.linova.de               N/A       N/A        Y       1627554
> >
> > Task Status of Volume gfs_vms
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > root@gluster1:~#
> >
> > root@gluster1:~# gluster v status gfs_vms detail
> >
> > Status of volume: gfs_vms
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> > TCP Port             : 58448
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1062218
> > File System          : xfs
> > Device               : /dev/mapper/sde1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.6TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699660
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> > TCP Port             : 50254
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 20596
> > File System          : xfs
> > Device               : /dev/mapper/sdc1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.6TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699660
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> > TCP Port             : 52840
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1627513
> > File System          : xfs
> > Device               : /dev/mapper/sdc1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.6TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699673
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> > TCP Port             : 49832
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1062227
> > File System          : xfs
> > Device               : /dev/mapper/sdf1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.4TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699632
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> > TCP Port             : 56095
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 20612
> > File System          : xfs
> > Device               : /dev/mapper/sdd1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.4TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699632
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> > TCP Port             : 51252
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1627521
> > File System          : xfs
> > Device               : /dev/mapper/sdd1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.4TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699658
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> > TCP Port             : 54991
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1062230
> > File System          : xfs
> > Device               : /dev/mapper/sdg1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.5TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699629
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> > TCP Port             : 60812
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 20628
> > File System          : xfs
> > Device               : /dev/mapper/sde1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.5TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699629
> > ------------------------------------------------------------------------------
> > Brick                : Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> > TCP Port             : 59254
> > RDMA Port            : 0
> > Online               : Y
> > Pid                  : 1627522
> > File System          : xfs
> > Device               : /dev/mapper/sde1enc
> > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size           : 512
> > Disk Space Free      : 3.5TB
> > Total Disk Space     : 3.6TB
> > Inode Count          : 390700096
> > Free Inodes          : 390699652
> >
> > root@gluster1:~#
> >
> > root@gluster1:~# gluster v info gfs_vms
> >
> >
> > Volume Name: gfs_vms
> > Type: Distributed-Replicate
> > Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 3 x 3 = 9
> > Transport-type: tcp
> > Bricks:
> > Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick
> > Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick
> > Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick
> > Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick
> > Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick
> > Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick
> > Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick
> > Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick
> > Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick
> > Options Reconfigured:
> > features.scrub: Active
> > features.bitrot: on
> > cluster.granular-entry-heal: on
> > storage.fips-mode-rchecksum: on
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
> >
> > root@gluster1:~#
> >
> > root@gluster1:~# gluster volume heal gms_vms
> > Launching heal operation to perform index self heal on volume gms_vms has
> > been unsuccessful:
> > Volume gms_vms does not exist
> > root@gluster1:~# gluster volume heal gfs_vms
> > Launching heal operation to perform index self heal on volume gfs_vms has
> > been successful
> > Use heal info commands to check status.
> > root@gluster1:~# gluster volume heal gfs_vms info
> > Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> > Status: Connected
> > Number of entries: 0
> >
> > root@gluster1:~#
> >
> > These are the warnings and errors I've found in the logs on our three
> > servers...
> >
> > * Warnings on gluster1.linova.de:
> >
> > glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 02:22:04.133256 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 02:44:00.046086 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 05:32:00.042698 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 08:18:00.040890 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 11:09:00.020843 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> > glusterd.log:[2023-06-01 13:55:00.319414 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> >
> > * Errors on gluster1.linova.de:
> >
> > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID: 106525] [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management: Volume detail does not exist
> > glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID: 106289] [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Status'
> > glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> >
> > * Warnings on gluster2.linova.de:
> >
> > [2023-05-31 20:26:37.975658 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f4ec1b5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f4ec1c02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> >
> > * Errors on gluster2.linova.de:
> >
> > [2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> >
> > * Warnings on gluster3.linova.de:
> >
> > [2023-05-31 22:26:44.245188 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-05-31 22:58:20.000849 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-06-01 01:26:19.990639 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-06-01 07:09:44.252654 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-06-01 07:36:49.803972 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-06-01 07:42:20.003401 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> > [2023-06-01 08:43:55.561333 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 7a63d6a0-feae-4349-b787-d0fc76b3db3a
> > [2023-06-01 13:07:04.152591 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
> >
> > * Errors on gluster3.linova.de:
> >
> > [2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> > [2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> >
> > Best regards and thanks again for any helpful hint!
> >
> >    Chris
