[Gluster-users] Using glusterfs for virtual machines with qcow2 images

Gilberto Ferreira gilberto.nunes32 at gmail.com
Wed Jun 7 12:48:19 UTC 2023


Hi everybody

Regarding the mount issue: I usually use this systemd service to bring up
the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
TimeoutSec=600
SuccessExitStatus=15
Restart=on-failure
RestartSec=60
StartLimitBurst=6
StartLimitInterval=3600

[Install]
WantedBy=multi-user.target

After creating the unit, remember to reload the systemd daemon and enable
the service:
systemctl daemon-reload
systemctl enable glusterfsmounts.service
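
To verify the mount on the next boot (just a quick sanity check; Gluster
FUSE mounts show up with the fuse.glusterfs filesystem type):
systemctl status glusterfsmounts.service
findmnt -t fuse.glusterfs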

Also, I am using /etc/fstab to mount the GlusterFS mount point properly,
since the Proxmox GUI seems a little broken in this regard:
gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
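
To test the fstab entry without rebooting (mount point as in the line
above), something like this should do:
mount /vms1
df -hT /vms1
df should then report the filesystem type as fuse.glusterfs.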

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Jun 7, 2023 at 01:51 Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:

> Hi Chris,
>
> here is a link to the settings needed for VM storage:
> https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
>
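> For reference, a minimal sketch of how such a settings group is usually
> applied, assuming the virt group file is installed under
> /var/lib/glusterd/groups and using the volume name from the mail below:
>
> gluster volume set gfs_vms group virt
> gluster volume get gfs_vms features.shard
>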
> You can also ask in ovirt-users for real-world settings. Test well before
> changing production!!!
>
> IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein
> <christian.schoepplein at linova.de> wrote:
> Hi,
>
> we'd like to use GlusterFS for Proxmox and virtual machines with qcow2
> disk images. We have a three-node GlusterFS setup with one volume.
> Proxmox is attached and VMs are created, but after some time, and I think
> after a lot of I/O inside a VM, the data inside the virtual machine
> gets corrupted. When I copy files to or from our GlusterFS volume
> directly, everything is OK; I've checked the files with md5sum. So in
> general our GlusterFS setup seems to be fine, I think, but with the VMs
> and the self-growing qcow2 images there are problems. If I use raw images
> for the VMs, tests look better, but I need to do more testing to be sure;
> the problem is a bit hard to reproduce :-(.
>
> I've also asked on a Proxmox mailing list, but got no helpful response so
> far :-(. So maybe you have a hint about what might be wrong with our
> setup, or what needs to be configured to use GlusterFS as a storage
> backend for virtual machines with self-growing disk images. Any helpful
> tip would be great, because I am absolutely no GlusterFS expert and also
> not an expert in virtualization and what has to be done to let all
> components play well together. Thanks for your support!
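>
> A rough sketch of the kind of test I plan to run inside a VM to trigger
> the problem (file name and size are arbitrary):
>
> dd if=/dev/urandom of=testfile bs=1M count=4096
> md5sum testfile > testfile.md5
> sync
> md5sum -c testfile.md5   # re-check after more I/O and a reboot of the VM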
>
> Here is some information about our GlusterFS setup; please let me know if
> you need more. We are using Ubuntu 22.04 as the operating system:
>
> root at gluster1:~# gluster --version
> glusterfs 10.1
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
> root at gluster1:~#
>
> root at gluster1:~# gluster v status gfs_vms
>
> Status of volume: gfs_vms
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> -------------------------------------------------------------------------------------
> Brick gluster1.linova.de:/glusterfs/sde1enc/brick   58448     0          Y       1062218
> Brick gluster2.linova.de:/glusterfs/sdc1enc/brick   50254     0          Y       20596
> Brick gluster3.linova.de:/glusterfs/sdc1enc/brick   52840     0          Y       1627513
> Brick gluster1.linova.de:/glusterfs/sdf1enc/brick   49832     0          Y       1062227
> Brick gluster2.linova.de:/glusterfs/sdd1enc/brick   56095     0          Y       20612
> Brick gluster3.linova.de:/glusterfs/sdd1enc/brick   51252     0          Y       1627521
> Brick gluster1.linova.de:/glusterfs/sdg1enc/brick   54991     0          Y       1062230
> Brick gluster2.linova.de:/glusterfs/sde1enc/brick   60812     0          Y       20628
> Brick gluster3.linova.de:/glusterfs/sde1enc/brick   59254     0          Y       1627522
> Self-heal Daemon on localhost                       N/A       N/A        Y       1062249
> Bitrot Daemon on localhost                          N/A       N/A        Y       3591335
> Scrubber Daemon on localhost                        N/A       N/A        Y       3591346
> Self-heal Daemon on gluster2.linova.de              N/A       N/A        Y       20645
> Bitrot Daemon on gluster2.linova.de                 N/A       N/A        Y       987517
> Scrubber Daemon on gluster2.linova.de               N/A       N/A        Y       987588
> Self-heal Daemon on gluster3.linova.de              N/A       N/A        Y       1627568
> Bitrot Daemon on gluster3.linova.de                 N/A       N/A        Y       1627543
> Scrubber Daemon on gluster3.linova.de               N/A       N/A        Y       1627554
>
> Task Status of Volume gfs_vms
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> root at gluster1:~#
>
> root at gluster1:~# gluster v status gfs_vms detail
>
> Status of volume: gfs_vms
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> TCP Port            : 58448
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1062218
> File System          : xfs
> Device              : /dev/mapper/sde1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.6TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699660
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> TCP Port            : 50254
> RDMA Port            : 0
> Online              : Y
> Pid                  : 20596
> File System          : xfs
> Device              : /dev/mapper/sdc1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.6TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699660
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> TCP Port            : 52840
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1627513
> File System          : xfs
> Device              : /dev/mapper/sdc1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.6TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699673
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> TCP Port            : 49832
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1062227
> File System          : xfs
> Device              : /dev/mapper/sdf1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.4TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699632
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> TCP Port            : 56095
> RDMA Port            : 0
> Online              : Y
> Pid                  : 20612
> File System          : xfs
> Device              : /dev/mapper/sdd1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.4TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699632
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> TCP Port            : 51252
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1627521
> File System          : xfs
> Device              : /dev/mapper/sdd1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.4TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699658
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> TCP Port            : 54991
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1062230
> File System          : xfs
> Device              : /dev/mapper/sdg1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.5TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699629
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> TCP Port            : 60812
> RDMA Port            : 0
> Online              : Y
> Pid                  : 20628
> File System          : xfs
> Device              : /dev/mapper/sde1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.5TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699629
>
> ------------------------------------------------------------------------------
> Brick                : Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> TCP Port            : 59254
> RDMA Port            : 0
> Online              : Y
> Pid                  : 1627522
> File System          : xfs
> Device              : /dev/mapper/sde1enc
> Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size          : 512
> Disk Space Free      : 3.5TB
> Total Disk Space    : 3.6TB
> Inode Count          : 390700096
> Free Inodes          : 390699652
>
> root at gluster1:~#
>
> root at gluster1:~# gluster v info gfs_vms
>
>
> Volume Name: gfs_vms
> Type: Distributed-Replicate
> Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick
> Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick
> Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick
> Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick
> Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick
> Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick
> Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick
> Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick
> Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick
> Options Reconfigured:
> features.scrub: Active
> features.bitrot: on
> cluster.granular-entry-heal: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> root at gluster1:~#
>
> root at gluster1:~# gluster volume heal gms_vms
> Launching heal operation to perform index self heal on volume gms_vms has
> been unsuccessful:
> Volume gms_vms does not exist
> root at gluster1:~# gluster volume heal gfs_vms
> Launching heal operation to perform index self heal on volume gfs_vms has
> been successful
> Use heal info commands to check status.
> root at gluster1:~# gluster volume heal gfs_vms info
> Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> Status: Connected
> Number of entries: 0
>
> root at gluster1:~#
>
> These are the warnings and errors I've found in the logs on our three
> servers...
>
> * Warnings on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032233 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 02:22:04.133256 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 02:44:00.046086 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 05:32:00.042698 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 08:18:00.040890 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 11:09:00.020843 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> glusterd.log:[2023-06-01 13:55:00.319414 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
>
> * Errors on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID: 106525]
> [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management: Volume detail
> does not exist
> glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID: 106289]
> [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed to build
> payload for operation 'Volume Status'
> glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
>
> * Warnings on gluster2.linova.de:
>
> [2023-05-31 20:26:37.975658 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f4ec1b5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f4ec1c02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
>
> * Errors on gluster2.linova.de:
>
> [2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
>
> * Warnings on gluster3.linova.de:
>
> [2023-05-31 22:26:44.245188 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-05-31 22:58:20.000849 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-06-01 01:26:19.990639 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-06-01 07:09:44.252654 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-06-01 07:36:49.803972 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-06-01 07:42:20.003401 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> [2023-06-01 08:43:55.561333 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> 7a63d6a0-feae-4349-b787-d0fc76b3db3a
> [2023-06-01 13:07:04.152591 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f5f8ae02ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
>
> * Errors on gluster3.linova.de:
>
> [2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> [2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
>
> Best regards and thanks again for any helpful hint!
>
>   Chris
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>