[Gluster-users] [EXT] Using glusterfs for virtual machines with qcow2 disk images

Gilberto Ferreira gilberto.nunes32 at gmail.com
Mon Jun 5 15:54:17 UTC 2023


Hi there.
I don't know if you are using a two-node GlusterFS setup, but this is the way I
configure it in that scenario and it works great for me (VMS1 is the Gluster
volume, as you can see); a quick way to verify the options afterwards is
sketched right after the list:

gluster vol heal VMS1 enable
gluster vol set VMS1 network.ping-timeout 2
gluster vol set VMS1 performance.quick-read off
gluster vol set VMS1 performance.read-ahead off
gluster vol set VMS1 performance.io-cache off
gluster vol set VMS1 performance.low-prio-threads 32
gluster vol set VMS1 performance.write-behind off
gluster vol set VMS1 performance.flush-behind off
gluster vol set VMS1 network.remote-dio disable
gluster vol set VMS1 performance.strict-o-direct on
gluster vol set VMS1 cluster.quorum-type fixed
gluster vol set VMS1 cluster.server-quorum-type none
gluster vol set VMS1 cluster.locking-scheme granular
gluster vol set VMS1 cluster.shd-max-threads 8
gluster vol set VMS1 cluster.shd-wait-qlength 10000
gluster vol set VMS1 cluster.data-self-heal-algorithm full
gluster vol set VMS1 cluster.favorite-child-policy mtime
gluster vol set VMS1 cluster.quorum-count 1
gluster vol set VMS1 cluster.quorum-reads false
gluster vol set VMS1 cluster.self-heal-daemon enable
gluster vol set VMS1 cluster.heal-timeout 5
gluster vol heal VMS1 granular-entry-heal enable
gluster vol set VMS1 features.shard on
gluster vol set VMS1 user.cifs off
gluster vol set VMS1 cluster.choose-local off
gluster vol set VMS1 client.event-threads 4
gluster vol set VMS1 server.event-threads 4
gluster vol set VMS1 performance.client-io-threads on
gluster vol set VMS1 network.ping-timeout 20
gluster vol set VMS1 server.tcp-user-timeout 20
gluster vol set VMS1 server.keepalive-time 10
gluster vol set VMS1 server.keepalive-interval 2
gluster vol set VMS1 server.keepalive-count 5
gluster vol set VMS1 cluster.lookup-optimize off
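
In case it's useful, here is a minimal sketch of how such a batch of options can
be applied and read back in one go (assuming bash; the three options below are
just placeholders taken from the list above, and gluster volume get prints the
value that is actually in effect):

# Sketch only: apply a batch of volume options and read each one back.
VOL=VMS1                     # adjust to your own volume name
declare -A opts=(
  [performance.quick-read]=off
  [features.shard]=on
  [network.ping-timeout]=20
)
for key in "${!opts[@]}"; do
    gluster volume set "$VOL" "$key" "${opts[$key]}"
    gluster volume get "$VOL" "$key"   # confirm the value was accepted
done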

I created the replica 2 volume like this:
gluster vol create VMS1 replica 2 gluster1:/mnt/pve/dataglusterfs/vms/ gluster2:/mnt/pve/dataglusterfs/vms/
And to avoid split-brain I enabled those options above.
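
It is also worth checking from time to time that no file ended up in split-brain
anyway (standard gluster command, only the volume name from above is assumed):

gluster volume heal VMS1 info split-brain
# each brick should ideally report: Number of entries in split-brain: 0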
Then I created a mount point folder:
mkdir /vms1
After that I edited /etc/fstab. On the first node:
gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
On the second node:
gluster2:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster1 0 0
gluster1 and gluster2 point to a dedicated 10G NIC and are listed in /etc/hosts like:
172.16.20.10 gluster1
172.16.20.20 gluster2

Then on both nodes I run:
mount /vms1
and now everything is OK.
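
Before putting VMs on it, I like to double-check that the FUSE mount is really
active on both nodes (plain standard commands, nothing beyond the names used
above is assumed):

df -hT /vms1                 # filesystem type should be fuse.glusterfs
mount | grep /vms1           # shows the active mount and its options
gluster volume status VMS1   # every brick should show Online: Y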
As I am using Proxmox VE here, I just create a storage entry in the Proxmox
/etc/pve/storage.cfg file like:
dir: STG-VMS-1
        path /vms1
        content rootdir,images
        preallocation metadata
        prune-backups keep-all=1
        shared 1
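
Once the entry is there, Proxmox should pick the storage up right away; a quick
sanity check from the shell (standard pvesm tool, STG-VMS-1 is simply the name
used above):

pvesm status                 # the new storage should be listed as active
pvesm list STG-VMS-1         # lists the disk images stored on it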

And I am ready to fly!

Hope this can help you in any way!

Cheers




---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Mon, Jun 5, 2023 at 12:20, Christian Schoepplein <christian.schoepplein at linova.de> wrote:

> Hi Gilberto, hi all,
>
> thanks a lot for all your answers.
>
> At first I changed both settings mentioned below and the first tests look good.
>
> Before changing the settings I was able to crash a newly installed VM every
> time after a fresh installation by producing a lot of I/O, e.g. when installing
> LibreOffice. This always resulted in corrupt files inside the VM, but
> examining the qcow2 file with the qemu-img tool showed no errors for the file.
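>
> (For reference, the check I mean is just the standard one; the image path here
> is only a placeholder for one of our VM disks:
>
> qemu-img check /path/to/vm-disk.qcow2
>
> A clean image should report no errors.)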
>
> I'll do further testing and will run more VMs on the volume during the next
> few days; let's see how things go and whether further tweaking of the volume
> is necessary.
>
> Cheers,
>
>   Chris
>
>
> On Fri, Jun 02, 2023 at 09:05:28AM -0300, Gilberto Ferreira wrote:
> >Try turning off these options:
> >performance.write-behind
> >performance.flush-behind
> >
> >---
> >Gilberto Nunes Ferreira
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >
> >
> >
> >
> >
> >On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <guillaume.pavese at interactiv-group.com> wrote:
> >
> >    On oVirt / Red Hat Virtualization, the following Gluster volume settings
> >    are recommended to be applied (preferably at the creation of the volume).
> >    These settings are important for data reliability (note that Replica 3 or
> >    Replica 2+1 is expected); a sketch of applying such a list in one go
> >    follows below:
> >
> >    performance.quick-read=off
> >    performance.read-ahead=off
> >    performance.io-cache=off
> >    performance.low-prio-threads=32
> >    network.remote-dio=enable
> >    cluster.eager-lock=enable
> >    cluster.quorum-type=auto
> >    cluster.server-quorum-type=server
> >    cluster.data-self-heal-algorithm=full
> >    cluster.locking-scheme=granular
> >    cluster.shd-max-threads=8
> >    cluster.shd-wait-qlength=10000
> >    features.shard=on
> >    user.cifs=off
> >    cluster.choose-local=off
> >    client.event-threads=4
> >    server.event-threads=4
> >    performance.client-io-threads=on
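> >
> >    In case it helps, a minimal sketch of batch-applying such key=value pairs
> >    with the gluster CLI (assuming bash; the volume name myvol is a placeholder
> >    and only a few of the options above are repeated as examples):
> >
> >    VOL=myvol    # placeholder: substitute your volume name
> >    for opt in performance.quick-read=off network.remote-dio=enable features.shard=on; do
> >        # split each pair into option name and value, then apply it
> >        gluster volume set "$VOL" "${opt%%=*}" "${opt#*=}"
> >    done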
> >
> >
> >
> >
> >    Guillaume Pavese
> >    Ingénieur Système et Réseau
> >    Interactiv-Group
> >
> >
> >    On Fri, Jun 2, 2023 at 5:33 AM W Kern <wkmail at bneit.com> wrote:
> >
> >        We use qcow2 with libvirt-based KVM on many small clusters and have
> >        found it to be extremely reliable, though maybe not the fastest; part of
> >        that is that most of our storage is SATA SSDs in a software RAID1
> >        config for each brick.
> >
> >        What problems are you running into?
> >
> >        You just mention 'problems'
> >
> >        -wk
> >
> >        On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> >        > Hi,
> >        >
> >        > we'd like to use glusterfs for Proxmox and virtual machines with qcow2
> >        > disk images. We have a three-node glusterfs setup with one volume;
> >        > Proxmox is attached and VMs are created, but after some time, and I
> >        > think after a lot of I/O is going on for a VM, the data inside the
> >        > virtual machine gets corrupted. When I copy files from or to our
> >        > glusterfs directly everything is OK, I've checked the files with
> >        > md5sum. So in general our glusterfs setup seems to be OK I think...,
> >        > but with the VMs and the self-growing qcow2 images there are problems.
> >        > If I use raw images for the VMs the tests look better, but I need to do
> >        > more testing to be sure, the problem is a bit hard to reproduce :-(.
> >        >
> >        > I've also asked on a Proxmox mailing list, but got no helpful response
> >        > so far :-(. So maybe you have a helpful hint on what might be wrong with
> >        > our setup and what needs to be configured to use glusterfs as a storage
> >        > backend for virtual machines with self-growing disk images. Any helpful
> >        > tip would be great, because I am absolutely no glusterfs expert and also
> >        > not an expert in virtualization and what has to be done to let all
> >        > components play well together... Thanks for your support!
> >        >
> >        > Here is some info about our glusterfs setup, please let me know if you
> >        > need more info. We are using Ubuntu 22.04 as the operating system:
> >        >
> >        > root at gluster1:~# gluster --version
> >        > glusterfs 10.1
> >        > Repository revision: git://git.gluster.org/glusterfs.git
> >        > Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
> >        > GlusterFS comes with ABSOLUTELY NO WARRANTY.
> >        > It is licensed to you under your choice of the GNU Lesser
> >        > General Public License, version 3 or any later version (LGPLv3
> >        > or later), or the GNU General Public License, version 2 (GPLv2),
> >        > in all cases as published by the Free Software Foundation.
> >        > root at gluster1:~#
> >        >
> >        > root at gluster1:~# gluster v status gfs_vms
> >        >
> >        > Status of volume: gfs_vms
> >        > Gluster process                                    TCP Port  RDMA Port  Online  Pid
> >        > ------------------------------------------------------------------------------
> >        > Brick gluster1.linova.de:/glusterfs/sde1enc/brick  58448     0          Y       1062218
> >        > Brick gluster2.linova.de:/glusterfs/sdc1enc/brick  50254     0          Y       20596
> >        > Brick gluster3.linova.de:/glusterfs/sdc1enc/brick  52840     0          Y       1627513
> >        > Brick gluster1.linova.de:/glusterfs/sdf1enc/brick  49832     0          Y       1062227
> >        > Brick gluster2.linova.de:/glusterfs/sdd1enc/brick  56095     0          Y       20612
> >        > Brick gluster3.linova.de:/glusterfs/sdd1enc/brick  51252     0          Y       1627521
> >        > Brick gluster1.linova.de:/glusterfs/sdg1enc/brick  54991     0          Y       1062230
> >        > Brick gluster2.linova.de:/glusterfs/sde1enc/brick  60812     0          Y       20628
> >        > Brick gluster3.linova.de:/glusterfs/sde1enc/brick  59254     0          Y       1627522
> >        > Self-heal Daemon on localhost                      N/A       N/A        Y       1062249
> >        > Bitrot Daemon on localhost                         N/A       N/A        Y       3591335
> >        > Scrubber Daemon on localhost                       N/A       N/A        Y       3591346
> >        > Self-heal Daemon on gluster2.linova.de             N/A       N/A        Y       20645
> >        > Bitrot Daemon on gluster2.linova.de                N/A       N/A        Y       987517
> >        > Scrubber Daemon on gluster2.linova.de              N/A       N/A        Y       987588
> >        > Self-heal Daemon on gluster3.linova.de             N/A       N/A        Y       1627568
> >        > Bitrot Daemon on gluster3.linova.de                N/A       N/A        Y       1627543
> >        > Scrubber Daemon on gluster3.linova.de              N/A       N/A        Y       1627554
> >        >
> >        > Task Status of Volume gfs_vms
> >        > ------------------------------------------------------------------------------
> >        > There are no active volume tasks
> >        >
> >        > root at gluster1:~#
> >        >
> >        > root at gluster1:~# gluster v status gfs_vms detail
> >        >
> >        > Status of volume: gfs_vms
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> >        > TCP Port             : 58448
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1062218
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sde1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.6TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699660
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> >        > TCP Port             : 50254
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 20596
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdc1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.6TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699660
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> >        > TCP Port             : 52840
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1627513
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdc1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.6TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699673
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> >        > TCP Port             : 49832
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1062227
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdf1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.4TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699632
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> >        > TCP Port             : 56095
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 20612
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdd1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.4TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699632
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> >        > TCP Port             : 51252
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1627521
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdd1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.4TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699658
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> >        > TCP Port             : 54991
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1062230
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sdg1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.5TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699629
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> >        > TCP Port             : 60812
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 20628
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sde1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.5TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699629
> >        >
> >        > ------------------------------------------------------------------------------
> >        > Brick                : Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> >        > TCP Port             : 59254
> >        > RDMA Port            : 0
> >        > Online               : Y
> >        > Pid                  : 1627522
> >        > File System          : xfs
> >        > Device               : /dev/mapper/sde1enc
> >        > Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> >        > Inode Size           : 512
> >        > Disk Space Free      : 3.5TB
> >        > Total Disk Space     : 3.6TB
> >        > Inode Count          : 390700096
> >        > Free Inodes          : 390699652
> >        >
> >        > root at gluster1:~#
> >        >
> >        > root at gluster1:~# gluster v info gfs_vms
> >        >
> >        >
> >        > Volume Name: gfs_vms
> >        > Type: Distributed-Replicate
> >        > Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05
> >        > Status: Started
> >        > Snapshot Count: 0
> >        > Number of Bricks: 3 x 3 = 9
> >        > Transport-type: tcp
> >        > Bricks:
> >        > Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick
> >        > Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick
> >        > Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick
> >        > Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick
> >        > Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick
> >        > Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick
> >        > Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick
> >        > Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick
> >        > Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick
> >        > Options Reconfigured:
> >        > features.scrub: Active
> >        > features.bitrot: on
> >        > cluster.granular-entry-heal: on
> >        > storage.fips-mode-rchecksum: on
> >        > transport.address-family: inet
> >        > nfs.disable: on
> >        > performance.client-io-threads: off
> >        >
> >        > root at gluster1:~#
> >        >
> >        > root at gluster1:~# gluster volume heal gms_vms
> >        > Launching heal operation to perform index self heal on volume gms_vms
> >        > has been unsuccessful:
> >        > Volume gms_vms does not exist
> >        > root at gluster1:~# gluster volume heal gfs_vms
> >        > Launching heal operation to perform index self heal on volume gfs_vms
> >        > has been successful
> >        > Use heal info commands to check status.
> >        > root at gluster1:~# gluster volume heal gfs_vms info
> >        > Brick gluster1.linova.de:/glusterfs/sde1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster2.linova.de:/glusterfs/sde1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > Brick gluster3.linova.de:/glusterfs/sde1enc/brick
> >        > Status: Connected
> >        > Number of entries: 0
> >        >
> >        > root at gluster1:~#
> >        >
> >        > These are the warnings and errors I've found in the logs on our three
> >        > servers...
> >        >
> >        > * Warnings on gluster1.linova.de:
> >        >
> >        > glusterd.log:[2023-05-31 23:56:00.032233 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 02:22:04.133256 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 02:44:00.046086 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 05:32:00.042698 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 08:18:00.040890 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 11:09:00.020843 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        > glusterd.log:[2023-06-01 13:55:00.319414 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        >
> >        > * Errors on gluster1.linova.de:
> >        >
> >        > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID:
> 106525]
> >        [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management:
> Volume
> >        detail does not exist
> >        > glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID:
> 106289]
> >        [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed
> to
> >        build payload for operation 'Volume Status'
> >        > glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID:
> 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        >
> >        > * Warnings on gluster2.linova.de:
> >        >
> >        > [2023-05-31 20:26:37.975658 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f4ec1b5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f4ec1c02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        >
> >        > * Errors on gluster2.linova.de:
> >        >
> >        > [2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        >
> >        > * Warnings on gluster3.linova.de:
> >        >
> >        > [2023-05-31 22:26:44.245188 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-05-31 22:58:20.000849 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-06-01 01:26:19.990639 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-06-01 07:09:44.252654 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-06-01 07:36:49.803972 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-06-01 07:42:20.003401 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        4b0a8298-9284-4a24-8de0-f5c25aafb5c7
> >        > [2023-06-01 08:43:55.561333 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        7a63d6a0-feae-4349-b787-d0fc76b3db3a
> >        > [2023-06-01 13:07:04.152591 +0000] W
> >        [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> >        [0x7f5f8ad5bedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/
> >        mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/
> >        x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> >        [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by
> >        a410159b-12db-4cf7-bad5-c5c817679d1b
> >        >
> >        > * Errors on gluster3.linova.de:
> >        >
> >        > [2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        > [2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118]
> >        [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable
> to
> >        acquire lock for gfs_vms
> >        >
> >        > Best regards and thanks again for any helpful hint!
> >        >
> >        >    Chris
>
>
> --
> Christian Schöpplein
> ------------------------------------------------------------
> IT and Operations
>
> Linova Software GmbH         Phone: +49 (0)89 4524668-39
> Ungererstraße 129            Fax:   +49 (0)89 4524668-99
> 80805 München
> http://www.linova.de  Email: christian.schoepplein at linova.de
> ------------------------------------------------------------
> Geschäftsführer:
> Dr. Andreas Löhr, Tobias Weishäupl
> Registergericht:
> Amtsgericht München, HRB 172890
> USt-IdNr.: DE259281353
>
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>