[Gluster-users] Please remove me from mailing list

Saju Mohammed Noohu sajmoham at redhat.com
Tue Oct 6 06:36:43 UTC 2020


I have sent the unsubscribe request; please click on the link received in
your email and confirm it.
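
If the confirmation mail does not arrive, unsubscribing should also be
possible through the standard Mailman interfaces (a sketch, assuming the
usual Mailman setup behind this list; the exact URL is an assumption on my
part):

https://lists.gluster.org/mailman/listinfo/gluster-users

or by sending a message with the subject "unsubscribe" to the list's
-request address.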

Thanks
Saju


On Tue, Oct 6, 2020 at 3:00 AM Tami Greene <tmgreene364 at gmail.com> wrote:

> I have attempted to remove myself, but I continue to receive an "illegal
> email address" reply with each attempt. Thank you.
>
> On Mon, Oct 5, 2020, 4:37 PM Felix Kölzow <felix.koelzow at gmx.de> wrote:
>
> > Dear Matthew,
> >
> >
> > From my current experience with gluster geo-replication, and since this
> > is a key piece of your backup procedure (or so it seems to me), I would
> > recreate the zpool from scratch, just to be sure.
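> >
> > If you do recreate it, both properties can be set at pool creation time
> > so nothing is forgotten - a rough sketch (the device names are
> > placeholders):
> >
> > zpool destroy pcic-backup01-zpool
> > zpool create -O xattr=sa -O acltype=posixacl pcic-backup01-zpool <disks>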
> >
> >
> > Regards,
> >
> > Felix
> >
> >
> > On 05/10/2020 22:28, Matthew Benstead wrote:
> >
> > Hmm... Looks like I forgot to set the xattr property to sa - I left it
> > at the default.
> >
> > [root at pcic-backup01 ~]# zfs get xattr pcic-backup01-zpool
> > NAME                 PROPERTY  VALUE  SOURCE
> > pcic-backup01-zpool  xattr     on     default
> >
> > [root at pcic-backup02 ~]# zfs get xattr pcic-backup02-zpool
> > NAME                 PROPERTY  VALUE  SOURCE
> > pcic-backup02-zpool  xattr     on     default
> >
> > I wonder if I can change them and continue, or if I need to blow away the
> > zpool and start over?
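> >
> > (A minimal sketch of what changing it in place would look like - though
> > my understanding is that xattr=sa only applies to xattrs written
> > afterwards, so existing files would keep their dir-based xattrs until
> > they are rewritten:)
> >
> > [root at pcic-backup01 ~]# zfs set xattr=sa pcic-backup01-zpool
> > [root at pcic-backup01 ~]# zfs get xattr pcic-backup01-zpool
> > NAME                 PROPERTY  VALUE  SOURCE
> > pcic-backup01-zpool  xattr     sa     local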
> >
> > Thanks,
> >  -Matthew
> >
> > --
> > Matthew Benstead
> > System Administrator
> > Pacific Climate Impacts Consortium <https://pacificclimate.org/>
> > University of Victoria, UH1
> > PO Box 1800, STN CSC
> > Victoria, BC, V8W 2Y2
> > Phone: +1-250-721-8432
> > Email: matthewb at uvic.ca
> > On 10/5/20 12:53 PM, Felix Kölzow wrote:
> >
> > Dear Matthew,
> >
> > this is our configuration:
> >
> > zfs get all mypool
> >
> > NAME    PROPERTY  VALUE     SOURCE
> > mypool  xattr     sa        local
> > mypool  acltype   posixacl  local
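> >
> > For reference, a sketch of how those two properties are set (standard
> > zfs syntax; "mypool" stands in for the real pool name):
> >
> > zfs set xattr=sa mypool
> > zfs set acltype=posixacl mypool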
> >
> >
> > Is there something more to consider?
> >
> >
> > Regards,
> >
> > Felix
> >
> >
> >
> > On 05/10/2020 21:11, Matthew Benstead wrote:
> >
> > Thanks Felix - looking through some more of the logs I may have found the
> > reason...
> >
> > From
> > /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/mnt-data-storage_a-storage.log
> >
> > [2020-10-05 18:13:35.736838] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:18:53.885591] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:22:14.405234] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:25:53.971679] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:31:44.571557] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:36:36.508772] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:40:10.401055] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:42:57.833536] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:45:19.691953] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:48:26.478532] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> > [2020-10-05 18:52:24.466914] E [fuse-bridge.c:4288:fuse_xattr_cbk]
> > 0-glusterfs-fuse: extended attribute not supported by the backend storage
> >
> >
> > The slave nodes are running gluster on top of ZFS, but I had configured
> > ACLs - is there something else missing to make this work with ZFS?
> >
> > [root at pcic-backup01 ~]# gluster volume info
> >
> > Volume Name: pcic-backup
> > Type: Distribute
> > Volume ID: 7af8a424-f4b6-4405-bba1-0dbafb0fa231
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.0.231.81:/pcic-backup01-zpool/brick
> > Brick2: 10.0.231.82:/pcic-backup02-zpool/brick
> > Options Reconfigured:
> > network.ping-timeout: 10
> > performance.cache-size: 256MB
> > server.event-threads: 4
> > client.event-threads: 4
> > cluster.lookup-optimize: on
> > performance.parallel-readdir: on
> > performance.readdir-ahead: on
> > features.quota-deem-statfs: on
> > features.inode-quota: on
> > features.quota: on
> > transport.address-family: inet
> > nfs.disable: on
> > features.read-only: off
> > performance.open-behind: off
> >
> >
> > [root at pcic-backup01 ~]# zfs get acltype pcic-backup01-zpool
> > NAME                 PROPERTY  VALUE     SOURCE
> > pcic-backup01-zpool  acltype   posixacl  local
> >
> > [root at pcic-backup01 ~]# grep "pcic-backup0" /proc/mounts
> > pcic-backup01-zpool /pcic-backup01-zpool zfs rw,seclabel,xattr,posixacl 0 0
> >
> >
> > [root at pcic-backup02 ~]# zfs get acltype pcic-backup02-zpool
> > NAME                 PROPERTY  VALUE     SOURCE
> > pcic-backup02-zpool  acltype   posixacl  local
> >
> > [root at pcic-backup02 ~]# grep "pcic-backup0" /proc/mounts
> > pcic-backup02-zpool /pcic-backup02-zpool zfs rw,seclabel,xattr,posixacl 0 0
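> >
> > One quick way to check whether the bricks accept trusted-namespace
> > xattrs directly would be something like this (a sketch, assuming the
> > attr tools are installed; the xattr name is made up for the test):
> >
> > [root at pcic-backup01 ~]# setfattr -n trusted.glusterfs.test -v works /pcic-backup01-zpool/brick
> > [root at pcic-backup01 ~]# getfattr -n trusted.glusterfs.test /pcic-backup01-zpool/brick
> > [root at pcic-backup01 ~]# setfattr -x trusted.glusterfs.test /pcic-backup01-zpool/brick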
> >
> > Thanks,
> >  -Matthew
> >
> >
> > --
> > Matthew Benstead
> > System Administrator
> > Pacific Climate Impacts Consortium <https://pacificclimate.org/>
> > University of Victoria, UH1
> > PO Box 1800, STN CSC
> > Victoria, BC, V8W 2Y2
> > Phone: +1-250-721-8432
> > Email: matthewb at uvic.ca
> > On 10/5/20 1:39 AM, Felix Kölzow wrote:
> >
> > Dear Matthew,
> >
> >
> > Can you provide more information regarding the geo-replication brick
> > logs?
> >
> > These files are also located in:
> >
> > /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/
> >
> >
> > Usually, these log files are more helpful for figuring out the root
> > cause of the error.
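> >
> > A rough sketch of how I would scan them (path as above; the pattern
> > just matches gluster's error/warning markers):
> >
> > cd /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/
> > grep -E "\] (E|W) \[" *.log | tail -n 50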
> >
> > Additionally, it is also worth looking at the log files on the slave side.