[Gluster-users] rsync to gluster mount: self-heal and bad performance

Tiemen Ruiten t.ruiten at rdmedia.com
Fri Nov 13 08:56:30 UTC 2015


Hello Ernie, list,

No, that's not the case. The volume is mounted through the GlusterFS FUSE
client, on the same server that runs one of the bricks. The fstab:

# /etc/fstab
# Created by anaconda on Tue Aug 18 18:10:49 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=56778fed-bf3f-435e-8c32-edaa8c707f29 /      xfs  defaults 0 0
UUID=a44e32ed-cfbe-4ba0-896f-1efff9397ba1 /boot  xfs  defaults 0 0
UUID=a344d2bc-266d-4905-85b1-fbb7fe927659 swap   swap defaults 0 0
/dev/vdb1  /data/brick  xfs  defaults 1 2
iron2:/lpxassets  /mnt/lpxassets glusterfs _netdev,acl 0 0
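
For reference, the equivalent manual mount, plus a quick check that the
client really is FUSE (a minimal sketch; options taken from the fstab above,
output abbreviated):

  mount -t glusterfs -o acl iron2:/lpxassets /mnt/lpxassets
  mount | grep lpxassets
  # iron2:/lpxassets on /mnt/lpxassets type fuse.glusterfs (rw,...,acl)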




On 12 November 2015 at 22:50, Ernie Dunbar <maillist at lightspeed.ca> wrote:

> Hi Tiemen
>
> It sounds like you're rsyncing files directly onto your Gluster server's
> brick, rather than into the Gluster filesystem. You want to copy these
> files into the mounted volume (typically mounted on some other system than
> the Gluster servers), because Gluster is designed to handle writes through
> the client mount.
>
> I can't remember the nitty-gritty details of why this is, but I've made
> this mistake before as well. Hope that helps. :)
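
(For the archives: the distinction Ernie describes is writing to the brick
directory versus writing through the client mount. A minimal sketch using
the paths from my fstab above; /srv/source is a made-up example path:

  rsync -av /srv/source/ /data/brick/       # wrong: bypasses Gluster, writes straight to the brick
  rsync -av /srv/source/ /mnt/lpxassets/    # right: goes through the FUSE client

In my case the rsync target is the FUSE mount, as noted above.)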
>
>
> On 2015-11-12 11:31, Tiemen Ruiten wrote:
>
>> Hello,
>>
>> While rsyncing to a directory mounted through the GlusterFS FUSE client,
>> performance is very bad and it appears every synced file generates a
>> (metadata) self-heal.
>>
>> The volume is mounted with the acl option and ACLs are set on a
>> subdirectory.
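
(The ACLs were applied with setfacl on the FUSE mount, roughly along these
lines; the directory name and ACL entries below are illustrative, not the
exact ones:

  setfacl -m u:webuser:rwx -m d:u:webuser:rwx /mnt/lpxassets/somedir
  getfacl /mnt/lpxassets/somedir
)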
>>
>> Setup is as follows:
>>
>> Two CentOS 7 VMs (KVM) with Gluster 3.7.6, and one physical CentOS 6
>> node, also running Gluster 3.7.6. The physical node functions as the
>> arbiter, so it's a replica 3 arbiter 1 volume. The bricks are LVM
>> logical volumes with an XFS filesystem.
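
(A replica 3 arbiter 1 volume of this shape would be created roughly like
this; the hostnames and brick paths below are placeholders, not our real
ones:

  gluster volume create lpxassets replica 3 arbiter 1 \
      vm1:/data/brick/lpx vm2:/data/brick/lpx arbiter1:/data/brick/lpx
)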
>>
>> While I don't think I should expect top performance for rsync on
>> Gluster, I wouldn't expect every file synced to trigger a self-heal.
>> Anything I can do to improve this? Should I file a bug?
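
(For reference, pending and in-progress self-heals can be inspected per
volume with the standard heal commands, which is how the per-file heals
show up here:

  gluster volume heal lpxassets info
  gluster volume heal lpxassets statistics heal-count
)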
>>
>> Another thing that looks related: I see a lot of these messages,
>> especially when doing I/O:
>>
>> [2015-11-12 19:25:42.185904] I [dict.c:473:dict_get]
>> (-->/usr/lib64/glusterfs/3.7.6/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x121)
>> [0x7fdcc2d31161]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x242)
>> [0x7fdcc2b1b212] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>> [0x7fdcd5e770cc] ) 0-dict: !this || key=system.posix_acl_default
>> [Invalid argument]
>>
>> --
>>
>> Tiemen Ruiten
>> Systems Engineer
>> R&D Media



-- 
Tiemen Ruiten
Systems Engineer
R&D Media