[Gluster-users] Very slow ls
Florian Oppermann
gluster-users at flopwelt.de
Tue Aug 4 18:22:47 UTC 2015
Small update: stopping and restarting the volume made it accessible
again with acceptable performance. It is still significantly slower than
the local hard drive when it comes to metadata access: find . | wc -l
took almost 2 minutes for 36k files at only 2% CPU time. But that is okay.
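For reference, the measurement was roughly the following, run on the FUSE
mount (the mount path is only a placeholder):

    cd /mnt/<volname>        # FUSE mount point of the volume
    time find . | wc -l      # ~36k entries, almost 2 minutes wall time, ~2% CPU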
Nevertheless, restarting the volume did not change anything about the
warnings in the brick logs (they still appear every 3 seconds):
> [2015-08-04 18:14:08.380932] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/107779775a9751545b4bf351f04ecf1c.socket failed (Invalid argument)
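What I plan to check next is which daemon that stale socket belongs to,
along these lines (volume name is a placeholder; lsof may need to be
installed on the bricks first):

    gluster volume status <volname>   # which brick/NFS/self-heal daemons are actually online
    sudo lsof /var/run/107779775a9751545b4bf351f04ecf1c.socket   # does any process still hold this socket?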
While the above find command was running, the <volname>.log on the
mounting machine showed many lines like
> [2015-08-04 18:05:53.601241] I [afr-self-heal-entry.c:561:afr_selfheal_entry_do] 0-itpscr-replicate-0: performing entry selfheal on 6b1d153f-d617-45db-a9cc-90d54dd67c45
> [2015-08-04 18:05:53.684695] I [afr-self-heal-common.c:476:afr_log_selfheal] 0-itpscr-replicate-0: Completed entry selfheal on 6b1d153f-d617-45db-a9cc-90d54dd67c45. source=1 sinks=0
gluster volume heal <volname> info reports "Number of entries: 0" for all
bricks, which I assume is good.
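If it helps, I could also run the split-brain variant of that check (a
possible next step, not something I have done yet):

    gluster volume heal <volname> info split-brain   # would list any entries in split-brain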
Any hints regarding the error messages?
Best regards,
Florian
On 03.08.2015 23:29, Florian Oppermann wrote:
>> If starting setup right now, you should start with current version (3.7.X)
>
> Is 3.7 stable? I have 60+ potential users and don't want to risk too
> much. ;-)
>
>> Filesystem
>
> XFS partitions on all bricks
>
>> network type (lan, VM...)
>
> Gigabit LAN
>
>> where is client (same lan?)
>
> Yep
>
>> MTU
>
> 1500
>
>> storage (raid, # of disks...)
>
> The bricks are all on separate servers. On each there is an XFS partition
> on a single HDD (together with other partitions for the system etc.). All
> in all there are currently seven machines involved.
>
> I just noticed that on all servers the
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log is full of messages like
>
>> [2015-08-03 21:24:59.879820] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/a91fc43b47272ffaace2a6989e7b5e85.socket failed (Invalid argument)
>
> I assume this to be part of the problem…
>
> Regards :-)
> Florian
>
> On 03.08.2015 22:41, Mathieu Chateau wrote:
>> Hello,
>>
>> If starting setup right now, you should start with current version (3.7.X)
>>
>> We need more data/context, as you were able to feed 150 GB before
>> running into issues.
>>
>> Info:
>> Filesystem
>> network type (lan, VM...)
>> where is client (same lan?)
>> MTU
>> storage (raid, # of disks...)
>>
>> Best regards,
>> Mathieu CHATEAU
>> http://www.lotp.fr
>>
>> 2015-08-03 21:44 GMT+02:00 Florian Oppermann <gluster-users at flopwelt.de>:
>>
>> Dear Gluster users,
>>
>> after setting up a distributed replicated volume (3x2 bricks) on gluster
>> 3.6.4 on Ubuntu systems and populating it with some data (about 150 GB
>> in 20k files), I experience extreme delays when navigating through
>> directories or trying to ls their contents (actually the process now
>> seems to hang completely until I kill the /usr/sbin/glusterfs process on
>> the mounting machine).
>>
>> Is there some common misconfiguration or any performance tuning option
>> that I could try?
>>
>> I mount via automount with the fstype=glusterfs option (i.e. using the
>> native FUSE mount).
>>
>> Any tips?
>>
>> Best regards,
>> Florian Oppermann
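
PS: In case the mount setup matters, the automount configuration is
essentially an indirect map along these lines (paths and server name are
examples, not the real ones):

    # /etc/auto.master
    /mnt/gluster   /etc/auto.gluster

    # /etc/auto.gluster
    <volname>   -fstype=glusterfs   server1:/<volname>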