[Gluster-users] Very slow ls
Florian Oppermann
gluster-users at flopwelt.de
Tue Aug 4 06:32:29 UTC 2015
> As you are in replicate mode, all writes will be sent synchronously to all bricks, and in your case to a single HDD.
I thought that every file would be sent to 2 bricks synchronously, but
that several files would be distributed across the three pairs of
bricks, so performance should improve as more bricks are added (note
that the 3×2 layout is not final but only a test setup; more bricks
will be added when going to production).
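
For illustration, a 3×2 distributed-replicated volume like my test
setup is created roughly like this (hostnames, volume name and brick
paths are placeholders; consecutive bricks form one replica pair):

    # bricks 1+2, 3+4 and 5+6 each form a replica pair;
    # files are then distributed across the three pairs
    gluster volume create testvol replica 2 \
        server1:/data/brick server2:/data/brick \
        server3:/data/brick server4:/data/brick \
        server5:/data/brick server6:/data/brick
    gluster volume start testvol
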
> For sure I wouldn't go for 60+ users with this setup, except maybe if these HDDs are SSDs.
What would be a suitable setup? Or: which use cases are typical for
Gluster deployments? Maybe I misunderstood what Gluster is aimed at.
Best regards
Florian
On 04.08.2015 07:25, Mathieu Chateau wrote:
> Hello,
>
> As you are in replicate mode, all writes will be sent synchronously to
> all bricks, and in your case to a single HDD.
>
> Writes: you are going to get the same performance as a single HDD at
> best (in practice you will get less).
> Reads: all bricks will be queried for metadata, and one will send the
> file (if I am correct).
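>
> (One way to see which replica pair actually holds a given file is to
> query the pathinfo xattr on the FUSE mount; the path is just an
> example:
>
>     getfattr -n trusted.glusterfs.pathinfo /mnt/glusterfs/somefile
>
> It prints the backend bricks where the file is stored.)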
>
> For sure I wouldn't go for 60+ users with this setup, except maybe if
> these HDDs are SSDs.
>
> just my 2 cents
>
> Regards,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2015-08-03 23:29 GMT+02:00 Florian Oppermann <gluster-users at flopwelt.de>:
>
> > If you are starting the setup right now, you should start with the current version (3.7.X)
>
> Is 3.7 stable? I have 60+ potential users and don't want to risk too
> much. ;-)
>
> > Filesystem
>
> XFS partitions on all bricks
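>
> (For reference: the Gluster docs recommend formatting XFS bricks with
> 512 byte inodes so that Gluster's extended attributes fit inside the
> inode; the device name is a placeholder:
>
>     mkfs.xfs -i size=512 /dev/sdX1
> )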
>
> > network type (lan, VM...)
>
> Gigabit LAN
>
> > where is client (same lan?)
>
> Yep
>
> > MTU
>
> 1500
>
> > storage (raid, # of disks...)
>
> The bricks are all on separate servers. On each there is an XFS
> partition on a single HDD (together with other partitions for the
> system etc.). All in all there are currently seven machines involved.
>
> I just noticed that on all servers the
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log is full of
> messages like
>
> > [2015-08-03 21:24:59.879820] W [socket.c:620:__socket_rwv]
> > 0-management: readv on /var/run/a91fc43b47272ffaace2a6989e7b5e85.socket
> > failed (Invalid argument)
>
> I assume this to be part of the problem…
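>
> (Some basic health checks to rule out peer or volume problems would be
> the following; the volume name is a placeholder:
>
>     gluster peer status
>     gluster volume status testvol
> )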
>
> Regards :-)
> Florian
>
> On 03.08.2015 22:41, Mathieu Chateau wrote:
> > Hello,
> >
> > If you are starting the setup right now, you should start with the current version (3.7.X)
> >
> > We need more data/context, as you were able to feed in 150 GB before
> > running into issues.
> >
> > Info:
> > Filesystem
> > network type (lan, VM...)
> > where is client (same lan?)
> > MTU
> > storage (raid, # of disks...)
> >
> > Regards,
> > Mathieu CHATEAU
> > http://www.lotp.fr
> >
> > 2015-08-03 21:44 GMT+02:00 Florian Oppermann <gluster-users at flopwelt.de>:
> >
> > Dear Gluster users,
> >
> > after setting up a distributed-replicated volume (3×2 bricks) on Gluster
> > 3.6.4 on Ubuntu systems and populating it with some data (about 150 GB
> > in 20k files), I experience extreme delays when navigating through
> > directories or trying to ls their contents (by now the process actually
> > seems to hang completely until I kill the /usr/sbin/glusterfs process
> > on the mounting machine).
> >
> > Is there some common misconfiguration I should look for, or any
> > performance tuning option that I could try?
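> >
> > For example, would metadata-related options like the following help
> > with slow directory listings (the volume name is a placeholder, and
> > the option names may depend on the Gluster version)?
> >
> >     gluster volume set testvol performance.readdir-ahead on
> >     gluster volume set testvol performance.stat-prefetch on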
> >
> > I mount via automount with the fstype=glusterfs option (i.e. using
> > the native FUSE mount).
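> >
> > (The equivalent manual mount would be, with server, volume name and
> > mount point as placeholders:
> >
> >     mount -t glusterfs server1:/testvol /mnt/glusterfs
> > )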
> >
> > Any tips?
> >
> > Best regards,
> > Florian Oppermann
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
> >
>
>