[Gluster-users] Write Speed unusually slow when both bricks are online

Raghavendra Gowdappa rgowdapp at redhat.com
Thu Apr 11 01:15:18 UTC 2019


I would need the following data:

* client and brick volume profiles (a capture sketch follows this list) -
https://glusterdocs.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/
* the command line of the exact test you were running
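
Roughly, the brick-side profile can be captured like this (volume name
gv0 taken from your volume info below; run the slow write test between
start and info):

    gluster volume profile gv0 start
    # ... run the write test on a client ...
    gluster volume profile gv0 info > /tmp/brick-profile.txt
    gluster volume profile gv0 stop

For the client side, the page above describes dumping io-stats from the
mount point, along these lines (see the linked page for where the dump
file lands):

    setfattr -n trusted.io-stats-dump -v /tmp/client-profile.txt /store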

regards,

On Wed, Apr 10, 2019 at 9:02 PM Jeff Forbes <jeff.forbes at mail.nacon.com>
wrote:

> I have two CentOS-6 servers running version 3.12.14 of gluster-server.
> Each server has one brick, and the volume is configured to replicate
> between the two bricks.
>
> I also have two CentOS-6 client servers running version 3.12.2-18 of
> glusterfs.
>
> These servers use a separate VLAN. Each server has two bonded 1 Gbps
> NICs carrying the gluster traffic. File transfer speeds between these
> servers using rsync approach 100 MB/s.
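>
> For reference, the bonding mode in use can be checked with something
> like the following, assuming the usual bond0 interface name:
>
>     cat /proc/net/bonding/bond0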
>
> The client servers mount the gluster volume using this fstab entry:
> 192.168.40.30:gv0  /store  glusterfs  defaults,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING  1 2
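>
> The equivalent one-off mount, for testing the same options outside of
> fstab, would be roughly:
>
>     mount -t glusterfs \
>         -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING \
>         192.168.40.30:gv0 /store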
>
> Read speeds from the servers to the clients are similar to the rsync
> speed. The problem is that writes from the clients to the mounted
> gluster volume run at less than 8 MB/s, fluctuating between under 500
> kB/s and 8 MB/s as measured by the pv command. With rsync, the write
> speed fluctuates between 2 and 5 MB/s.
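>
> A test of this general shape (sizes and paths illustrative, not the
> exact invocation) shows the behaviour:
>
>     # write through the gluster mount; pv reports throughput
>     dd if=/dev/zero bs=1M count=2048 | pv > /store/pv-test.bin
>     # read back through the mount for comparison
>     pv /store/pv-test.bin > /dev/null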
>
> When the bonded NICs on one of the gluster servers are shut down, the
> write speed to the remaining online brick becomes similar to the read
> speed.
>
> I can only assume that there is something wrong in my configuration,
> since a greater than 10-fold decrease in write speed when the bricks
> are replicating makes for an unusable system.
>
>
> Does anyone have any ideas what the problem may be?
>
>
> Server volume configuration:
> > sudo gluster volume info
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: d96bbb99-f264-4655-95ff-f9f05ca9ff55
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.40.20:/export/scsi/brick
> Brick2: 192.168.40.30:/export/scsi/brick
> Options Reconfigured:
> performance.cache-size: 1GB
> performance.readdir-ahead: on
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: on
> performance.cache-samba-metadata: on
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 250000
> performance.cache-refresh-timeout: 60
> performance.read-ahead: disable
> performance.parallel-readdir: on
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 64
> performance.client-io-threads: on
> performance.quick-read: on
> performance.flush-behind: on
> performance.write-behind: on
> nfs.disable: on
> client.event-threads: 3
> server.event-threads: 3
> server.allow-insecure: on
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>