[Gluster-users] Arbiter performance issue
Ravishankar N
ravishankar at redhat.com
Tue Apr 5 11:01:52 UTC 2016
On 03/30/2016 06:36 AM, Ravishankar N wrote:
> On 03/30/2016 01:03 AM, Russell Purinton wrote:
>> Hi all, sorry for 2 threads today, but I felt like this deserved a
>> separate thread…
>>
>> I was trying to replace my replica 2 volumes with replica 3 arbiter 1
>> volumes. The new volumes, though, are 10x slower on direct writes than
>> their replica 2 counterparts. I'm wondering if this is to be expected,
>> or if I might have done something wrong? I confirmed that no data is
>> being written to the arbiter bricks, just metadata.
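For reference, an arbiter volume of this shape is created by appending an
arbiter count to the replica count at create time; a minimal sketch for a
single replica set, with placeholder host and brick names (not the ones from
this thread):

    # Every third brick becomes the metadata-only arbiter for its replica set.
    gluster volume create testvol replica 3 arbiter 1 \
        host1:/bricks/testvol host2:/bricks/testvol host3:/bricks/testvol
    gluster volume start testvol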
>>
>>
>
> It does look like a bug, Russell. Another user has reported the same
> behavior. I'll take a look and update.
> Thanks,
> Ravi
>
Hi,
I've raised https://bugzilla.redhat.com/show_bug.cgi?id=1324004 and sent
a fix at http://review.gluster.org/#/c/13906/.
Once it is accepted into master, I'll backport it to the 3.7 branch. If
everything goes well, it should make it into the 3.7.11 release. Feel free
to test the patch if you like.
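For anyone who wants to test the fix before it is merged, Gerrit changes can
usually be fetched straight into a glusterfs checkout; a sketch, assuming
anonymous http access to the repo and guessing patchset 1 (pick the latest
patchset shown on the review page):

    git clone http://review.gluster.org/glusterfs
    cd glusterfs
    # ref format: refs/changes/<last 2 digits of change>/<change number>/<patchset>
    git fetch http://review.gluster.org/glusterfs refs/changes/06/13906/1
    git checkout -b arbiter-fix FETCH_HEAD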
Thanks,
Ravi
>>
>> Here are the tests I ran. I ran the dd tests multiple times with
>> different file names, and they all showed the same speeds.
>>
>> [root@fs134 wtg002]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
>> 100+0 records in
>> 100+0 records out
>> 104857600 bytes (105 MB) copied, 9.49974 s, 11.0 MB/s
>> [root@fs134 wtg002]# cd ..
>> [root@fs134 home]# cd wtg001
>> [root@fs134 wtg001]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
>> 100+0 records in
>> 100+0 records out
>> 104857600 bytes (105 MB) copied, 0.888929 s, 118 MB/s
>> [root@fs134 wtg001]# gluster volume info wtg001
>>
>> Volume Name: wtg001
>> Type: Distributed-Replicate
>> Volume ID: 53179cfe-9896-4c94-9f1d-01dd474e027e
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: xs141:/brick1/wtg001p0r0
>> Brick2: xs138:/brick1/wtg001p0r1
>> Brick3: xs139:/brick1/wtg001p1r0
>> Brick4: xs140:/brick1/wtg001p1r1
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> [root@fs134 wtg001]# gluster volume info wtg002
>>
>> Volume Name: wtg002
>> Type: Distributed-Replicate
>> Volume ID: 410b67ad-bc1e-473b-b98f-ad431d7c9831
>> Status: Started
>> Number of Bricks: 2 x 3 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: xs141:/brick1/wtg001d2r0
>> Brick2: xs138:/brick1/wtg001d2r1
>> Brick3: xs139:/brick1/wtg001d2ra
>> Brick4: xs139:/brick1/wtg001d3r0
>> Brick5: xs140:/brick1/wtg001d3r1
>> Brick6: xs141:/brick1/wtg001d3ra
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
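A quick way to double-check that the arbiter bricks really hold only metadata
is to stat the test file directly on an arbiter brick; a sketch, run on
whichever arbiter node DHT placed the file on (xs139 or xs141, per the brick
list above):

    # The arbiter copy should report 0 bytes but still carry the full xattrs.
    stat --format='%n: %s bytes' /brick1/wtg001d2ra/test
    getfattr -d -m . -e hex /brick1/wtg001d2ra/test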
>> [root@fs134 wtg001]# mount | grep wtg
>> 0:/wtg001 on /home/wtg001 type fuse.glusterfs
>> (rw,default_permissions,allow_other,max_read=131072)
>> 0:/wtg002 on /home/wtg002 type fuse.glusterfs
>> (rw,default_permissions,allow_other,max_read=131072)
>> [root@fs134 wtg001]#
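To narrow down where the extra write latency goes, the built-in profiler can
break the workload down per FOP; a sketch against the slow volume:

    gluster volume profile wtg002 start
    # re-run the dd test on /home/wtg002, then:
    gluster volume profile wtg002 info
    gluster volume profile wtg002 stop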