[Gluster-users] [ovirt-users] ovirt glusterfs performance
Bill James
bill.james at j2.com
Thu Feb 11 18:27:44 UTC 2016
Thank you for the reply.
We set up gluster using the hostnames associated with NIC 2's IP.
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
That's NIC 2's IP.
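As a quick sanity check that the brick names really point at NIC 2 (a minimal sketch, assuming eno2 is the gluster NIC on each node, as below):

# address bound to the gluster NIC
ip -4 addr show eno2
# what each brick hostname resolves to -- should match the eno2 address
for h in ovirt1-ks.test.j2noc.com ovirt2-ks.test.j2noc.com ovirt3-ks.test.j2noc.com; do
    getent hosts $h
done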
Using 'iftop -i eno2 -L 5 -t' :
dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s
Peak rate (sent/received/total):       281Mb    5.36Mb    282Mb
Cumulative (sent/received/total):      1.96GB   14.6MB    1.97GB
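For more detail than iftop gives, gluster's built-in profiling can show per-brick throughput and FOP latency while the test runs (a sketch, using the gv1 volume name from below; profiling adds a little overhead, so stop it afterwards):

gluster volume profile gv1 start
# re-run the dd test in the VM, then:
gluster volume profile gv1 info
gluster volume profile gv1 stop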
gluster volume info gv1:
Options Reconfigured:
performance.write-behind-window-size: 4MB
performance.readdir-ahead: on
performance.cache-size: 1GB
performance.write-behind: off
Setting performance.write-behind to off didn't help.
Neither did any other changes I've tried.
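To rule the tuning out completely, the changed options can be put back to their defaults (a sketch, using the option names listed above):

gluster volume reset gv1 performance.cache-size
gluster volume reset gv1 performance.write-behind-window-size
gluster volume reset gv1 performance.write-behind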
There is no other traffic on this VM right now except my test.
On 02/10/2016 11:55 PM, Nir Soffer wrote:
> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>> +gluster-users
>>
>> Does disabling 'performance.write-behind' give a better throughput?
>>
>>
>>
>> On 02/10/2016 11:06 PM, Bill James wrote:
>>> I'm setting up an ovirt cluster using glusterfs and noticing less-than-stellar
>>> performance.
>>> Maybe my setup could use some adjustments?
>>>
>>> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
>>> Each node has 8 spindles configured as one array, which is split using LVM
>>> into one logical volume for the system and one for gluster.
>>> They each have 4 NICs,
>>> NIC1 = ovirtmgmt
>>> NIC2 = gluster (1GbE)
> How do you ensure that gluster traffic is using this NIC?
>
>>> NIC3 = VM traffic
> How do you ensure that VM traffic is using this NIC?
>
>>> I tried with default glusterfs settings
> And did you find any difference?
>
>>> and also with:
>>> performance.cache-size: 1GB
>>> performance.readdir-ahead: on
>>> performance.write-behind-window-size: 4MB
>>>
>>> [root at ovirt3 test scripts]# gluster volume info gv1
>>>
>>> Volume Name: gv1
>>> Type: Replicate
>>> Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>> Options Reconfigured:
>>> performance.cache-size: 1GB
>>> performance.readdir-ahead: on
>>> performance.write-behind-window-size: 4MB
>>>
>>>
>>> Using simple dd test on VM in ovirt:
>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
> block size of 1G?!
>
> Try 1M (our default for storage operations)
>
>>> 1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
>>>
>>> Another VM not in ovirt using nfs:
>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>> 1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
>>>
>>>
>>> Is that expected or is there a better way to set it up to get better
>>> performance?
> Adding Niels for advice.
>
>>> This email, its contents and ....
> Please avoid this; this is a public mailing list, and everything you write
> here is public.
>
> Nir
I'll have to look into how to remove this sig for this mailing list....