[Gluster-users] Gluster samba vfs read performance slow
kane
stef_9k at 163.com
Wed Sep 18 05:46:09 UTC 2013
I have already set "kernel oplocks = no" in smb.conf; below are my original smb.conf global settings:
[global]
workgroup = MYGROUP
server string = DCS Samba Server
log file = /var/log/samba/log.vfs
max log size = 500000
aio read size = 262144
aio write size = 262144
aio write behind = true
security = user
passdb backend = tdbsam
load printers = yes
cups options = raw
read raw = yes
write raw = yes
max xmit = 262144
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
# max protocol = SMB2
kernel oplocks = no
stat cache = no
thank you
-Kane
On Sep 18, 2013, at 1:38 PM, Anand Avati <avati at redhat.com> wrote:
> On 9/17/13 10:34 PM, kane wrote:
>> Hi Anand,
>>
>> I use 2 gluster servers; this is my volume info:
>> Volume Name: soul
>> Type: Distribute
>> Volume ID: 58f049d0-a38a-4ebe-94c0-086d492bdfa6
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.101.133:/dcsdata/d0
>> Brick2: 192.168.101.134:/dcsdata/d0
>>
>> Each brick uses a RAID 5 logical disk built from 8 * 2 TB SATA HDDs.
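>>
>> (For reference, the volume details above are the output of "gluster volume info soul"; brick status can be checked with "gluster volume status soul". The exact output format may differ between GlusterFS versions.)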
>>
>> smb.conf:
>> [gvol]
>> comment = For samba export of volume test
>> vfs objects = glusterfs
>> glusterfs:volfile_server = localhost
>> glusterfs:volume = soul
>> path = /
>> read only = no
>> guest ok = yes
>>
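>> (For completeness, the client-side CIFS mount used for the iozone run further below was along these lines; the server address and options are assumptions, not copied from the test machine:)
>>
>> mount -t cifs //192.168.101.133/gvol /mnt/vfs -o guest
>>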
>> This is my testparm result:
>> [global]
>> workgroup = MYGROUP
>> server string = DCS Samba Server
>> log file = /var/log/samba/log.vfs
>> max log size = 500000
>> max xmit = 262144
>> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144
>> SO_SNDBUF=262144
>> stat cache = No
>> kernel oplocks = No
>> idmap config * : backend = tdb
>> aio read size = 262144
>> aio write size = 262144
>> aio write behind = true
>> cups options = raw
>>
>> On the client I mounted the SMB share with CIFS at /mnt/vfs,
>> then ran iozone in the CIFS mount dir "/mnt/vfs":
>> $ ./iozone -s 10G -r 128k -i0 -i1 -t 4
>> File size set to 10485760 KB
>> Record Size 128 KB
>> Command line used: ./iozone -s 10G -r 128k -i0 -i1 -t 4
>> Output is in Kbytes/sec
>> Time Resolution = 0.000001 seconds.
>> Processor cache size set to 1024 Kbytes.
>> Processor cache line size set to 32 bytes.
>> File stride size set to 17 * record size.
>> Throughput test with 4 processes
>> Each process writes a 10485760 Kbyte file in 128 Kbyte records
>>
>> Children see throughput for 4 initial writers = 534315.84 KB/sec
>> Parent sees throughput for 4 initial writers = 519428.83 KB/sec
>> Min throughput per process = 133154.69 KB/sec
>> Max throughput per process = 134341.05 KB/sec
>> Avg throughput per process = 133578.96 KB/sec
>> Min xfer = 10391296.00 KB
>>
>> Children see throughput for 4 rewriters = 536634.88 KB/sec
>> Parent sees throughput for 4 rewriters = 522618.54 KB/sec
>> Min throughput per process = 133408.80 KB/sec
>> Max throughput per process = 134721.36 KB/sec
>> Avg throughput per process = 134158.72 KB/sec
>> Min xfer = 10384384.00 KB
>>
>> Children see throughput for 4 readers = 77403.54 KB/sec
>> Parent sees throughput for 4 readers = 77402.86 KB/sec
>> Min throughput per process = 19349.42 KB/sec
>> Max throughput per process = 19353.42 KB/sec
>> Avg throughput per process = 19350.88 KB/sec
>> Min xfer = 10483712.00 KB
>>
>> Children see throughput for 4 re-readers = 77424.40 KB/sec
>> Parent sees throughput for 4 re-readers = 77423.89 KB/sec
>> Min throughput per process = 19354.75 KB/sec
>> Max throughput per process = 19358.50 KB/sec
>> Avg throughput per process = 19356.10 KB/sec
>> Min xfer = 10483840.00 KB
>>
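>> (For comparison, the FUSE mount referenced below was created with something like the following; the mount point is an assumption, the volume name "soul" is from the volume info above:)
>>
>> mount -t glusterfs 192.168.101.133:/soul /mnt/fuse
>>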
>> Then I ran the same command in the directory mounted with the GlusterFS FUSE client:
>> File size set to 10485760 KB
>> Record Size 128 KB
>> Command line used: ./iozone -s 10G -r 128k -i0 -i1 -t 4
>> Output is in Kbytes/sec
>> Time Resolution = 0.000001 seconds.
>> Processor cache size set to 1024 Kbytes.
>> Processor cache line size set to 32 bytes.
>> File stride size set to 17 * record size.
>> Throughput test with 4 processes
>> Each process writes a 10485760 Kbyte file in 128 Kbyte records
>>
>> Children see throughput for 4 initial writers = 887534.72 KB/sec
>> Parent sees throughput for 4 initial writers = 848830.39 KB/sec
>> Min throughput per process = 220140.91 KB/sec
>> Max throughput per process = 223690.45 KB/sec
>> Avg throughput per process = 221883.68 KB/sec
>> Min xfer = 10319360.00 KB
>>
>> Children see throughput for 4 rewriters = 892774.92 KB/sec
>> Parent sees throughput for 4 rewriters = 871186.83 KB/sec
>> Min throughput per process = 222326.44 KB/sec
>> Max throughput per process = 223970.17 KB/sec
>> Avg throughput per process = 223193.73 KB/sec
>> Min xfer = 10431360.00 KB
>>
>> Children see throughput for 4 readers = 605889.12 KB/sec
>> Parent sees throughput for 4 readers = 601767.96 KB/sec
>> Min throughput per process = 143133.14 KB/sec
>> Max throughput per process = 159550.88 KB/sec
>> Avg throughput per process = 151472.28 KB/sec
>> Min xfer = 9406848.00 KB
>>
>> It shows much higher performance.
>>
>> Is there anything I did wrong?
>>
>>
>> thank you
>> -Kane
>>
>> On Sep 18, 2013, at 1:19 PM, Anand Avati <avati at gluster.org> wrote:
>>
>>> How are you testing this? What tool are you using?
>>>
>>> Avati
>>>
>>>
>>> On Tue, Sep 17, 2013 at 9:02 PM, kane <stef_9k at 163.com> wrote:
>>>
>>> Hi Vijay
>>>
>>> I used the code in
>>> https://github.com/gluster/glusterfs.git with the latest commit:
>>> commit de2a8d303311bd600cb93a775bc79a0edea1ee1a
>>> Author: Anand Avati <avati at redhat.com>
>>> Date: Tue Sep 17 16:45:03 2013 -0700
>>>
>>> Revert "cluster/distribute: Rebalance should also verify free
>>> inodes"
>>>
>>> This reverts commit 215fea41a96479312a5ab8783c13b30ab9fe00fa
>>>
>>> Realized soon after merging, ….
>>>
>>> which includes the patch you mentioned last time to improve read performance,
>>> written by Anand.
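>>>
>>> (Roughly, that tree was built with the standard GlusterFS build steps; the exact configure flags are an assumption about my invocation:)
>>>
>>> git clone https://github.com/gluster/glusterfs.git
>>> cd glusterfs
>>> git checkout de2a8d303311bd600cb93a775bc79a0edea1ee1a
>>> ./autogen.sh && ./configure && make && make install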
>>>
>>> But the read performance was still slow:
>>> write: 500MB/s
>>> read: 77MB/s
>>>
>>> while via FUSE:
>>> write 800MB/s
>>> read 600MB/s
>>>
>>> Any advice?
>>>
>>>
>>> Thank you.
>>> -Kane
>>>
>>> On Sep 13, 2013, at 10:37 PM, kane <stef_9k at 163.com> wrote:
>>>
>>>> Hi Vijay,
>>>>
>>>> Thank you for posting this message, I will try it soon.
>>>>
>>>> -kane
>>>>
>>>>
>>>>
>>>> On Sep 13, 2013, at 9:21 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>>>>
>>>>> On 09/13/2013 06:10 PM, kane wrote:
>>>>>> Hi
>>>>>>
>>>>>> We use the Gluster Samba VFS to test I/O, but the read performance via
>>>>>> the VFS is half of the write performance,
>>>>>> while via FUSE the read and write performance is almost the same.
>>>>>>
>>>>>> this is our smb.conf:
>>>>>> [global]
>>>>>> workgroup = MYGROUP
>>>>>> server string = DCS Samba Server
>>>>>> log file = /var/log/samba/log.vfs
>>>>>> max log size = 500000
>>>>>> # use sendfile = true
>>>>>> aio read size = 262144
>>>>>> aio write size = 262144
>>>>>> aio write behind = true
>>>>>> min receivefile size = 262144
>>>>>> write cache size = 268435456
>>>>>> security = user
>>>>>> passdb backend = tdbsam
>>>>>> load printers = yes
>>>>>> cups options = raw
>>>>>> read raw = yes
>>>>>> write raw = yes
>>>>>> max xmit = 262144
>>>>>> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
>>>>>> kernel oplocks = no
>>>>>> stat cache = no
>>>>>>
>>>>>> Any advice would be helpful.
>>>>>>
>>>>>
>>>>> This patch has shown improvement in read performance with libgfapi:
>>>>>
>>>>> http://review.gluster.org/#/c/5897/
>>>>>
>>>>> Would it be possible for you to try this patch and check if it
>>>>> improves performance in your case?
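>>>>>
>>>>> (One way to pull the change into a local tree, assuming the usual Gerrit fetch URL and ref layout and that patchset 1 is the latest; please check the review page for the exact ref:)
>>>>>
>>>>> git fetch https://review.gluster.org/glusterfs refs/changes/97/5897/1
>>>>> git cherry-pick FETCH_HEAD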
>>>>>
>>>>> -Vijay
>>>>>
>>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>
>
> Please add 'kernel oplocks = no' in the [gvol] section and try again.
>
> Avati
>