[Gluster-users] GlusterFS Vs NFS Read Performance confusing

mohan L l.mohanphy at gmail.com
Fri Jan 2 07:10:09 UTC 2009


We're conducting performance benchmark runs to evaluate Linux performance
as an NFS file server. It is observed that an unusually high percentage of
benchmark time is spent in the "read" operation: a sampled workload
consisting of 18% reads consumes 63% of total benchmark time. Has this
problem been analyzed before (or, even better, is there a patch)? We're on
a 2.4.19 kernel, NFS v3 over UDP, with ext3 as the local file system.

Thanks in advance.


Dear All,

We are currently using NFS to meet our data-sharing requirements. We are now
facing performance and scalability problems, so this setup no longer meets the
needs of our network. To find a solid replacement for NFS, I have evaluated
two file systems, GlusterFS and Red Hat GFS, and concluded that GlusterFS
should give us the performance and scalability we need; it has all the
features we are looking for. For testing purposes I am benchmarking NFS
against GlusterFS. My benchmark results show that GlusterFS performs better
overall, but I am getting some unacceptable read numbers. I do not understand
how exactly the read operation behaves in NFS versus GlusterFS, and I may
well be doing something wrong. Below I show my benchmark setup and the NFS
and GlusterFS read results. If anyone can go through this and give me some
guidance, it would make my benchmarking much more effective.

This is my server and client hardware and software:

HARDWARE CONFIG:

Processor core speed  : Intel(R) Celeron(R) CPU 1.70GHz

Number of cores  : Single Core (not dual-core)

RAM size  : 384MB(128MB+256MB)

RAM type  : DDR

RAM Speed  : 266 MHz (3.8 ns)

Swap  : 1027MB

Storage controller  : ATA device

Disk model/size  : SAMSUNG SV4012H / 40 GB, 2 MB cache

Storage speed  : 52.4 MB/sec

Spindle Speed  : 5400 RPM (revolutions per minute)

NIC Type  : VIA Rhine III chipset IRQ 18

NIC Speed  : 100 Mbps/Full-Duplex Card

SOFTWARE:

Operating System : Fedora Core 9 GNU/Linux

Linux version  : 2.6.9-42

Local FS  : Ext3

NFS (nfs-utils) version  : 1.1.2

GlusterFS version: glusterfs 1.3.8 built on Feb 3 2008

Iozone  : iozone-3-5.fc9.i386 (File System Benchmark Tool)

ttcp  : ttcp-1.12-18.fc9.i386 (raw throughput measurement tool)
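
For reference, a minimal ttcp run to get the raw TCP throughput number
between the two machines would look roughly like this (the address is a
placeholder):

# on the receiving machine
ttcp -r -s

# on the sending machine, pointing at the receiver
ttcp -t -s 192.xxx.x.xxx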

These are the server and client volume files I am using for the benchmarking:

#GlusterFS Server Volume Specification

volume brick
  type storage/posix              # POSIX FS translator
  option directory /bench         # /bench contains 25,000 files of 10 KB to 15 KB each
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 8MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes iot
  option auth.ip.brick.allow * # Allow access to "brick" volume
end-volume
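
For completeness: assuming the server spec above is saved as
/etc/glusterfs/glusterfs-server.vol (the path is only an example), the
server side is started roughly like this:

# start the GlusterFS server daemon with the spec file above
glusterfsd -f /etc/glusterfs/glusterfs-server.vol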



# GlusterFS Client Volume Specification

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.xxx.x.xxx
  option remote-subvolume brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB    # 256KB is the default
  option page-count 4       # cache per file = (page-count x page-size); 2 is the default
  subvolumes client
end-volume

volume iocache
  type performance/io-cache
  # option page-size 128KB  # 128KB is the default page-size
  option cache-size 256MB   # 32MB is the default cache-size
  option page-count 4
  subvolumes readahead
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option flush-behind on
  subvolumes iocache
end-volume
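
As far as I understand, the topmost volume in the client spec (writeback
here) is what gets mounted; assuming the spec is saved as
/etc/glusterfs/glusterfs-client.vol (path and mount point are only
examples), the client side is mounted roughly like this:

# mount the GlusterFS client using the spec file above
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs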


These results confuse me. I have no idea how to trace this and get a good,
comparable read-performance result. I think I am misunderstanding the buffer
cache behaviour.

From the attached NFS read results, my understanding is this: I have 384 MB
of RAM and I am benchmarking file sizes from 128 KB to 1 GB. Up to a file
size of 256 MB I am seeing buffer cache performance, and at 512 MB and 1 GB
I am getting roughly link speed. But in the GlusterFS case I am not able to
understand what is happening.
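
One way to take the buffer cache out of the picture between runs (a rough
sketch, assuming the kernel exposes /proc/sys/vm/drop_caches) would be to
flush the caches before every read test:

# write out dirty data, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
# (optionally unmount and remount the NFS / GlusterFS mount point so that
#  client-side caches also start cold)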

Can anyone please help me?

NFS:
iozone -Raceb ./perffinal.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1
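
For reference, the options used here mean roughly the following:

# -R              Excel-compatible report output
# -a              auto mode over the size ranges below
# -c              include close() in the timing
# -e              include flush (fsync/fflush) in the timing
# -b <file>       write the spreadsheet output to <file>
# -y 4K  -q 128K  minimum / maximum record size
# -n 128K -g 1G   minimum / maximum file size
# -i 0 -i 1       run test 0 (write/rewrite) and test 1 (read/re-read)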


Reader report (file size in KB down, record size in KB across; values in KB/s):

                 4        8       16       32       64      128
  128       744701   727625   935039   633768   499971   391433
  256       920892  1085148  1057519   931149   551834   380335
  512       937558  1075517  1100810   904515   558917   368605
  1024      974395  1072149  1094105   969724   555319   379390
  2048     1026059  1125318  1137073  1005356   568252   375232
  4096     1021220  1144780  1169589  1030467   578615   376367
  8192      965366  1153315  1071693  1072681   607040   371771
  16384    1008989  1133837  1163806  1046171   600500   376056
  32768    1022692  1165701  1175739  1065870   630626   363563
  65536    1005490  1152909  1168181  1048258   631148   374343
  131072   1011405  1161491  1176534  1048509   637910   375741
  262144   1011217  1130486  1118877  1075740   636433   375511
  524288      9563     9562     9568     9551     9525     9562
  1048576     9499     9520     9513     9535     9493     9469

GlusterFS:
iozone -Raceb /root/glusterfs/perfgfs2.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1
Reader report (file size in KB down, record size in KB across; values in KB/s):

                 4        8       16       32       64      128
  128        48834    50395    49785    48593    48450    47959
  256        15276    15209    15210    15100    14998    14973
  512        12343    12333    12340    12291    12202    12213
  1024       11330    11334    11327    11303    11276    11283
  2048       10875    10881    10877    10873    10857    10865
  4096       10671    10670     9706    10673     9685    10640
  8192       10572    10060    10571    10573    10555    10064
  16384      10522    10523    10523    10522    10522    10263
  32768      10494    10497    10495    10493    10497    10497
  65536      10484    10483    10419    10483    10485    10485
  131072     10419    10475    10477    10445    10445    10478
  262144     10323    10241    10312    10226    10320    10237
  524288     10074     9966     9707     8567     8213     9046
  1048576     7440     7973     5737     7101     7678     5743

Any idea why the values in the NFS test are so much higher? Something is
clearly different, but I am not able to understand what.





Thanks for your time
Mohan