[Gluster-devel] mainline-2.5 performance a known issue?

Dale Dude dale at oc3networks.com
Fri Jul 6 23:37:10 UTC 2007


Ignore the iocache volume in my client.vol. I tried that too, but the 
performance was so bad I didn't bother posting those results. The iocache 
section should actually be the iothreads volume below, which I didn't paste 
properly; it sits in the same spot in the chain as the iocache.

volume iothreads
   type performance/io-threads
   option thread-count 10
   subvolumes iocache
end-volume
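
In other words, with io-threads taking io-cache's place on top of readahead, 
the top of the client chain would presumably have looked like this (a sketch 
of the intended spec, not a verbatim copy of the file I ran):

volume iothreads
   type performance/io-threads
   option thread-count 10
   subvolumes readahead
end-volume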

Dale Dude wrote:
> Dell 2950, SAS disks, 4 GB RAM.
>
> /volume1 = local JFS mount
> /volumes = glusterfs mount over /volume1
> /mnt/test = local NFS mount over /volume1
>
> */volumes# tiotest -t 10*
> Tiotest results for 10 concurrent io threads:
> ,----------------------------------------------------------------------.
> | Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
> +-----------------------+----------+--------------+----------+---------+
> | Write         100 MBs |    3.7 s |  27.301 MB/s |  13.7 %  | 120.1 % |
> | Random Write   39 MBs |    4.9 s |   7.938 MB/s |   0.0 %  |  48.4 % |
> | Read          100 MBs |    0.8 s | 124.976 MB/s |   0.0 %  | 356.2 % |
> | Random Read    39 MBs |    3.0 s |  13.191 MB/s |  10.1 %  |  72.9 % |
> `----------------------------------------------------------------------'
> Tiotest latency results:
> ,-------------------------------------------------------------------------.
> | Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
> +--------------+-----------------+-----------------+----------+-----------+
> | Write        |        0.660 ms |       35.745 ms |  0.00000 |   0.00000 |
> | Random Write |        1.835 ms |       28.604 ms |  0.00000 |   0.00000 |
> | Read         |        0.276 ms |       73.803 ms |  0.00000 |   0.00000 |
> | Random Read  |        2.650 ms |       43.676 ms |  0.00000 |   0.00000 |
> |--------------+-----------------+-----------------+----------+-----------|
> | Total        |        0.966 ms |       73.803 ms |  0.00000 |   0.00000 |
> `--------------+-----------------+-----------------+----------+-----------'
>
> =======================
>
> */volume1# tiotest -t 10*
> Tiotest results for 10 concurrent io threads:
> ,----------------------------------------------------------------------.
> | Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
> +-----------------------+----------+--------------+----------+---------+
> | Write         100 MBs |    3.0 s |  33.092 MB/s |   3.3 %  | 287.9 % |
> | Random Write   39 MBs |    2.5 s |  15.801 MB/s |   4.0 %  |  67.6 % |
> | Read          100 MBs |    0.1 s | 803.884 MB/s |  32.2 %  | 353.7 % |
> | Random Read    39 MBs |    0.0 s | 795.781 MB/s |   0.0 %  | 326.0 % |
> `----------------------------------------------------------------------'
> Tiotest latency results:
> ,-------------------------------------------------------------------------.
> | Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
> +--------------+-----------------+-----------------+----------+-----------+
> | Write        |        0.064 ms |      151.144 ms |  0.00000 |   0.00000 |
> | Random Write |        0.008 ms |        0.311 ms |  0.00000 |   0.00000 |
> | Read         |        0.006 ms |        0.032 ms |  0.00000 |   0.00000 |
> | Random Read  |        0.006 ms |        0.053 ms |  0.00000 |   0.00000 |
> |--------------+-----------------+-----------------+----------+-----------|
> | Total        |        0.027 ms |      151.144 ms |  0.00000 |   0.00000 |
> `--------------+-----------------+-----------------+----------+-----------'
>
> ======================
>
> */mnt/test# tiotest -t 10*
> Tiotest results for 10 concurrent io threads:
> ,----------------------------------------------------------------------.
> | Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
> +-----------------------+----------+--------------+----------+---------+
> | Write         100 MBs |    0.8 s | 127.386 MB/s |   0.0 %  | 382.2 % |
> | Random Write   39 MBs |    0.8 s |  46.980 MB/s |   0.0 %  |  75.8 % |
> | Read          100 MBs |    0.3 s | 326.097 MB/s |  35.9 %  | 776.1 % |
> | Random Read    39 MBs |    0.1 s | 761.467 MB/s |   0.0 %  | 389.9 % |
> `----------------------------------------------------------------------'
> Tiotest latency results:
> ,-------------------------------------------------------------------------.
> | Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
> +--------------+-----------------+-----------------+----------+-----------+
> | Write        |        0.134 ms |      123.788 ms |  0.00000 |   0.00000 |
> | Random Write |        0.342 ms |       89.255 ms |  0.00000 |   0.00000 |
> | Read         |        0.092 ms |      155.992 ms |  0.00000 |   0.00000 |
> | Random Read  |        0.006 ms |        0.060 ms |  0.00000 |   0.00000 |
> |--------------+-----------------+-----------------+----------+-----------|
> | Total        |        0.130 ms |      155.992 ms |  0.00000 |   0.00000 |
> `--------------+-----------------+-----------------+----------+-----------'
>
> ==========================
>
> client.vol:
> volume server1
>         type protocol/client
>         option transport-type tcp/client     # for TCP/IP transport
>         option remote-host 127.0.0.1     # IP address of the remote brick
>         option remote-subvolume volumenamespace
> end-volume
>
> volume server1vol1
>         type protocol/client
>         option transport-type tcp/client     # for TCP/IP transport
>         option remote-host 127.0.0.1     # IP address of the remote brick
>         option remote-subvolume clusterfs1
> end-volume
>
> ###################
>
> volume bricks
>  type cluster/unify
>  option namespace server1
>  option readdir-force-success on  # ignore failed mounts
>  subvolumes server1vol1
>
>  option scheduler rr
>  option rr.limits.min-free-disk 5   # in percent
> end-volume
>
> volume writebehind   # write-behind improves write performance a lot
>  type performance/write-behind
>  option aggregate-size 131072 # in bytes
>  subvolumes bricks
> end-volume
>
>
> volume readahead
>  type performance/read-ahead
>  option page-size 65536     # unit in bytes
>  option page-count 16       # cache per file  = (page-count x page-size)
>  subvolumes writebehind
> end-volume
>
> volume iocache
>  type performance/io-cache
>  option page-size 128KB
>  option page-count 128
>  subvolumes readahead
> end-volume
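>
> For completeness, the glusterfs mount at /volumes is created from this 
> client spec with something along these lines (the spec-file path here is 
> illustrative, not necessarily the one I used):
>
>  glusterfs -f /etc/glusterfs/client.vol /volumes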
>
> ============================
> server.vol:
> volume volume1
>  type storage/posix
>  option directory /volume1
> end-volume
>
> volume clusterfs1
>   type performance/io-threads
>   option thread-count 10
>   subvolumes volume1
> end-volume
>
> #######
>
> volume volumenamespace
>  type storage/posix
>  option directory /volume.namespace
> end-volume
>
> ###
>
> volume clusterfs
>  type protocol/server
>  option transport-type tcp/server
>  subvolumes clusterfs1 volumenamespace
>  option auth.ip.clusterfs1.allow *
>  option auth.ip.volumenamespace.allow *
> end-volume
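>
> And the server side is started from the spec above with something like 
> (again, the path is illustrative):
>
>  glusterfsd -f /etc/glusterfs/server.vol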
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>




