[Gluster-devel] how best to set up for performance?

Amar S. Tumballi amar at zresearch.com
Sun Mar 16 07:12:52 UTC 2008


Hey,
 Just missed the 80GB file size part. Are you sure your disks are fast
enough to write/read at more than 200 MB/s for uncached files? Can you run
dd directly on the backend export and make sure you are getting enough raw
disk speed?
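
For example, something along these lines on each server would show the raw
backend speed while bypassing the page cache (just a sketch - /big is the
export directory from the specs below, and the direct flags skip the cache):

  dd if=/dev/zero of=/big/ddtest bs=1M count=80000 oflag=direct   # raw write
  dd if=/big/ddtest of=/dev/null bs=1M iflag=direct               # raw read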

Regards,
Amar

On Sat, Mar 15, 2008 at 7:16 PM, Niall Dalton <nialldalton at mac.com> wrote:

> Hi Amar,
>
> This is certainly an improvement - thank you for the suggestions.
>
>
> > Can you try with these spec files and let me know the results? Also,
> > my question is: if you have 10GigE, how will you get 1.5GB/s from a
> > single client?
>
>
> I have two distinct 10GigE interfaces in the client machine, one for
> each storage server. Traffic over one should, in theory, have zero
> effect on traffic on the other, though of course we could see some
> interference somewhere within the OS on the client machine. To your
> spec I needed to add definitions for readahead-jr1 and readahead-jr2,
> so I used the same settings as you had used for the readahead volume:
>
> volume readahead-jr1
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 2
>   subvolumes jr1
> end-volume
>
> volume readahead-jr2
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 2
>   subvolumes jr2
> end-volume
>
> This gives much more respectable performance:
>
> root at caneland:/etc/glusterfs# dd if=/dev/zero of=/mnt/stripe/big.file
> bs=1M count=80000
> 80000+0 records in
> 80000+0 records out
> 83886080000 bytes (84 GB) copied, 210.761 seconds, 398 MB/s
>
> root at caneland:/etc/glusterfs# dd if=/mnt/stripe/big.file of=/dev/null
> bs=1M
> 80000+0 records in
> 80000+0 records out
> 83886080000 bytes (84 GB) copied, 206.691 seconds, 406 MB/s
>
> There should still be plenty of headroom here - this is about 200 MB/s
> per server out of perhaps 700 MB/s. Pardon the repetition, but I include
> the full specs below for completeness.
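>
> (Put differently: if each backend really can sustain ~700 MB/s, a
> two-way stripe could in principle approach 2 x 700 = 1.4 GB/s aggregate,
> so the ~400 MB/s above is well under a third of that ceiling.)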
>
> niall
>
>
> server 1:
>
> volume posix
>   type storage/posix
>   option directory /big
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 4096MB
>   subvolumes posix
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   option auth.ip.brick.allow *
>   subvolumes brick
> end-volume
>
>
> server 2:
>
>
> volume posix
>   type storage/posix
>   option directory /big
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 4096MB
>   subvolumes posix
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   option auth.ip.brick.allow *
>   subvolumes brick
> end-volume
>
>
> client:
>
> volume jr1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.3.2
>   option remote-subvolume brick
> end-volume
>
> volume jr2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.2.2
>   option remote-subvolume brick
> end-volume
>
> volume readahead-jr1
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 2
>   subvolumes jr1
> end-volume
>
> volume readahead-jr2
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 2
>   subvolumes jr2
> end-volume
>
> volume stripe0
>   type cluster/stripe
>   option block-size *:1MB
>   subvolumes readahead-jr1 readahead-jr2
> end-volume
>
> volume iot
>   type performance/io-threads
>   subvolumes stripe0
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   subvolumes iot
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-size 1MB
>   option page-count 2
>   subvolumes writebehind
> end-volume
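>
> For anyone reproducing this, the specs get loaded in the usual way,
> roughly as follows (the .vol file names here are just placeholders for
> the specs above):
>
>   glusterfsd -f /etc/glusterfs/server.vol                # on each server
>   glusterfs -f /etc/glusterfs/client.vol /mnt/stripe     # on the client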
>
>
>


-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!


