[Gluster-devel] How to increase a throughput

Jake Maul jakemaul at gmail.com
Tue Oct 28 07:44:29 UTC 2008


The problem is that's a more or less impossible question to answer. 1:1,
1:4, 1:100... there's no way to just "know" the right client-to-server
ratio... if there were, it'd already be documented :).

Take a look at "iostats -dkx 30" and "iptraf" on the server. If the
util% is high, or the network bandwidth is near the max of your
interface, then there's your answer... beef up the server or add
another one. Disk usage% will normally fall if you add RAM, as Linux
will keep more things cached. You can even check the iowait% shown in
"top" to get an idea of disk utilization. If you have 8GB of RAM and
7.5GB is being used for cache, but you still have a high iowait%...
add more RAM. Faster drives or more servers would also work.
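
For reference, here's roughly what I'd run for that kind of check
(assuming the sysstat and iptraf packages are installed; adjust the
interval to taste):

    # extended per-device disk stats in KB, sampled every 30 seconds;
    # watch the %util and await columns
    iostat -dkx 30

    # live per-interface network bandwidth
    iptraf

    # CPU iowait% plus per-process CPU/memory usage
    top

    # how much RAM is free vs. being used for page cache
    free -m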

If the server's interface is maxed out, adding clients obviously won't
help, and neither will adding RAM to the server... either add a server,
give it a bigger pipe, or give io-cache more room on the clients.
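
That last one is just a matter of bumping cache-size in the io-cache
volume you already have on the clients... something like this (1024MB is
only an example figure, size it to whatever RAM your clients can spare):

    volume io-cache
      type performance/io-cache
      option cache-size 1024MB               # example; your config has 512MB
      option page-size 256KB
      option force-revalidate-timeout 7200
      subvolumes writebehind
    end-volume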

If the server's disk util% is low, its network bandwidth utilization is
low, and/or you have a lot of free RAM on the server (unlikely if it's
really serving a lot of content... Linux caches aggressively until it
runs out of RAM, then frees cache space when needed), then you probably
don't have a server limitation and probably won't benefit much from
adding another server. Check your clients instead for things inhibiting
their throughput... CPU usage, network usage, memory usage, etc.

You probably do want io-threads on the server side too, if you haven't
added it already... I only see your client-side config here. It sometimes
helps on the clients, but it works even better server-side.
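
Per Vikas's note below, io-threads goes directly above the storage/posix
translator on each server. A rough sketch of what that looks like (the
volume names, export path, thread count, and the wide-open auth rule are
all placeholders... adjust to your real server volfile):

    volume brick-posix
      type storage/posix
      option directory /data/export          # placeholder export path
    end-volume

    volume brick
      type performance/io-threads
      option thread-count 8                  # tune to your disks/CPUs
      subvolumes brick-posix
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.brick.allow *           # wide open; tighten for real use
      subvolumes brick
    end-volume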

Jake

On Mon, Oct 27, 2008 at 8:15 PM, Ben Mok <benmok at powerallnetworks.com> wrote:
> Hi Vikas,
>
> I have already loaded the performance translators. If I want to add nodes
> to improve performance, which would you suggest increasing: client nodes or
> server nodes?
> Thx!
> Ben
>
> For the client side:
>
> volume iothreads
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 128MB
>   subvolumes storage-unify
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-size 128kb       ### read-ahead page size
>   option page-count 64         ### memory cache size is page-count x page-size per file
>   subvolumes iothreads
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 131072 # in bytes
>   option flush-behind on
>   subvolumes readahead
> end-volume
>
> volume io-cache
>   type performance/io-cache
>   option cache-size 512MB                # default is 32MB
>   option page-size 256KB                 # default is 128KB
>   option force-revalidate-timeout 7200   # default is 1
>   subvolumes writebehind
> end-volume
>
>
> -----Original Message-----
> From: vikasgp at gmail.com [mailto:vikasgp at gmail.com] On Behalf Of Vikas Gorur
> Sent: Monday, October 27, 2008 7:34 PM
> To: Ben Mok
> Cc: gluster-devel at nongnu.org
> Subject: Re: [Gluster-devel] How to increase a throughput
>
> 2008/10/27 Ben Mok <benmok at powerallnetworks.com>:
>> Hi,
>>
>> I am using GlusterFS-1.3.12 and fuse-2.7.3glfs10. I have four servers and
>> four clients. What do you suggest to improve their throughput: increasing
>> the number of clients or the number of servers?
>
> You can try loading the following performance translators and see if they
> help:
>
> * io-threads: load this just above your storage/posix translators on
>  the servers
>
> * read-ahead: if your applications mainly make sequential reads, loading
>  this translator on the clients will increase throughput.
>
> * write-behind: if your applications are write-intensive, loading this
>  translator on the client side may help performance.
>
> Vikas Gorur
> --
> Engineer - Z Research
> http://gluster.org/
>




