[Gluster-users] Running Gluster client/server on single process

Michiel van Es mve at pcintelligence.nl
Wed May 19 19:11:33 UTC 2010


Hi,

Would my setup benefit from the next release too?
I am using CentOS 5.5 with 32-bit Gluster and the fuse patch on 2 servers 
with 1 GB of memory and 2 Xeon CPUs, but maildir performance is very slow.
Sending an email means watching Thunderbird say 'Copying message to Sent 
folder' for about 10 seconds.
The strange thing is that the email I send to myself is already in my 
inbox (read), but writing a plain-text file to the Sent folder takes ages.
I tried the default gen script settings and a few variants from various 
howto pages, but none of them really improved my write performance.
Btw, I am on 2 VPS systems with kernel 2.6.31, pre-compiled by the hosting 
party (with fuse compiled in).

I am looking forward to the small-file performance improvements (I think 
that is the issue with my setup).
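
For what it's worth, below is the kind of write-behind section I plan to 
experiment with for the maildir writes. The option names are copied from 
Roberto's volfile further down and from the write-behind docs; I am not 
sure flush-behind is still the right knob in 3.0.x, so treat it as a 
guess rather than a recommendation:

volume writebehind
  type performance/write-behind
  option window-size 1MB    # same value as in Roberto's volfile below
  option flush-behind on    # let close() return before the flush finishes,
                            # which should help with lots of small writes
  subvolumes distribute     # or whatever the top-level cluster volume is called
end-volume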

Regards,

Michiel

On 5/19/10 3:06 PM, Tejas N. Bhise wrote:
> Roberto,
>
> We recently made some code changes that we think will considerably help small-file performance -
>
> selective readdirp - http://patches.gluster.com/patch/3203/
> dht lookup revalidation optimization - http://patches.gluster.com/patch/3204/
> updated write-behind default values - http://patches.gluster.com/patch/3223/
>
> These are tentatively scheduled to go into 3.0.5.
> If it's possible for you, I would suggest you test them in a non-production environment
> and see if they help with the distribute config itself.
>
> Please do not use these in production; for that, wait for the release these patches go into.
>
> Do let me know if you have any questions about this.
>
> Regards,
> Tejas.
>
>
> ----- Original Message -----
> From: "Roberto Franchini"<ro.franchini at gmail.com>
> To: "gluster-users"<gluster-users at gluster.org>
> Sent: Wednesday, May 19, 2010 5:29:47 PM
> Subject: Re: [Gluster-users] Running Gluster client/server on single process
>
> On Sat, May 15, 2010 at 10:06 PM, Craig Carl <craig at gluster.com> wrote:
>> Robert -
>>        NUFA has been deprecated and doesn't apply to any recent version of
>> Gluster. What version are you running? ('glusterfs --version')
>
> We run 3.0.4 on Ubuntu 9.10 and 10.04 Server.
> Is there a way to mimic NUFA behaviour?
>
> We are using Gluster to store Lucene indexes. The indexes are created
> locally from millions of small files and then copied to the storage.
> I tried reading these small files from Gluster, but it was too slow.
> So maybe a NUFA-like approach, e.g. preferring the local disk for reads, could improve performance.
> Let me know :)
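>
> Something like the following on each client is roughly what I had in mind,
> with read-subvolume pointing at the brick that is local to that machine.
> This is just a sketch based on the replicate options I found, not tested --
> please correct me if read-subvolume does not do this in 3.0.x:
>
> volume replicate1
>   type cluster/replicate
>   option read-subvolume remote1   # remote1 would be the brick local to this client
>   subvolumes remote1 remote2
> end-volume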
>
> At the moment we use dht/replicate:
>
>
> #CLIENT
>
> volume remote1
>   type protocol/client
>   option transport-type tcp
>   option remote-host zeus
>   option remote-subvolume brick
> end-volume
>
> volume remote2
>   type protocol/client
>   option transport-type tcp
>   option remote-host hera
>   option remote-subvolume brick
> end-volume
>
> volume remote3
>   type protocol/client
>   option transport-type tcp
>   option remote-host apollo
>   option remote-subvolume brick
> end-volume
>
> volume remote4
>   type protocol/client
>   option transport-type tcp
>   option remote-host demetra
>   option remote-subvolume brick
> end-volume
>
> volume remote5
>   type protocol/client
>   option transport-type tcp
>   option remote-host ade
>   option remote-subvolume brick
> end-volume
>
> volume remote6
>   type protocol/client
>   option transport-type tcp
>   option remote-host athena
>   option remote-subvolume brick
> end-volume
>
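> # Three replica pairs: zeus+hera, apollo+demetra, ade+athena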
> volume replicate1
>   type cluster/replicate
>   subvolumes remote1 remote2
> end-volume
>
> volume replicate2
>   type cluster/replicate
>   subvolumes remote3 remote4
> end-volume
>
> volume replicate3
>   type cluster/replicate
>   subvolumes remote5 remote6
> end-volume
>
> volume distribute
>   type cluster/distribute
>   subvolumes replicate1 replicate2 replicate3
> end-volume
>
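> # Performance translators stacked on top of distribute: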
> volume writebehind
>   type performance/write-behind
>   option window-size 1MB
>   subvolumes distribute
> end-volume
>
> volume quickread
>   type performance/quick-read
>   option cache-timeout 1         # default 1 second
> #  option max-file-size 256KB        # default 64KB
>   subvolumes writebehind
> end-volume
>
> ### Add io-threads for parallel requests
> volume iothreads
>   type performance/io-threads
>   option thread-count 16 # default is 16
>   subvolumes quickread
> end-volume
>
>
> #SERVER
>
> volume posix
>   type storage/posix
>   option directory /data/export
> end-volume
>
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.brick.allow *
>   subvolumes brick
> end-volume


