[Gluster-users] Running Gluster client/server on single process

Tejas N. Bhise tejas at gluster.com
Sun Jun 20 17:59:13 UTC 2010


Hi Bryan,

3.0.5 should be out soon. If you want to do some testing before it's officially out, you can try the latest release candidate. You don't need to patch at this stage. Let me know if you need help getting the release candidates and using them.
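
If you want to go that route, it's the usual tarball build; the URL and version string below are just placeholders for whatever the current candidate is actually called:

# Fetch and unpack the release candidate tarball
# (URL and file name are placeholders -- substitute the real RC location)
wget http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-3.0.5rc1.tar.gz
tar xzf glusterfs-3.0.5rc1.tar.gz
cd glusterfs-3.0.5rc1

# Standard autotools build and install (goes under /usr/local by default)
./configure
make
sudo make install

# Confirm which version the binaries now report
glusterfs --version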

Regards,
Tejas.

----- Original Message -----
From: "Bryan McGuire" <bmcguire at newnet66.org>
To: "Tejas N. Bhise" <tejas at gluster.com>
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Sunday, June 20, 2010 7:46:55 PM
Subject: Re: [Gluster-users] Running Gluster client/server on single process

Tejas,

Any idea when 3.0.5 will be released? I am very eager to get these
patches into production.

On another note, I am very new to Gluster, let alone Linux. Could you,
or someone else, give me some guidance (a how-to) on applying the
patches? I would like to test them for now.

Bryan McGuire


On May 19, 2010, at 8:06 AM, Tejas N. Bhise wrote:

> Roberto,
>
> We recently made some code changes that we think will considerably
> help small-file performance:
>
> selective readdirp - http://patches.gluster.com/patch/3203/
> dht lookup revalidation optimization - http://patches.gluster.com/patch/3204/
> updated write-behind default values - http://patches.gluster.com/patch/3223/
>
> These are tentatively scheduled to go into 3.0.5.
> If it's possible for you, I would suggest you test them in a
> non-production environment and see if they help with the distribute
> config itself.
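>
> Roughly, trying one of them against a 3.0.4 source tree looks like this
> (the patch file name below is arbitrary, and -p1 assumes a git-style
> diff -- adjust if the paths in the patch do not match):
>
> # Save the raw patch from the page linked above, e.g. as selective-readdirp.patch
> cd glusterfs-3.0.4
> patch -p1 < ../selective-readdirp.patch
>
> # Rebuild and reinstall as usual
> ./configure
> make
> sudo make install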
>
> Please do not use them in production; for that, wait for the release
> these patches go into.
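>
> On the NUFA side, one way to approximate "prefer the local disk for
> reads" without NUFA is the replicate translator's read-subvolume
> option, which makes a client favour one subvolume for reads. A minimal
> sketch against your client volfile, assuming remote1 (zeus) is the
> brick local to that particular client and that your version honours
> the option:
>
> volume replicate1
> type cluster/replicate
> # prefer this subvolume for reads on this client (remote1 assumed local)
> option read-subvolume remote1
> subvolumes remote1 remote2
> end-volume
>
> Note this only affects files that hash to that replica pair; files on
> the other pairs are still read over the network.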
>
> Do let me know if you have any questions about this.
>
> Regards,
> Tejas.
>
>
> ----- Original Message -----
> From: "Roberto Franchini" <ro.franchini at gmail.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Wednesday, May 19, 2010 5:29:47 PM
> Subject: Re: [Gluster-users] Running Gluster client/server on single  
> process
>
> On Sat, May 15, 2010 at 10:06 PM, Craig Carl <craig at gluster.com>  
> wrote:
>> Robert -
>>       NUFA has been deprecated and doesn't apply to any recent  
>> version of
>> Gluster. What version are you running? ('glusterfs --version')
>
> We run 3.0.4 on Ubuntu 9.10 and 10.04 Server.
> Is there a way to mimic the NUFA behaviour?
>
> We are using Gluster to store Lucene indexes. The indexes are created
> locally from millions of small files and then copied to the storage.
> I tried reading these small files from Gluster, but it was too slow.
> So maybe a NUFA-like approach, e.g. preferring the local disk for
> reads, could improve performance.
> Let me know :)
>
> At the moment we use dht/replicate:
>
>
> #CLIENT
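> # six protocol/client bricks paired into three replicate sets,
> # aggregated by distribute; write-behind, quick-read and io-threads
> # are stacked on top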
>
> volume remote1
> type protocol/client
> option transport-type tcp
> option remote-host zeus
> option remote-subvolume brick
> end-volume
>
> volume remote2
> type protocol/client
> option transport-type tcp
> option remote-host hera
> option remote-subvolume brick
> end-volume
>
> volume remote3
> type protocol/client
> option transport-type tcp
> option remote-host apollo
> option remote-subvolume brick
> end-volume
>
> volume remote4
> type protocol/client
> option transport-type tcp
> option remote-host demetra
> option remote-subvolume brick
> end-volume
>
> volume remote5
> type protocol/client
> option transport-type tcp
> option remote-host ade
> option remote-subvolume brick
> end-volume
>
> volume remote6
> type protocol/client
> option transport-type tcp
> option remote-host athena
> option remote-subvolume brick
> end-volume
>
> volume replicate1
> type cluster/replicate
> subvolumes remote1 remote2
> end-volume
>
> volume replicate2
> type cluster/replicate
> subvolumes remote3 remote4
> end-volume
>
> volume replicate3
> type cluster/replicate
> subvolumes remote5 remote6
> end-volume
>
> volume distribute
> type cluster/distribute
> subvolumes replicate1 replicate2 replicate3
> end-volume
>
> volume writebehind
> type performance/write-behind
> option window-size 1MB
> subvolumes distribute
> end-volume
>
> volume quickread
> type performance/quick-read
> option cache-timeout 1         # default 1 second
> #  option max-file-size 256KB        # default 64KB
> subvolumes writebehind
> end-volume
>
> ### Add io-threads for parallel requests
> volume iothreads
> type performance/io-threads
> option thread-count 16 # default is 16
> subvolumes quickread
> end-volume
>
>
> #SERVER
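> # posix backend at /data/export, wrapped in posix locks and io-threads,
> # exported as 'brick' over tcp to any client address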
>
> volume posix
> type storage/posix
> option directory /data/export
> end-volume
>
> volume locks
> type features/locks
> subvolumes posix
> end-volume
>
> volume brick
> type performance/io-threads
> option thread-count 8
> subvolumes locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp
> option auth.addr.brick.allow *
> subvolumes brick
> end-volume
> -- 
> Roberto Franchini
> http://www.celi.it
> http://www.blogmeter.it
> http://www.memesphere.it
> Tel +39.011.562.71.15
> jabber:ro.franchini at gmail.com skype:ro.franchini
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



