[Gluster-users] NUFA + replicate
Daniel Jordan Bambach
dan at lateral.net
Tue Jun 16 14:01:14 UTC 2009
This probably won't come through properly (see below), but as I understand
it, NUFA won't replicate on its own; you -could-, however, run NUFA across two or
more sets of replicated bricks, as below:
This way you get the parallelism of NUFA and the redundancy of AFR.
{ecp1, ecp2} - replicate        {ecp3, ecp4} - replicate
        |                                |
        ----------------------------------
                        |
                      NUFA
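As a rough, untested sketch (assuming four servers ecp1-ecp4, each exporting a
"brick" volume as in the config further down the thread), the client volfile
for that layout might look something like:

volume ecp1
  type protocol/client
  option transport-type tcp
  option remote-host ecp1
  option remote-subvolume brick
end-volume

# ecp2, ecp3 and ecp4 are defined the same way

volume repl-a
  type cluster/replicate
  subvolumes ecp1 ecp2
end-volume

volume repl-b
  type cluster/replicate
  subvolumes ecp3 ecp4
end-volume

volume nufa
  type cluster/nufa
  # point this at whichever replicate pair contains the local brick
  option local-volume-name repl-a
  subvolumes repl-a repl-b
end-volume

The names repl-a/repl-b are just illustrative; the point is that nufa's
subvolumes are the replicate sets rather than the raw bricks, so new files are
created on the local pair and still mirrored to its partner.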
I have been proven wrong before, though!
D.
On 16 Jun 2009, at 14:08, Dave Drager wrote:
> I am not getting any kind of error like that in my logs. So basically,
> you are saying NUFA will replicate on its own? Because I am not seeing
> that happening on the bricks' exported filesystems.
>
> What I am looking for, same as the original poster of the question, is
> the correct configuration with NUFA+Replication.
>
> Thanks,
> -Dave
>
> On Tue, Jun 16, 2009 at 6:59 AM, Gururaj K<guru at gluster.com> wrote:
>> Hi Dave,
>>
>> Please note that there is an error in your config file: the volumes
>> "replication" and "cluster" are unused (note the error messages in
>> your client logs about these volumes "dangling"). It is indeed
>> possible to have either server-side or client-side replication with
>> nufa.
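>> For example, a rough (untested) server-side sketch would add a replicate
>> translator to each server volfile, mirroring the local brick to a partner
>> server; the names "mirror" and "replicated-brick" below are just
>> illustrative:
>>
>> volume mirror
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host ecp2.razorcloud.gfs  # this server's replication partner
>>   option remote-subvolume brick
>> end-volume
>>
>> volume replicated-brick
>>   type cluster/replicate
>>   subvolumes brick mirror
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp
>>   # export the raw brick for the partner's "mirror" connection,
>>   # and the replicated volume for the clients
>>   option auth.addr.brick.allow 192.168.0.*
>>   option auth.addr.replicated-brick.allow 192.168.0.*
>>   subvolumes brick replicated-brick
>> end-volume
>>
>> The client volfile would then run nufa over one "replicated-brick" per
>> replica pair instead of over the raw bricks. For client-side replication,
>> the replicate volumes live in the client volfile instead, with nufa on top.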
>>
>> Thanks,
>> -gururaj
>>
>>
>> ----- Original Message -----
>> From: "Dave Drager" <ddrager at gmail.com>
>> To: "Federico Sacerdoti" <Federico.Sacerdoti at deshawresearch.com>
>> Cc: gluster-users at gluster.org, cs at gluster.com
>> Sent: Tuesday, 16 June, 2009 02:11:41 GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
>> Subject: Re: [Gluster-users] NUFA + replicate
>>
>> I am using the following config, but replication does not appear to be
>> taking place between the nodes. If I take down the node a file was
>> created on, the other nodes cannot see it. My only guess is that once
>> the nufa translator is loaded, the other volume definitions are ignored.
>> If that is so, is there any way to combine replication with nufa?
>>
>> Anyone have thoughts? Config below:
>>
>> Thanks in advance.
>>
>> -Dave
>>
>> volume posix
>> type storage/posix
>> option directory /data
>> end-volume
>>
>> volume locks
>> type features/locks
>> subvolumes posix
>> end-volume
>>
>> volume brick
>> type performance/io-threads
>> subvolumes locks
>> end-volume
>>
>> volume server
>> type protocol/server
>> option transport-type tcp
>> # option transport.socket.bind-address 192.168.0.2  # Default is to listen on all interfaces
>> option transport.socket.listen-port 6996
>> option auth.addr.brick.allow 192.168.0.*
>> subvolumes brick
>> end-volume
>>
>> volume ecp1
>> type protocol/client
>> option transport-type tcp
>> option remote-host ecp1.razorcloud.gfs
>> option remote-subvolume brick
>> end-volume
>>
>> volume ecp2
>> type protocol/client
>> option transport-type tcp
>> option remote-host ecp2.razorcloud.gfs
>> option remote-subvolume brick
>> end-volume
>>
>> volume ecp3
>> type protocol/client
>> option transport-type tcp
>> option remote-host ecp3.razorcloud.gfs
>> option remote-subvolume brick
>> end-volume
>>
>> volume nufa
>> type cluster/nufa
>> option local-volume-name `hostname`  # note the backquote: the output of 'hostname' is used as the option value
>> # option lookup-unhashed yes
>> subvolumes ecp1 ecp2 ecp3
>> end-volume
>>
>> volume replication
>> type cluster/replicate
>> subvolumes ecp1 ecp2 ecp3
>> end-volume
>>
>> volume cluster
>> type cluster/distribute
>> # option lookup-unhashed yes
>> option min-free-disk 20%
>> subvolumes ecp1 ecp2 ecp3
>> end-volume
>>
>> volume writebehind
>> type performance/write-behind
>> option cache-size 1MB
>> subvolumes nufa
>> end-volume
>>
>> volume cache
>> type performance/io-cache
>> option cache-size 512MB
>> subvolumes writebehind
>> end-volume
>>
>>
>> On Mon, Jun 15, 2009 at 9:36 AM, Sacerdoti, Federico <Federico.Sacerdoti at deshawresearch.com> wrote:
>>> Hello,
>>>
>>> Can you provide an example of a NUFA+replicate config? I would like to
>>> test its performance on 100 nodes, compared to the distribute+replicate
>>> config, which I've already done.
>>>
>>> Thanks,
>>> fds
>>>
>>>
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>