[Gluster-users] Gluster-users Digest, Vol 35, Issue 27

Hareem Haque hareem.haque at gmail.com
Fri Mar 11 18:08:33 UTC 2011


Most important of all features: the need for automatic healing and recovery. The
current self-heal trigger mechanism is garbage.
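
For context, the manual trigger being referred to here is the documented crawl
of a client mountpoint that forces replicate self-heal in the 3.1.x series; a
minimal sketch (the mountpoint is just an example):

    # run on a native client mount to force self-heal of every file
    find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null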

Best Regards
Hareem. Haque



On Fri, Mar 11, 2011 at 12:55 PM, <gluster-users-request at gluster.org> wrote:

> Send Gluster-users mailing list submissions to
>        gluster-users at gluster.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
>        gluster-users-request at gluster.org
>
> You can reach the person managing the list at
>        gluster-users-owner at gluster.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
>
>
> Today's Topics:
>
>   1. Re: Seeking Feedback on Gluster Development
>      Priorities/Roadmap (Kon Wilms)
>   2. Re: How to use gluster for WAN/Data Center        replication
>      (anthony garnier)
>   3. Re: Mac / NFS problems (Shehjar Tikoo)
>   4. Repost: read access to replicate copies (Rosario Esposito)
>   5. Re: Why does this setup not survive a node crash? (Burnash, James)
>   6. Re: How to use gluster for WAN/Data Center        replication
>      (Mohit Anchlia)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 10 Mar 2011 16:16:56 -0800
> From: Kon Wilms <konfoo at gmail.com>
> Subject: Re: [Gluster-users] Seeking Feedback on Gluster Development
>        Priorities/Roadmap
> To: gluster-users at gluster.org
> Message-ID:
>        <AANLkTikSEGu30+smCS5Wna2499ZAyfn_9OGQLG+gm-93 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> 1. Stability
> 2. Stability
> 3. Stability
>
> If my customers lose one file, everything else is irrelevant. It
> really is that simple.
>
> Cheers
> Kon
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 11 Mar 2011 09:40:19 +0000
> From: anthony garnier <sokar6012 at hotmail.com>
> Subject: Re: [Gluster-users] How to use gluster for WAN/Data Center
>        replication
> To: <mohitanchlia at gmail.com>, <gluster-users at gluster.org>
> Message-ID: <BAY139-W25FA0D61BFAAB6C63E0D6CAECB0 at phx.gbl>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
> Hi,
> The GSLB+RR is especially usefull for nfs client in fact, for gluster
> client, it's just for volfile.
> The process to remove node entry from DNS is indeed manual, we are looking
> for a way to do it automaticaly, maybe with script....
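>
> For the automatic removal, a minimal sketch of such a script, assuming the
> zone accepts dynamic updates via nsupdate (the nameserver name and key path
> below are hypothetical):
>
>     #!/bin/sh
>     # Remove a failed storage node's address from the round-robin record
>     # so that new DNS lookups stop returning it.
>     FAILED_IP="$1"
>     printf 'server ns1.inetcompany.com\nupdate delete glusterfs.cluster.inetcompany.com. A %s\nsend\n' "$FAILED_IP" \
>         | nsupdate -k /etc/bind/gluster-ddns.key
>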
> What do you mean by "How do you ensure that a copy of file in one site
> definitely is saved on other site as well?"
> Servers from replica pools 1 and 2 are spread across the 2 datacenters:
> Replica pool 1 : Brick 1,2,3,4
> Replica pool 2 : Brick 5,6,7,8
>
> Datacenter 1 : Brick 1,2,5,6
> Datacenter 2 : Brick 3,4,7,8
>
> In this way, each datacenter holds 2 replicas of each file, and each
> datacenter can operate independently if there is a WAN interruption.
>
> Regards,
> Anthony
>
> > Date: Thu, 10 Mar 2011 12:00:53 -0800
> > Subject: Re: How to use gluster for WAN/Data Center replication
> > From: mohitanchlia at gmail.com
> > To: sokar6012 at hotmail.com; gluster-users at gluster.org
> >
> > Thanks for the info! I am assuming it's a manual process to remove
> > nodes from the DNS?
> >
> > If I am not wrong, I think load balancing occurs by default for the
> > native GlusterFS client that you are using. The initial mount is required
> > only to read the volfile.
> >
> > How do you ensure that a copy of file in one site definitely is saved
> > on other site as well?
> >
> > On Thu, Mar 10, 2011 at 1:11 AM, anthony garnier <sokar6012 at hotmail.com>
> > > wrote:
> > > Hi,
> > > I have done a setup (see below) across a multi-site datacenter with
> > > Gluster; currently it doesn't work properly, but there are some
> > > workarounds.
> > > The main problem is that replication is synchronous and there is
> > > currently no way to switch it to async mode. I've done some tests
> > > (iozone, tar, bonnie++, scripts...) and performance is especially poor
> > > with small files. We are using a URL to access the servers:
> > > glusterfs.cluster.inetcompany.com
> > > This URL is served by DNS GSLB (geo DNS) + RR (round robin).
> > > It means that a client from datacenter 1 will always be bound randomly
> > > to a storage node in its own datacenter.
> > > They use this command for mounting the filesystem:
> > > mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus
> > > /users/glusterfs_mnt
> > >
> > > If one node fails, it is removed from the DNS list; the client does a
> > > new DNS query and is bound to an active node in its datacenter.
> > > You could also use a WAN accelerator.
> > >
> > > We are currently in intra-site mode and are waiting for the async
> > > replication feature expected in version 3.2. It should come soon.
> > >
> > >
> > > Volume Name: venus
> > > Type: Distributed-Replicate
> > > Status: Started
> > > Number of Bricks: 2 x 4 = 8
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: serv1:/users/exp1  \
> > > Brick2: serv2:/users/exp2   > Replica pool 1 \
> > > Brick3: serv3:/users/exp3  /                  \
> > > Brick4: serv4:/users/exp4                       => Distribution
> > > Brick5: serv5:/users/exp5  \                  /
> > > Brick6: serv6:/users/exp6   > Replica pool 2 /
> > > Brick7: serv7:/users/exp7  /
> > > Brick8: serv8:/users/exp8
> > >
> > > Datacenter 1 : Brick 1,2,5,6
> > > Datacenter 2 : Brick 3,4,7,8
> > > Distance between Datacenters : 500km
> > > Latency between Datacenters : 11ms
> > > Datarate between Datacenters : ~100Mb/s
> > >
> > >
> > >
> > > Regards,
> > > Anthony
> > >
> > >
> > >
> > >>Message: 3
> > >>Date: Wed, 9 Mar 2011 16:44:27 -0800
> > >>From: Mohit Anchlia <mohitanchlia at gmail.com>
> > >>Subject: [Gluster-users] How to use gluster for WAN/Data Center
> > >>    replication
> > >>To: gluster-users at gluster.org
> > >>Message-ID:
> > >>    <AANLkTi=dkK=zX0QdCfnKeLJ5nkF1dF3+g1hxDzFZNvwx at mail.gmail.com>
> > >>Content-Type: text/plain; charset=ISO-8859-1
> > >>
> > >>How to set up gluster for WAN/Data Center replication? Are there others
> > >>using it this way?
> > >>
> > >>Also, how to make the writes asynchronous for data center replication?
> > >>
> > >>We have a requirement to replicate data to other data center as well.
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 11 Mar 2011 15:22:56 +0530
> From: Shehjar Tikoo <shehjart at gluster.com>
> Subject: Re: [Gluster-users] Mac / NFS problems
> To: David Lloyd <david.lloyd at v-consultants.co.uk>
> Cc: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Message-ID: <4D79F0F8.8020006 at gluster.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> David Lloyd wrote:
> > Hello,
> >
> > We're having issues with Macs writing to our Gluster system.
> > Gluster vol info at end.
> >
> > On a mac, if I make a file in the shell I get the following message:
> >
> > smoke:hunter david$ echo hello > test
> > -bash: test: Operation not permitted
> >
>
> I can help if you can send the nfs.log file from the /etc/glusterd
> directory on the NFS server. Before you run your mount command, set the
> log level to trace for the NFS server, then run the echo command above.
> Unmount as soon as you see the error and email me the nfs.log.
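>
> A hedged sketch of that reproduction sequence from the Mac side, once the
> NFS log level has been raised (server name, mount options and paths are
> taken from the volume details further down; treat them as examples):
>
>     # on the Mac client
>     sudo mount -t nfs -o rsize=32768,wsize=32768,intr gus:/glustervol1 /n/auto/gv1
>     echo hello > /n/auto/gv1/production/hunter/test   # should fail with "Operation not permitted"
>     sudo umount /n/auto/gv1
>     # then collect nfs.log from /etc/glusterd on the NFS server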
>
> -Shehjar
>
>
>
> >
> > And the file is made but is zero size.
> >
> > smoke:hunter david$ ls -l test
> > -rw-r--r--  1 david  realise  0 Mar  3 08:44 test
> >
> >
> > glusterfs/nfslog logs thus:
> >
> > [2011-03-03 08:44:10.379188] I [io-stats.c:333:io_stats_dump_fd]
> > glustervol1: --- fd stats ---
> >
> > [2011-03-03 08:44:10.379222] I [io-stats.c:338:io_stats_dump_fd]
> > glustervol1:       Filename : /production/hunter/test
> >
> > Then try to open the file:
> >
> > smoke:hunter david$ cat test
> >
> > and get the following messages in the log:
> >
> > [2011-03-03 08:51:13.957319] I [afr-common.c:716:afr_lookup_done]
> > glustervol1-replicate-0: background  meta-data self-heal triggered. path:
> > /production/hunter/test
> > [2011-03-03 08:51:13.959466] I
> > [afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
> > glustervol1-replicate-0: background  meta-data self-heal completed on
> > /production/hunter/test
> >
> > If I do the same test on a Linux machine (over NFS), it's fine.
> >
> > We get the same issue on all the Macs; they are all running 10.6.6.
> >
> > Gluster volume is mounted:
> > /n/auto/gv1             -rw,hard,tcp,rsize=32768,wsize=32768,intr
> > gus:/glustervol1
> > Other NFS mounts on the Macs (from Linux servers) are OK.
> >
> > We're using LDAP to authenticate on the Macs; the Gluster servers aren't
> > bound into the LDAP domain.
> >
> > Any ideas?
> >
> > Thanks
> > David
> >
> >
> > g3:/var/log/glusterfs # gluster volume info
> > Volume Name: glustervol1
> > Type: Distributed-Replicate
> > Status: Started
> > Number of Bricks: 4 x 2 = 8
> > Transport-type: tcp
> > Bricks:
> > Brick1: g1:/mnt/glus1
> > Brick2: g2:/mnt/glus1
> > Brick3: g3:/mnt/glus1
> > Brick4: g4:/mnt/glus1
> > Brick5: g1:/mnt/glus2
> > Brick6: g2:/mnt/glus2
> > Brick7: g3:/mnt/glus2
> > Brick8: g4:/mnt/glus2
> > Options Reconfigured:
> > performance.stat-prefetch: 1
> > performance.cache-size: 1gb
> > performance.write-behind-window-size: 1mb
> > network.ping-timeout: 20
> > diagnostics.latency-measurement: off
> > diagnostics.dump-fd-stats: on
> >
> >
> >
> >
> >
> >
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 11 Mar 2011 15:20:43 +0100
> From: Rosario Esposito <resposit at na.infn.it>
> Subject: [Gluster-users] Repost: read access to replicate copies
> To: gluster-users at gluster.org
> Message-ID: <4D7A2FBB.9090906 at na.infn.it>
> Content-Type: text/plain; charset=ISO-8859-15; format=flowed
>
>
> This is a repost; I hope the Gluster developers can answer this question:
>
> If I have a distributed/replicated volume and a gluster native client
> needs to read a file, which server will be chosen?
>
> Let's say I have a 2-node cluster running the following gluster
> configuration:
>
> ---
> Volume Name: myvolume
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: host1:/brick
> Brick2: host2:/brick
> ---
>
> host1 and host2 are also gluster native clients, mounting "myvolume" in
> /gluster
>
> e.g.
>
> [root at host1 ~]# mount | egrep "brick|gluster"
> /dev/sda1 on /brick type ext3 (rw)
> glusterfs#host1:/myvolume on /gluster type fuse
> (rw,allow_other,default_permissions,max_read=131072)
>
> [root at host2 ~]# mount | egrep "brick|gluster"
> /dev/sda1 on /brick type ext3 (rw)
> glusterfs#host2:/myvolume on /gluster type fuse
> (rw,allow_other,default_permissions,max_read=131072)
>
>
> If host1 needs to read the file /gluster/myfile, will it use the local
> copy from host1:/brick or the other copy from host2:/brick over the
> network?
> Is there a way to force the client to read the local copy?
>
> Cheers, Rosario
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 11 Mar 2011 11:31:03 -0500
> From: "Burnash, James" <jburnash at knight.com>
> Subject: Re: [Gluster-users] Why does this setup not survive a node
>        crash?
> To: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Message-ID:
>        <
> 9AD565C4A8561349B7227B79DDB9887369A8D0947A at EXCHANGE3.global.knight.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Could anyone else please take a peek at this and sanity-check my
> configuration? I'm quite frankly at a loss and tremendously under the gun
> ...
>
> Thanks in advance to any kind souls.
>
> James Burnash, Unix Engineering
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:
> gluster-users-bounces at gluster.org] On Behalf Of Burnash, James
> Sent: Thursday, March 10, 2011 3:55 PM
> To: gluster-users at gluster.org
> Subject: [Gluster-users] Why does this setup not survive a node crash?
>
> Perhaps someone will see immediately, given the data below, why this
> configuration will not survive the crash of one node - it appears that
> crashing any node in this set causes Gluster native clients to hang until
> the node comes back.
>
> Given (2) initial storage servers (CentOS 5.5, Gluster 3.1.1):
>
> I started out by creating a distributed-replicated volume with this command:
> gluster volume create test-pfs-ro1 replica 2
> jc1letgfs5:/export/read-only/g01 jc1letgfs6:/export/read-only/g01
> jc1letgfs5:/export/read-only/g02 jc1letgfs6:/export/read-only/g02
>
> This ran fine (though I did not attempt to crash one node of the pair).
>
> And then I added (2) more servers, identically configured, with this
> command:
> gluster volume add-brick test-pfs-ro1 jc1letgfs7:/export/read-only/g01
> jc1letgfs8:/export/read-only/g01 jc1letgfs7:/export/read-only/g02
> jc1letgfs8:/export/read-only/g02
> Add Brick successful
>
> root at jc1letgfs5:~# gluster volume info
>
> Volume Name: test-pfs-ro1
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: jc1letgfs5:/export/read-only/g01
> Brick2: jc1letgfs6:/export/read-only/g01
> Brick3: jc1letgfs5:/export/read-only/g02
> Brick4: jc1letgfs6:/export/read-only/g02
> Brick5: jc1letgfs7:/export/read-only/g01
> Brick6: jc1letgfs8:/export/read-only/g01
> Brick7: jc1letgfs7:/export/read-only/g02
> Brick8: jc1letgfs8:/export/read-only/g02
>
> And this volfile info out of the log file
> /var/log/glusterfs/etc-glusterd-mount-test-pfs-ro1.log:
>
> [2011-03-10 14:38:26.310807] W [dict.c:1204:data_to_str] dict: @data=(nil)
> Given volfile:
>
> +------------------------------------------------------------------------------+
>  1: volume test-pfs-ro1-client-0
>  2:     type protocol/client
>  3:     option remote-host jc1letgfs5
>  4:     option remote-subvolume /export/read-only/g01
>  5:     option transport-type tcp
>  6: end-volume
>  7:
>  8: volume test-pfs-ro1-client-1
>  9:     type protocol/client
>  10:     option remote-host jc1letgfs6
>  11:     option remote-subvolume /export/read-only/g01
>  12:     option transport-type tcp
>  13: end-volume
>  14:
>  15: volume test-pfs-ro1-client-2
>  16:     type protocol/client
>  17:     option remote-host jc1letgfs5
>  18:     option remote-subvolume /export/read-only/g02
>  19:     option transport-type tcp
>  20: end-volume
>  21:
>  22: volume test-pfs-ro1-client-3
>  23:     type protocol/client
>  24:     option remote-host jc1letgfs6
>  25:     option remote-subvolume /export/read-only/g02
>  26:     option transport-type tcp
>  27: end-volume
>  28:
>  29: volume test-pfs-ro1-client-4
>  30:     type protocol/client
>  31:     option remote-host jc1letgfs7
>  32:     option remote-subvolume /export/read-only/g01
>  33:     option transport-type tcp
>  34: end-volume
>  35:
>  36: volume test-pfs-ro1-client-5
>  37:     type protocol/client
>  38:     option remote-host jc1letgfs8
>  39:     option remote-subvolume /export/read-only/g01
>  40:     option transport-type tcp
>  41: end-volume
>  42:
>  43: volume test-pfs-ro1-client-6
>  44:     type protocol/client
>  45:     option remote-host jc1letgfs7
>  46:     option remote-subvolume /export/read-only/g02
>  47:     option transport-type tcp
>  48: end-volume
>  49:
>  50: volume test-pfs-ro1-client-7
>  51:     type protocol/client
>  52:     option remote-host jc1letgfs8
>  53:     option remote-subvolume /export/read-only/g02
>  54:     option transport-type tcp
>  55: end-volume
>  56:
>  57: volume test-pfs-ro1-replicate-0
>  58:     type cluster/replicate
>  59:     subvolumes test-pfs-ro1-client-0 test-pfs-ro1-client-1
>  60: end-volume
>  61:
>  62: volume test-pfs-ro1-replicate-1
>  63:     type cluster/replicate
>  64:     subvolumes test-pfs-ro1-client-2 test-pfs-ro1-client-3
>  65: end-volume
>  66:
>  67: volume test-pfs-ro1-replicate-2
>  68:     type cluster/replicate
>  69:     subvolumes test-pfs-ro1-client-4 test-pfs-ro1-client-5
>  70: end-volume
>  71:
>  72: volume test-pfs-ro1-replicate-3
>  73:     type cluster/replicate
>  74:     subvolumes test-pfs-ro1-client-6 test-pfs-ro1-client-7
>  75: end-volume
>  76:
>  77: volume test-pfs-ro1-dht
>  78:     type cluster/distribute
>  79:     subvolumes test-pfs-ro1-replicate-0 test-pfs-ro1-replicate-1
> test-pfs-ro1-replicate-2 test-pfs-ro1-replicate-3
>  80: end-volume
>  81:
>  82: volume test-pfs-ro1-write-behind
>  83:     type performance/write-behind
>  84:     subvolumes test-pfs-ro1-dht
>  85: end-volume
>  86:
>  87: volume test-pfs-ro1-read-ahead
>  88:     type performance/read-ahead
>  89:     subvolumes test-pfs-ro1-write-behind
>  90: end-volume
>  91:
>  92: volume test-pfs-ro1-io-cache
>  93:     type performance/io-cache
>  94:     subvolumes test-pfs-ro1-read-ahead
>  95: end-volume
>  96:
>  97: volume test-pfs-ro1-quick-read
>  98:     type performance/quick-read
>  99:     subvolumes test-pfs-ro1-io-cache
> 100: end-volume
> 101:
> 102: volume test-pfs-ro1-stat-prefetch
> 103:     type performance/stat-prefetch
> 104:     subvolumes test-pfs-ro1-quick-read
> 105: end-volume
> 106:
> 107: volume test-pfs-ro1
> 108:     type debug/io-stats
> 109:     subvolumes test-pfs-ro1-stat-prefetch
> 110: end-volume
>
> Any input would be greatly appreciated. I'm working beyond my deadline
> already, and I'm guessing that I'm not seeing the forest for the trees here.
>
> James Burnash, Unix Engineering
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
> ------------------------------
>
> Message: 6
> Date: Fri, 11 Mar 2011 09:53:40 -0800
> From: Mohit Anchlia <mohitanchlia at gmail.com>
> Subject: Re: [Gluster-users] How to use gluster for WAN/Data Center
>        replication
> To: anthony garnier <sokar6012 at hotmail.com>, gluster-users at gluster.org
> Message-ID:
>        <AANLkTikTcu+ONJgz4dggbzaxruBxOQadQ8TUFZA_r9k9 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> If you specified a replica count of 2, then I believe there will be only 2
> writes, not 4. My understanding is that with replica 2, if a file is
> created on brick 1 it will be replicated to brick 2. That makes a replica
> count of 2.
>
> Have you seen it otherwise?
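>
> A hedged way to check, using the volume layout quoted below (hostnames and
> brick paths come from Anthony's setup; the test file name is arbitrary), is
> to write one file through the client mount and count the bricks that end up
> holding a copy:
>
>     # from a client with the volume mounted at /users/glusterfs_mnt
>     echo test > /users/glusterfs_mnt/replica-count-check
>     for h in serv1 serv2 serv3 serv4 serv5 serv6 serv7 serv8; do
>         ssh "$h" "ls -l /users/exp*/replica-count-check 2>/dev/null"
>     done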
>
> On Fri, Mar 11, 2011 at 1:40 AM, anthony garnier <sokar6012 at hotmail.com>
> wrote:
> > Hi,
> > The GSLB+RR setup is especially useful for NFS clients; for the native
> > Gluster client it is only used to fetch the volfile.
> > The process of removing a node entry from DNS is indeed manual; we are
> > looking for a way to do it automatically, maybe with a script.
> > What do you mean by "How do you ensure that a copy of file in one site
> > definitely is saved on other site as well?"
> > Servers from replica pools 1 and 2 are spread across the 2 datacenters:
> > Replica pool 1 : Brick 1,2,3,4
> > Replica pool 2 : Brick 5,6,7,8
> >
> > Datacenter 1 : Brick 1,2,5,6
> > Datacenter 2 : Brick 3,4,7,8
> >
> > In this way, each datacenter holds 2 replicas of each file, and each
> > datacenter can operate independently if there is a WAN interruption.
> >
> > Regards,
> > Anthony
> >
> >> Date: Thu, 10 Mar 2011 12:00:53 -0800
> >> Subject: Re: How to use gluster for WAN/Data Center replication
> >> From: mohitanchlia at gmail.com
> >> To: sokar6012 at hotmail.com; gluster-users at gluster.org
> >>
> >> Thanks for the info! I am assuming it's a manual process to remove
> >> nodes from the DNS?
> >>
> >> If I am not wrong, I think load balancing occurs by default for the
> >> native GlusterFS client that you are using. The initial mount is
> >> required only to read the volfile.
> >>
> >> How do you ensure that a copy of file in one site definitely is saved
> >> on other site as well?
> >>
> >> On Thu, Mar 10, 2011 at 1:11 AM, anthony garnier
> >> <sokar6012 at hotmail.com> wrote:
> >> > Hi,
> >> > I have done a setup (see below) across a multi-site datacenter with
> >> > Gluster; currently it doesn't work properly, but there are some
> >> > workarounds.
> >> > The main problem is that replication is synchronous and there is
> >> > currently no way to switch it to async mode. I've done some tests
> >> > (iozone, tar, bonnie++, scripts...) and performance is especially poor
> >> > with small files. We are using a URL to access the servers:
> >> > glusterfs.cluster.inetcompany.com
> >> > This URL is served by DNS GSLB (geo DNS) + RR (round robin).
> >> > It means that a client from datacenter 1 will always be bound randomly
> >> > to a storage node in its own datacenter.
> >> > They use this command for mounting the filesystem:
> >> > mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus
> >> > /users/glusterfs_mnt
> >> >
> >> > If one node fails, it is removed from the DNS list; the client does a
> >> > new DNS query and is bound to an active node in its datacenter.
> >> > You could also use a WAN accelerator.
> >> >
> >> > We are currently in intra-site mode and are waiting for the async
> >> > replication feature expected in version 3.2. It should come soon.
> >> >
> >> >
> >> > Volume Name: venus
> >> > Type: Distributed-Replicate
> >> > Status: Started
> >> > Number of Bricks: 2 x 4 = 8
> >> > Transport-type: tcp
> >> > Bricks:
> >> > Brick1: serv1:/users/exp1  \
> >> > Brick2: serv2:/users/exp2   > Replica pool 1 \
> >> > Brick3: serv3:/users/exp3  /                  \
> >> > Brick4: serv4:/users/exp4                       => Distribution
> >> > Brick5: serv5:/users/exp5  \                  /
> >> > Brick6: serv6:/users/exp6   > Replica pool 2 /
> >> > Brick7: serv7:/users/exp7  /
> >> > Brick8: serv8:/users/exp8
> >> >
> >> > Datacenter 1 : Brick 1,2,5,6
> >> > Datacenter 2 : Brick 3,4,7,8
> >> > Distance between Datacenters : 500km
> >> > Latency between Datacenters : 11ms
> >> > Datarate between Datacenters : ~100Mb/s
> >> >
> >> >
> >> >
> >> > Regards,
> >> > Anthony
> >> >
> >> >
> >> >
> >> >>Message: 3
> >> >>Date: Wed, 9 Mar 2011 16:44:27 -0800
> >> >>From: Mohit Anchlia <mohitanchlia at gmail.com>
> >> >>Subject: [Gluster-users] How to use gluster for WAN/Data Center
> >> >> replication
> >> >>To: gluster-users at gluster.org
> >> >>Message-ID:
> >> >> <AANLkTi=dkK=zX0QdCfnKeLJ5nkF1dF3+g1hxDzFZNvwx at mail.gmail.com>
> >> >>Content-Type: text/plain; charset=ISO-8859-1
> >> >>
> >> >>How to set up gluster for WAN/Data Center replication? Are there others
> >> >>using it this way?
> >> >>
> >> >>Also, how to make the writes asynchronous for data center
> >> >>replication?
> >> >>
> >> >>We have a requirement to replicate data to other data center as well.
> >
>
>
> ------------------------------
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
> End of Gluster-users Digest, Vol 35, Issue 27
> *********************************************
>


More information about the Gluster-users mailing list