[Gluster-devel] strange thing with afr

Krishna Srinivas krishna at zresearch.com
Tue Nov 27 18:50:14 UTC 2007


Hi Albert,

Are you still facing this problem?

Can you paste your config spec file from all the servers and clients?

> > And what I mean when I say it's not working: when a namespace entry is
> > created in my unify volume, the zero-size file in the namespace is created
> > only on node1, not on node-2 or node-3.

Is this the only problem you are facing?
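
For reference, a server-side spec that exports the namespace brick would look
roughly like this on each of the three nodes (a sketch; the export directory
path and the "server" volume name are assumptions, not taken from Albert's
actual setup):

volume brick-ns
 type storage/posix
 option directory /export/ns        # assumed path for the namespace brick
end-volume

volume server
 type protocol/server
 option transport-type tcp/server
 subvolumes brick-ns
 option auth.ip.brick-ns.allow *    # open for testing; restrict in production
end-volume

If "brick-ns" is missing or named differently on node2 or node3, the afr
writes to those subvolumes will fail, which would match the symptom you
describe.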

Regards
Krishna

On Nov 27, 2007 6:17 AM, tony han <hantuo1984 at gmail.com> wrote:
>
>   On Nov 27, 2007 7:45 AM, Albert Shih <Albert.Shih at obspm.fr> wrote:
>
> > Hi All
> >
> > I'm seeing a very strange thing with afr.
> >
> > I'm building a namespace with afr and it's not working... sometimes.
> >
> > At first it worked well... then I rebooted everything and ... it stopped working.
> >
> > Now I've re-initialized everything in glusterfs (fresh mke2fs etc...)
> > and... it's still not working.
> >
> > volume node1-ns
> >  type protocol/client
> >  option transport-type tcp/client     # for TCP/IP transport
> >  option remote-host 145.238.189.22
> >  option transport-timeout 30
> >  option remote-subvolume brick-ns
> > end-volume
> > volume node2-ns
> >  type protocol/client
> >  option transport-type tcp/client     # for TCP/IP transport
> >  option remote-host 145.238.189.23
> >  option transport-timeout 30
> >  option remote-subvolume brick-ns
> > end-volume
> > volume node3-ns
> >  type protocol/client
> >  option transport-type tcp/client     # for TCP/IP transport
> >  option remote-host 145.238.189.24
> >  option transport-timeout 30
> >  option remote-subvolume brick-ns
> > end-volume
> >
> > # Definition du namespace via afr
> >
> > volume ns
> >  type cluster/afr
> >  subvolumes node1-ns node2-ns node3-ns
> > end-volume
> >
> > I'm using in my unify volume
> >
> >  option namespace ns
> >
> > And what I mean when I say it's not working: when a namespace entry is
> > created in my unify volume, the zero-size file in the namespace is created
> > only on node1, not on node-2 or node-3.
> >
> > Why is that?
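
(For context, the unify volume that consumes this namespace would look
roughly like the sketch below; the data subvolume names and the round-robin
scheduler are assumptions, since that part of the spec was not posted:

volume unify
 type cluster/unify
 subvolumes node1 node2 node3   # assumed names of the data bricks
 option namespace ns
 option scheduler rr            # assumed; unify requires some scheduler
end-volume

With afr as the namespace, every zero-size namespace entry should be
replicated to all three -ns subvolumes.)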
>
>
> Check the network with "ping", and make sure the server process,
> "glusterfsd", is running on each node.
> If that doesn't help, try: glusterfsd -f "your config file" -L DEBUG
> -l /dev/stdout
> The output on stdout will tell you what happened :-)
>
> >
> >
> > Another question about afr: is it possible to re-synchronize the "raid"
> > (for example, after one node has been off-line)? Or is it automatic?
> >
>
> For this question, have a look at:
> http://www.gluster.org/docs/index.php/Understanding_AFR_Translator#Self-Heal
> and use the command:
> $ find /mnt/glusterfs -type f -exec head -c 1 {} \; >/dev/null
> GlusterFS AFR heals on read, so reading every file triggers self-heal.
>
>
> >
> > Regards.
> >
> >
> > --
> > Albert SHIH
> > Observatoire de Paris Meudon
> > SIO batiment 15
> > Heure local/Local time:
> > Mar 27 nov 2007 00:40:02 CET
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>
> --
>    Best regards,
>
>                     Han Tuo
>
>
>
>
>

