[Gluster-devel] GlusterFS unify mountpoint, if one of the bricks offline
NovA
av.nova at gmail.com
Fri Feb 29 11:17:10 UTC 2008
Oops, forgot to mention the system specs:
GlusterFS-1.3.8tla683, fuse-2.7.2-glfs8 @ openSUSE-10.3 x86_64
BTW, I've rebooted everything now, except that offline brick, and
/home is accessible again.
Andrey
2008/2/29, Anand Avati <avati at zresearch.com>:
> Andrey,
> which revision of glusterfs are you using?
>
> avati
>
> 2008/2/29, NovA <av.nova at gmail.com>:
> >
> > Hi everybody!
> >
> > I'm testing GlusterFS unify behaviour in case of failure of a brick.
> > My GlusterFS unify combines 23 bricks and is mounted at /home. Each
> > brick has posix->locks->threads xlators; unifying client has
> > nufa->threads->write_behind xlators.
> >
> > So, I started copying a 20GB file to /home/tmp/file.rar and then
> > switched off one of the bricks (client #5-5 -> c55). The copying
> > continues flawlessly, and listing of /home subdirs works fine, but "ls
> > /home" results in "Transport endpoint is not connected", and
> > /var/log/glusterfs/client.log has the following errors:
> > ---
> > 2008-02-29 12:52:48 E [fuse-bridge.c:436:fuse_entry_cbk]
> > glusterfs-fuse: 264935: / => -1 (116)
> > 2008-02-29 12:52:48 E [tcp-client.c:190:tcp_connect] c55: non-blocking
> > connect() returned: 113 (No r
> > 2008-02-29 12:52:48 W [client-protocol.c:349:client_protocol_xfer]
> > c55: not connected at the moment
> > 2008-02-29 12:52:48 E [fuse-bridge.c:436:fuse_entry_cbk]
> > glusterfs-fuse: 264936: / => -1 (116)
> > 2008-02-29 12:52:48 W [client-protocol.c:349:client_protocol_xfer]
> > c55: not connected at the moment
> > 2008-02-29 12:52:48 W [client-protocol.c:349:client_protocol_xfer]
> > c55: not connected at the moment
> > 2008-02-29 12:52:48 E [fuse-bridge.c:675:fuse_fd_cbk] glusterfs-fuse:
> > 264938: / => -1 (107)
> > ---
> >
> > Is this expected behaviour? I thought that when a brick is switched
> > off, some files in the unify FS would temporarily disappear, but the
> > directory tree should remain valid.
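
For readers reconstructing the setup described above, the two xlator
stacks (posix->locks->io-threads on each brick, nufa-scheduled unify
with io-threads and write-behind on the client) would look roughly like
the following volume-spec sketch. Volume names, the export directory,
and the IP address are hypothetical, and option spellings should be
checked against the GlusterFS 1.3 documentation:

```
# --- Brick-side spec (one of the 23 servers); names are examples ---
volume posix
  type storage/posix
  option directory /data/export      # hypothetical export path
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow *       # open auth, for illustration only
end-volume

# --- Client-side spec; c55 is one of the 23 remote bricks ---
volume c55
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.55    # example address
  option remote-subvolume brick
end-volume

# ... plus the other protocol/client volumes and a namespace volume ...

volume unify
  type cluster/unify
  option scheduler nufa
  subvolumes c55                     # plus the remaining bricks
end-volume

volume iot
  type performance/io-threads
  subvolumes unify
end-volume

volume wb
  type performance/write-behind
  subvolumes iot
end-volume
```

In a stack like this, unify's directory operations (such as readdir on
/) fan out to every subvolume, which is consistent with "ls /home"
failing while access to files on surviving bricks still works.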