[Gluster-devel] Problems with self-heal
Amar S. Tumballi
amar at zresearch.com
Mon Feb 18 19:48:13 UTC 2008
Can you please let us know which versions of fuse and glusterfs you are
running these tests with?
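Also, a backtrace from the crashing server process would tell us much more
than strace output. Something along these lines should work (the binary
name and pid lookup here are illustrative; adjust them to your install):

  gdb -p $(pidof glusterfsd)
  (gdb) continue
  ... reproduce the self-heal from the client, wait for the SIGSEGV ...
  (gdb) backtrace

Alternatively, start the server with 'ulimit -c unlimited' in effect and
load the resulting core file into gdb after the crash.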
-amar
On Feb 18, 2008 11:03 PM, E-Comm Factory <sistemas at e-commfactory.com> wrote:
> Hello,
>
> I have 2 boxes, each with 4 disks unified into a single volume (so I have
> 2 volumes). Then, on the client side, I have set up AFR over these 2
> virtual volumes.
>
> For testing purposes I deleted one file on the second afr subvolume and
> then tried to self-heal the global afr volume, but it crashes with this
> error:
>
> [afr.c:2754:afr_open] disk: self heal failed, returning EIO
> [fuse-bridge.c:675:fuse_fd_cbk] glusterfs-fuse: 98: /fichero4.img => -1
> (5)
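>
> In case it helps, this is roughly how I reproduce it (the client
> mountpoint /mnt/gluster is just an example of my layout):
>
> # remove one copy directly on the backend of the second box
> rm /mnt/disk1/fichero4.img
> # then open the file through the client mount to trigger self-heal
> ls -l /mnt/gluster/fichero4.img
> cat /mnt/gluster/fichero4.img > /dev/null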
>
> Attaching strace to the pid of the glusterfs server that serves the first
> afr subvolume shows it crashing too during self-heal:
>
> epoll_wait(6, {{EPOLLIN, {u32=6304624, u64=6304624}}}, 2, 4294967295) = 1
> read(4, out of memory
> 0x7fff9a3b1d90, 113) = 113
> read(4, Segmentation fault
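>
> (I attach strace roughly like this, the pid lookup being illustrative:
> strace -f -p `pidof glusterfsd`)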
>
> My server config file (same on both server boxes):
>
> # datastores
> volume disk1
>   type storage/posix
>   option directory /mnt/disk1
> end-volume
> volume disk2
>   type storage/posix
>   option directory /mnt/disk2
> end-volume
> volume disk3
>   type storage/posix
>   option directory /mnt/disk3
> end-volume
> volume disk4
>   type storage/posix
>   option directory /mnt/disk4
> end-volume
>
> # namespaces
> volume disk1-ns
>   type storage/posix
>   option directory /mnt/disk1-ns
> end-volume
> volume disk2-ns
>   type storage/posix
>   option directory /mnt/disk2-ns
> end-volume
> #volume disk3-ns
> #  type storage/posix
> #  option directory /mnt/disk3-ns
> #end-volume
> #volume disk4-ns
> #  type storage/posix
> #  option directory /mnt/disk4-ns
> #end-volume
>
> # AFR of the namespaces
> volume disk-ns-afr
>   type cluster/afr
>   subvolumes disk1-ns disk2-ns
>   option scheduler random
> end-volume
>
> # unify of the datastores
> volume disk-unify
>   type cluster/unify
>   subvolumes disk1 disk2 disk3 disk4
>   option namespace disk-ns-afr
>   option scheduler rr
> end-volume
>
> # performance for the disk
> volume disk-fs11
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 64MB
>   subvolumes disk-unify
> end-volume
>
> # allow access from any client
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes disk-fs11
>   option auth.ip.disk-fs11.allow *
> end-volume
>
> My client config file:
>
> volume disk-fs11
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.34
>   option remote-subvolume disk-fs11
> end-volume
>
> volume disk-fs12
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.35
>   option remote-subvolume disk-fs12
> end-volume
>
> volume disk
>   type cluster/afr
>   subvolumes disk-fs11 disk-fs12
> end-volume
>
> volume trace
>   type debug/trace
>   subvolumes disk
>   # option includes open,close,create,readdir,opendir,closedir
>   # option excludes lookup,read,write
> end-volume
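>
> For completeness, I mount the client more or less like this (the spec
> file path and mountpoint are just examples):
>
> glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/gluster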
>
> Could anyone help me?
>
> Thanks in advance.
>
> --
> ecomm
> sistemas at e-commfactory.com
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!