[Gluster-users] starting 4th node in 4 node dht cluster fails
Craig Flockhart
craigflockhart at yahoo.com
Mon Feb 9 18:20:13 UTC 2009
I tried this but still get the same result (3 nodes are OK; starting the 4th breaks it, and it doesn't matter which node is the 4th). By the way, "getfattr -d" returned nothing on the mount directory.
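Side note on the getfattr result: plain "getfattr -d" only dumps user.* attributes by default, so it would not show any trusted.* attributes, and the trusted.glusterfs.dht layout xattr is set on the backend export directories. Something along these lines, run as root on each server against each export directory listed in the config below, should show the layout if it is present:

getfattr -d -m trusted.glusterfs.dht -e hex /mnt/chard1/export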
Config:
volume posix-d1
  type storage/posix
  option directory /mnt/chard1/export
end-volume

volume locks1
  type features/locks
  subvolumes posix-d1
end-volume

volume posix-d2
  type storage/posix
  option directory /mnt/chard2/export
end-volume

volume locks2
  type features/locks
  subvolumes posix-d2
end-volume

volume posix-d3
  type storage/posix
  option directory /mnt/chard3/export
end-volume

volume locks3
  type features/locks
  subvolumes posix-d3
end-volume

volume posix-d4
  type storage/posix
  option directory /mnt/chard4/export
end-volume

volume locks4
  type features/locks
  subvolumes posix-d4
end-volume

volume server
  type protocol/server
  option transport-type tcp
  subvolumes locks1 locks2 locks3 locks4
  option auth.addr.locks1.allow *
  option auth.addr.locks2.allow *
  option auth.addr.locks3.allow *
  option auth.addr.locks4.allow *
end-volume

volume chard1
  type protocol/client
  option transport-type tcp
  option remote-host char
  option remote-subvolume locks1
end-volume

volume chard2
  type protocol/client
  option transport-type tcp
  option remote-host char
  option remote-subvolume locks2
end-volume

volume chard3
  type protocol/client
  option transport-type tcp
  option remote-host char
  option remote-subvolume locks3
end-volume

volume chard4
  type protocol/client
  option transport-type tcp
  option remote-host char
  option remote-subvolume locks4
end-volume

volume zweid1
  type protocol/client
  option transport-type tcp
  option remote-host zwei
  option remote-subvolume locks1
end-volume

volume zweid2
  type protocol/client
  option transport-type tcp
  option remote-host zwei
  option remote-subvolume locks2
end-volume

volume zweid3
  type protocol/client
  option transport-type tcp
  option remote-host zwei
  option remote-subvolume locks3
end-volume

volume zweid4
  type protocol/client
  option transport-type tcp
  option remote-host zwei
  option remote-subvolume locks4
end-volume

volume tresd1
  type protocol/client
  option transport-type tcp
  option remote-host tres
  option remote-subvolume locks1
end-volume

volume tresd2
  type protocol/client
  option transport-type tcp
  option remote-host tres
  option remote-subvolume locks2
end-volume

volume tresd3
  type protocol/client
  option transport-type tcp
  option remote-host tres
  option remote-subvolume locks3
end-volume

volume tresd4
  type protocol/client
  option transport-type tcp
  option remote-host tres
  option remote-subvolume locks4
end-volume

volume pented1
  type protocol/client
  option transport-type tcp
  option remote-host pente
  option remote-subvolume locks1
end-volume

volume pented2
  type protocol/client
  option transport-type tcp
  option remote-host pente
  option remote-subvolume locks2
end-volume

volume pented3
  type protocol/client
  option transport-type tcp
  option remote-host pente
  option remote-subvolume locks3
end-volume

volume pented4
  type protocol/client
  option transport-type tcp
  option remote-host pente
  option remote-subvolume locks4
end-volume

volume dist1
  type cluster/distribute
  subvolumes pented1 pented2 pented3 pented4 chard1 chard2 chard3 chard4 tresd1 tresd2 tresd3 tresd4 zweid1 zweid2 zweid3 zweid4
end-volume
________________________________
From: Krishna Srinivas <krishna at zresearch.com>
To: Craig Flockhart <craigflockhart at yahoo.com>
Cc: Amar Tumballi (bulde) <amar at gluster.com>; gluster-users at gluster.org
Sent: Saturday, February 7, 2009 3:14:54 AM
Subject: Re: [Gluster-users] starting 4th node in 4 node dht cluster fails
Craig,
Delete the backend directories (or remove the trusted.glusterfs.dht xattr
on them and empty the backend directories), re-create them, and then
start DHT and see if it works fine.
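For example, something along these lines on every server (export paths taken
from the volfile you posted; adjust if they differ per server) should clear
the layout xattr before you empty and re-create the directories:

setfattr -x trusted.glusterfs.dht /mnt/chard1/export
setfattr -x trusted.glusterfs.dht /mnt/chard2/export
setfattr -x trusted.glusterfs.dht /mnt/chard3/export
setfattr -x trusted.glusterfs.dht /mnt/chard4/export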
Krishna
On Sat, Feb 7, 2009 at 4:41 AM, Craig Flockhart
<craigflockhart at yahoo.com> wrote:
> Hi Amar,
> Thanks for the quick reply, but that doesn't work either. I just get more
> holes and overlaps:
>
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
> revalidate of / failed (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 2:
> LOOKUP() / => -1 (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
> revalidate of / failed (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 3:
> LOOKUP() / => -1 (Structure needs cleaning)
> ________________________________
> From: Amar Tumballi (bulde) <amar at gluster.com>
> To: Craig Flockhart <craigflockhart at yahoo.com>
> Cc: gluster-users at gluster.org
> Sent: Friday, February 6, 2009 2:37:08 PM
> Subject: Re: [Gluster-users] starting 4th node in 4 node dht cluster fails
>
> Hi Craig,
> As you are stacking 'distribute' (client side) on top of 'distribute' (server
> side), this configuration does not work right now. As a workaround, export the
> 4 volumes from each server individually, and on the client define 4x4
> protocol/client volumes, which you can then aggregate with a single
> 'cluster/distribute' (16 subvolumes). A minimal sketch follows below.
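>
> A minimal sketch of that layout (volume names here are placeholders):
>
> # on each server: export the four locks volumes directly
> volume server
>   type protocol/server
>   option transport-type tcp
>   subvolumes locks1 locks2 locks3 locks4
>   # plus an auth.addr.<name>.allow line for each exported volume
>   option auth.addr.locks1.allow *
> end-volume
>
> # on the client: one protocol/client volume per server/export pair (16 in
> # total), all aggregated by a single cluster/distribute
> volume dist1
>   type cluster/distribute
>   subvolumes chard1 chard2 chard3 chard4 zweid1 zweid2 zweid3 zweid4 tresd1 tresd2 tresd3 tresd4 pented1 pented2 pented3 pented4
> end-volume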
>
> To get the configuration below working as is, you would need to wait about
> a week more, IMO.
>
> Regards,
> Amar
>
> 2009/2/6 Craig Flockhart <craigflockhart at yahoo.com>
>>
>> Using dht translator to cluster together 4 nodes each with 4 disks.
>> Starting glusterfs on the 4th causes "Structure needs cleaning" when
>> ls-ing the mount point on any of them. It's fine with 3 nodes started.
>> Using fuse-2.7.4
>> GlusterFS 2.0.0rc1
>> Linux 2.6.18-53.el5 kernel
>>
>> Errors from the log:
>>
>>
>> 2009-02-06 15:23:51 E [dht-layout.c:460:dht_layout_normalize] dist1: found
>> anomalies in /. holes=1 overlaps=3
>> 2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
>> assignment on /
>> 2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1:
>> the directory is not a virgin
>> 2009-02-06 15:23:51 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
>> revalidate of / failed (Structure needs cleaning)
>> 2009-02-06 15:23:51 E [dht-layout.c:460:dht_layout_normalize] dist1: found
>> anomalies in /. holes=1 overlaps=3
>> 2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
>> assignment on /
>> 2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1:
>> the directory is not a virgin
>> 2009-02-06 15:23:51 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse:
>> 2: LOOKUP() / => -1 (Structure needs cleaning)
>>
>> Config for one of the machines:
>>
>> volume posix-d1
>> type storage/posix
>> option directory /mnt/chard1/export
>> end-volume
>>
>> volume locks1
>> type features/locks
>> subvolumes posix-d1
>> end-volume
>>
>>
>> volume posix-d2
>> type storage/posix
>> option directory /mnt/chard2/export
>> end-volume
>>
>>
>> volume locks2
>> type features/locks
>> subvolumes posix-d2
>> end-volume
>>
>>
>> volume posix-d3
>> type storage/posix
>> option directory /mnt/chard3/export
>> end-volume
>>
>> volume locks3
>> type features/locks
>> subvolumes posix-d3
>> end-volume
>>
>>
>> volume posix-d4
>> type storage/posix
>> option directory /mnt/chard4/export
>> end-volume
>>
>> volume locks4
>> type features/locks
>> subvolumes posix-d4
>> end-volume
>>
>> volume home-ns
>> type storage/posix
>> option directory /var/local/glusterfs/namespace1
>> end-volume
>>
>> volume home
>> type cluster/distribute
>> subvolumes locks1 locks2 locks3 locks4
>> end-volume
>>
>> volume server
>> type protocol/server
>> option transport-type tcp
>> subvolumes home
>> option auth.addr.home.allow *
>> end-volume
>>
>>
>> volume zwei
>> type protocol/client
>> option transport-type tcp
>> option remote-host zwei
>> option remote-subvolume home
>> end-volume
>>
>> volume char
>> type protocol/client
>> option transport-type tcp
>> option remote-host char
>> option remote-subvolume home
>> end-volume
>>
>> volume pente
>> type protocol/client
>> option transport-type tcp
>> option remote-host pente
>> option remote-subvolume home
>> end-volume
>>
>> volume tres
>> type protocol/client
>> option transport-type tcp
>> option remote-host tres
>> option remote-subvolume home
>> end-volume
>>
>> volume dist1
>> type cluster/distribute
>> subvolumes pente char tres zwei
>> end-volume
>>
>>
>>
>>
>>
>>
>
>
>
> --
> Amar Tumballi
> Gluster/GlusterFS Hacker
> [bulde on #gluster/irc.gnu.org]
> http://www.zresearch.com - Commoditizing Super Storage!
>