[Gluster-users] starting 4th node in 4 node dht cluster fails

Craig Flockhart craigflockhart at yahoo.com
Fri Feb 6 23:11:11 UTC 2009


Hi Amar,
Thanks for the quick reply, but that doesn't work either. I just get more holes and overlaps:

2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=3 overlaps=9
2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: revalidate of / failed (Structure needs cleaning)
2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=3 overlaps=9
2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 2: LOOKUP() / => -1 (Structure needs cleaning)
2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=3 overlaps=9
2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: revalidate of / failed (Structure needs cleaning)
2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=3 overlaps=9
2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 3: LOOKUP() / => -1 (Structure needs cleaning)
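
(If it is relevant: the layout that dht_layout_normalize is complaining about seems to be the one stored in the trusted.glusterfs.dht extended attribute of each backend directory, so it can be dumped on the bricks with something like "getfattr -n trusted.glusterfs.dht -e hex /mnt/chard1/export".)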



________________________________
From: Amar Tumballi (bulde) <amar at gluster.com>
To: Craig Flockhart <craigflockhart at yahoo.com>
Cc: gluster-users at gluster.org
Sent: Friday, February 6, 2009 2:37:08 PM
Subject: Re: [Gluster-users] starting 4th node in 4 node dht cluster fails

Hi Craig,
 As you are using 'distribute' (client side) on top of 'distribute' (server side), that stacking does not work right now. To get things going in the meantime, you can export the 4 volumes from each server individually, and on the client side define 4x4 = 16 protocol/client volumes, which you then aggregate with a single 'cluster/distribute' (having 16 subvolumes), roughly as sketched below.
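
A rough sketch of what I mean (the volume names below are just placeholders, and your existing posix/locks definitions stay as they are):

# server side: export the four locks volumes individually instead of the aggregated 'home'
volume server
 type protocol/server
 option transport-type tcp
 subvolumes locks1 locks2 locks3 locks4
 option auth.addr.locks1.allow *
 option auth.addr.locks2.allow *
 option auth.addr.locks3.allow *
 option auth.addr.locks4.allow *
end-volume

# client side: one protocol/client per exported volume per server
# (4 servers x 4 exports = 16); shown for server 'char' only,
# repeat the same pattern for zwei, pente and tres
volume char-d1
 type protocol/client
 option transport-type tcp
 option remote-host char
 option remote-subvolume locks1
end-volume

# ... char-d2, char-d3 and char-d4 are the same with locks2/locks3/locks4 ...

volume dist1
 type cluster/distribute
 subvolumes char-d1 char-d2 char-d3 char-d4 zwei-d1 zwei-d2 zwei-d3 zwei-d4 pente-d1 pente-d2 pente-d3 pente-d4 tres-d1 tres-d2 tres-d3 tres-d4
end-volume

That way the client-side distribute sees all 16 bricks directly, instead of stacking distribute on top of distribute.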

To get the configuration quoted below working as-is, you will need to wait about another week, IMO.

Regards,
Amar


2009/2/6 Craig Flockhart <craigflockhart at yahoo.com>

Using the dht translator to cluster together 4 nodes, each with 4 disks.
Starting glusterfs on the 4th node causes "Structure needs cleaning" when ls-ing the mount point on any of them. Everything is fine with only 3 nodes started.
Using fuse-2.7.4
GlusterFS 2.0.0rc1
Linux 2.6.18-53.el5 kernel

Errors from the log:


2009-02-06 15:23:51 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=1 overlaps=3
2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:23:51 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: revalidate of / failed (Structure needs cleaning)
2009-02-06 15:23:51 E [dht-layout.c:460:dht_layout_normalize] dist1: found anomalies in /. holes=1 overlaps=3
2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing assignment on /
2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the directory is not a virgin
2009-02-06 15:23:51 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 2: LOOKUP() / => -1 (Structure needs cleaning)

Config for one of the machines:

volume posix-d1
 type storage/posix
 option directory /mnt/chard1/export
end-volume

volume locks1
 type features/locks
 subvolumes posix-d1
end-volume

volume posix-d2
 type storage/posix
 option directory /mnt/chard2/export
end-volume

volume locks2
 type features/locks
 subvolumes posix-d2
end-volume

volume posix-d3
 type storage/posix
 option directory /mnt/chard3/export
end-volume

volume locks3
 type features/locks
 subvolumes posix-d3
end-volume

volume posix-d4
 type storage/posix
 option directory /mnt/chard4/export
end-volume

volume locks4
 type features/locks
 subvolumes posix-d4
end-volume

volume home-ns
 type storage/posix
 option directory /var/local/glusterfs/namespace1
end-volume

volume home
 type cluster/distribute
 subvolumes locks1 locks2 locks3 locks4
end-volume

volume server
 type protocol/server
 option transport-type tcp
 subvolumes home
 option auth.addr.home.allow *
end-volume

volume zwei
 type protocol/client
 option transport-type tcp
 option remote-host zwei
 option remote-subvolume home
end-volume

volume char
 type protocol/client
 option transport-type tcp
 option remote-host char
 option remote-subvolume home
end-volume

volume pente
 type protocol/client
 option transport-type tcp
 option remote-host pente
 option remote-subvolume home
end-volume

volume tres
 type protocol/client
 option transport-type tcp
 option remote-host tres
 option remote-subvolume home
end-volume

volume dist1
 type cluster/distribute
 subvolumes pente char tres zwei
end-volume





-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!