[Gluster-users] glusterfs 2.0.4 hangs with server side DHT
Anand Avati
avati at gluster.com
Fri Jul 31 19:44:53 UTC 2009
Wei,
In the 2.0.x releases you cannot cascade multiple Distribute
translators. Your setup has one Distribute on the server and another
on the client, but only one Distribute is allowed in the whole graph.
This will be fixed in a future release. As a temporary workaround in
the current release, though not optimal, you could try replacing the
Distribute in the server config with a unify translator instead.
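
As a rough, untested sketch of what that server-side unify could look
like (the namespace directory path and the round-robin scheduler below
are only illustrative choices, not taken from your setup):

volume ns
  type storage/posix
  # unify requires a separate namespace volume that holds the unified
  # directory structure; this path is just an example
  option directory /state/partition1/fuse/glusterfs-ns
end-volume

volume union
  type cluster/unify
  # unify needs a scheduler option; rr (round-robin) is the simplest
  option scheduler rr
  option namespace ns
  subvolumes locks1 locks2 locks3 locks4
end-volume

The io-threads and protocol/server volumes layered on top can stay as
they are, since the volume is still named union.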
Avati
On Fri, Jul 31, 2009 at 12:03 PM, Wei Dong <wdong.pku at gmail.com> wrote:
> Hi,
>
> When I try to aggregate four disks on the server and expose them as a
> single DHT volume, glusterfs hangs on the client side. In some of the
> server-side logs I see:
>
> [2009-07-31 14:48:31] E [dht-common.c:2028:dht_statfs] union: invalid
> argument: loc->inode
> [2009-07-31 14:48:31] E [dht-common.c:2028:dht_statfs] union: invalid
> argument: loc->inode
>
> At the same time, the same server exports another local directory
> without server-side DHT, and that export works correctly. Could anyone
> help me, please?
>
> My server volume file is:
>
> volume posix1
>   type storage/posix
>   option directory /state/partition1/fuse/glusterfs
> end-volume
>
> volume posix2
>   type storage/posix
>   option directory /state/partition2/fuse/glusterfs
> end-volume
>
> volume posix3
>   type storage/posix
>   option directory /state/partition3/fuse/glusterfs
> end-volume
>
> volume posix4
>   type storage/posix
>   option directory /state/partition4/fuse/glusterfs
> end-volume
>
> volume locks1
>   type features/locks
>   subvolumes posix1
> end-volume
>
> volume locks2
>   type features/locks
>   subvolumes posix2
> end-volume
>
> volume locks3
>   type features/locks
>   subvolumes posix3
> end-volume
>
> volume locks4
>   type features/locks
>   subvolumes posix4
> end-volume
>
> volume union
>   type cluster/distribute
>   subvolumes locks1 locks2 locks3 locks4
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 16
>   subvolumes union
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.brick.allow 192.168.99.*
>   subvolumes brick
> end-volume
>
> And the client-side volume file is:
>
> volume brick-0-0
>   type protocol/client
>   option transport-type tcp
>   option remote-host c8-0-0
>   option remote-subvolume brick
> end-volume
>
> ... (repeat 65 times for different nodes)
>
> volume rep-0
>   type cluster/replicate
>   subvolumes brick-0-0 brick-1-0 brick-0-22
> end-volume
>
> ... (repeat 22 times)
>
> volume brick
>   type cluster/distribute
>   subvolumes rep-0 rep-1 rep-2 rep-3 rep-4 rep-5 rep-6 rep-7 rep-8 rep-9 rep-10 rep-11 rep-12 rep-13 rep-14 rep-15 rep-16 rep-17 rep-18 rep-19 rep-20 rep-21
> end-volume
>
> volume client
>   type performance/write-behind
>   option cache-size 32MB
>   option flush-behind on
>   subvolumes brick
> end-volume
>
> Best regards,
>
> Wei Dong
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>