[Gluster-users] SPOF question
Roberto Lucignani
roberto.lucignani at caleidos.it
Sun May 23 14:17:57 UTC 2010
Hi all,
I installed Gluster Storage Platform 3.0.4 on two servers, node01 and
node02.
I created a volume called gluster01, then mounted it on a Debian box
this way:
mount -t glusterfs /etc/glusterfs/gluster01-tcp.vol /mnt/gluster01/
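For completeness, the equivalent /etc/fstab entry on the client would be
something like the sketch below (the _netdev option is an assumption on
my part, to delay the mount until the network is up):

/etc/glusterfs/gluster01-tcp.vol  /mnt/gluster01  glusterfs  defaults,_netdev  0  0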
The gluster01-tcp.vol file is the following:
volume 192.168.0.200-1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.200
    option transport.socket.nodelay on
    option transport.remote-port 10012
    option remote-subvolume brick1
end-volume

volume 192.168.0.200-2
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.200
    option transport.socket.nodelay on
    option transport.remote-port 10012
    option remote-subvolume brick2
end-volume

volume 192.168.0.201-1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.201
    option transport.socket.nodelay on
    option transport.remote-port 10012
    option remote-subvolume brick1
end-volume

volume 192.168.0.201-2
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.201
    option transport.socket.nodelay on
    option transport.remote-port 10012
    option remote-subvolume brick2
end-volume

volume mirror-0
    type cluster/replicate
    subvolumes 192.168.0.201-1 192.168.0.200-1
end-volume

volume mirror-1
    type cluster/replicate
    subvolumes 192.168.0.201-2 192.168.0.200-2
end-volume

volume distribute
    type cluster/distribute
    subvolumes mirror-0 mirror-1
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes distribute
end-volume

volume iocache
    type performance/io-cache
    option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes quickread
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes writebehind
end-volume
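(A side note on the iocache volume above: /proc/meminfo reports MemTotal
in kB, so dividing by 5120 (1024 kB per MB, times 5) sizes the cache at
roughly one fifth of RAM. For example, on a hypothetical 2 GB box:

echo $(( 2097152 / 5120 ))    # prints 409 -> cache-size 409MB, about RAM/5
)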
Everything works fine and smoothly; I can read and write on that volume
without any problem.
The problem is that when node01 is unavailable, I can't access the volume
via the mount on the Debian box. This doesn't happen when node02 is the
unavailable one.
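Here is roughly how I reproduce it (a sketch; the glusterfsd process name
and the use of pkill are my assumptions about how the Storage Platform
runs its server daemon):

# on node01: stop the server daemon
pkill glusterfsd

# on the Debian client: any access now hangs or fails
ls /mnt/gluster01

# with both nodes up, the client holds a TCP connection to each node on port 10012
netstat -tn | grep 10012

If I do the same on node02 instead, the ls keeps working.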
I expected the same behavior in both cases; as it stands, node01
represents an SPOF. Am I wrong? Am I missing something in the configuration?
Thanks in advance,
Roberto