[Gluster-users] Re: SPOF question

Roberto Lucignani roberto.lucignani at caleidos.it
Tue May 25 23:21:13 UTC 2010


Hi Bala,
thank you very much for your reply. Yes, I tried both methods, one building a
vol file myself and one retrieving the configuration from the server, and both
work fine.

However, my problem does not concern the mount process; it concerns the SPOF
represented by node01. If that node is unavailable, I can't access my volume.

Why is this? And why does it happen only with the first node?
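To isolate it, I sketched a minimal spec that mounts just one replicate pair,
built from the same bricks and port as the gluster01-tcp.vol quoted below
(I'm assuming here that node01 is 192.168.0.200 and node02 is 192.168.0.201,
as in the full spec):

volume node01-brick1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.200
    option transport.remote-port 10012
    option remote-subvolume brick1
end-volume

volume node02-brick1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.201
    option transport.remote-port 10012
    option remote-subvolume brick1
end-volume

volume mirror-test
    type cluster/replicate
    subvolumes node02-brick1 node01-brick1
end-volume

mount -t glusterfs /etc/glusterfs/mirror-test.vol /mnt/test

If this pair stays accessible with node01 down, the problem is somewhere
higher in the full spec rather than in replicate itself.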

Regards
M4dG

-----Original message-----
From: Bala.JA [mailto:bala at gluster.com] 
Sent: Sunday, May 23, 2010 8:58 PM
To: roberto.lucignani at caleidos.it
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] SPOF question


Hi Roberto,

Gluster Storage Platform provides the client volume spec file through the
server for created volumes.

You can mount using

mount -t glusterfs <server>:<volume>-<transport> <your-mount-point>

for example,
mount -t glusterfs node01:gluster01-tcp /mnt/gluster01

It's not required to write your own spec file to mount it.
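Note that, as far as the mount is concerned, the server you name is only
contacted to fetch the volume spec; after that the client connects directly
to every brick listed in the spec. So, assuming gluster01 was created across
both of your nodes, you should also be able to mount from the second one,
for example:

mount -t glusterfs node02:gluster01-tcp /mnt/gluster01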

Thanks,

Regards,
Bala



Roberto Lucignani wrote:
> Hi all,
> 
> I installed Gluster Storage Platform 3.0.4 on two servers, node01 and
> node02.
> 
> I created a volume called gluster01, then I mounted it on a Debian box
> this way:
>
> mount -t glusterfs /etc/glusterfs/gluster01-tcp.vol /mnt/gluster01/
>
> (Since I pass a local spec file, the mount itself does not fetch anything
> from a server; the client connects directly to the bricks listed in the
> file.)
>
> the gluster01-tcp.vol is the following:
>
> volume 192.168.0.200-1
>     type protocol/client
>     option transport-type tcp
>     option remote-host 192.168.0.200
>     option transport.socket.nodelay on
>     option transport.remote-port 10012
>     option remote-subvolume brick1
> end-volume
>
> volume 192.168.0.200-2
>     type protocol/client
>     option transport-type tcp
>     option remote-host 192.168.0.200
>     option transport.socket.nodelay on
>     option transport.remote-port 10012
>     option remote-subvolume brick2
> end-volume
>
> volume 192.168.0.201-1
>     type protocol/client
>     option transport-type tcp
>     option remote-host 192.168.0.201
>     option transport.socket.nodelay on
>     option transport.remote-port 10012
>     option remote-subvolume brick1
> end-volume
>
> volume 192.168.0.201-2
>     type protocol/client
>     option transport-type tcp
>     option remote-host 192.168.0.201
>     option transport.socket.nodelay on
>     option transport.remote-port 10012
>     option remote-subvolume brick2
> end-volume
>
> volume mirror-0
>     type cluster/replicate
>     subvolumes 192.168.0.201-1 192.168.0.200-1
> end-volume
>
> volume mirror-1
>     type cluster/replicate
>     subvolumes 192.168.0.201-2 192.168.0.200-2
> end-volume
>
> volume distribute
>     type cluster/distribute
>     subvolumes mirror-0 mirror-1
> end-volume
>
> volume readahead
>     type performance/read-ahead
>     option page-count 4
>     subvolumes distribute
> end-volume
>
> volume iocache
>     type performance/io-cache
>     option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
>     option cache-timeout 1
>     subvolumes readahead
> end-volume
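> (If I read that backtick expression right: MemTotal in /proc/meminfo is in
> kB, so dividing by 5120 gives one fifth of total RAM, expressed in MB. For
> example, with 4 GiB of RAM: 4194304 kB / 5120 = 819, i.e. cache-size 819MB.)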
>
> volume quickread
>     type performance/quick-read
>     option cache-timeout 1
>     option max-file-size 64kB
>     subvolumes iocache
> end-volume
>
> volume writebehind
>     type performance/write-behind
>     option cache-size 4MB
>     subvolumes quickread
> end-volume
>
> volume statprefetch
>     type performance/stat-prefetch
>     subvolumes writebehind
> end-volume
>
> Everything works fine and smooth: I can write to and read from that volume
> without any problem.
> 
> The problem is that when node01 is unavailable, I can't access the volume
> via the mount on the Debian box. This doesn't happen when it is node02 that
> is unavailable.
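> Just as an example of how I trigger it (the exact way I take a node offline
> doesn't seem to matter):
>
> # with node01 down:
> ls /mnt/gluster01     # no longer works
> # with node02 down instead:
> ls /mnt/gluster01     # works as usual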
> 
> I expected the same behavior in the two cases, since each mirror keeps a
> copy of its files on both servers. As it stands, node01 represents a SPOF.
> Am I wrong? Am I missing something in the configuration?
>
> Thanks in advance
>
> Roberto