[Gluster-users] Two nodes both as server+client

Anand Babu Periasamy ab at zresearch.com
Thu Jun 5 19:51:28 UTC 2008


There is a lot of scope for improvement in both performance and simplicity.

The booster translator will help only when you LD_PRELOAD
glusterfs-booster.so before launching your applications; it bypasses
kernel FUSE for reads and writes. Even in that case, it makes sense to
load the booster translator on the server side.
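
For example, a minimal sketch (the library path is an assumption; point
it at wherever your build installs glusterfs-booster.so):

  # Preload the booster library so this process's reads and writes
  # bypass kernel FUSE, then launch the application as usual.
  LD_PRELOAD=/usr/lib/glusterfs/glusterfs-booster.so httpd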

In your setup, you have two servers acting as complete mirrors of each
other (each is server and client for the other). You can merge the
client and server into one process by loading protocol/server into the
client's address space. It will be a lot simpler and faster: just two
volume spec files, one per box, as sketched below.
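
A minimal sketch of such a merged spec for latsrv1, reusing the
directory, volume, and host names from your configs below (the latsrv2
file is the mirror image with the names swapped):

  volume posix
    type storage/posix
    option directory /home/export
  end-volume

  volume brick-latsrv1
    type features/posix-locks
    subvolumes posix
  end-volume

  volume brick-latsrv2
    type protocol/client
    option transport-type tcp/client
    option remote-host latsrv2
    option remote-subvolume brick-latsrv2
  end-volume

  volume data
    type cluster/afr
    subvolumes brick-latsrv1 brick-latsrv2
    option read-subvolume brick-latsrv1
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.brick-latsrv1.allow *   # let latsrv2's AFR connect
    subvolumes brick-latsrv1
  end-volume

A single glusterfs process per box then both mounts the AFR volume and
exports the local brick to the other box, e.g. (flags as in the 1.3
docs; verify against your release):

  glusterfs -f /etc/glusterfs/latsrv1.vol --volume-name data /mnt/glusterfs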

In the upcoming 1.4 release, you will also be able to use the
web-embeddable glusterfs client to access the storage directly from
Apache's address space (or even run the whole file system inside Apache
or lighttpd). It also brings a binary protocol (fast and efficient) and
non-blocking I/O.

Please see the attached PDF. It will give you a good idea.

--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]
Z RESEARCH Inc [http://www.zresearch.com]



Daniel Jordan Bambach wrote:
> Hiya all..
>
> A scenario that seems to be a very neat solution for a basic
> high-availability webserver setup (Apache, MySQL, Python+Django) is to
> set up two machines, configure master<->master replication between the
> two MySQL databases, and then set up GlusterFS to mirror the filesystem
> that carries the Apache config, Django applications, and file upload
> folders between the machines. You can pull the plug on either, and
> things should keep running on the other.
>
> With this in mind, I have set up an arrangement whereby each box runs
> glusterfsd and has a client running on it that connects to the local
> server. AFR is set up at the server level, so that when/if the other
> machine goes down, the client happily carries on serving read/write
> requests while the server deals with the non-existence of the other
> server.
>
> I've set this up in a test environment and all is working peachy, so we
> are thinking of deploying it to a new production environment.
>
> With this in mind, I wanted to poll the collective knowledge of this
> list to see if there are any gotchas to this setup I might have missed,
> or any obvious performance features I should be using that I am not.
>
> Any help or advice would be greatly appreciated!
>
> Here are the current server and client configs for the two machines:
>
> #common client config (same on both boxes)
> volume initial
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host localhost
>   option remote-subvolume data
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-size 128kB           # 256kB is the default
>   option page-count 4              # 2 is the default
>   option force-atime-update off    # off is the default
>   subvolumes initial
> end-volume
>
> volume data
>   type performance/booster
>   subvolumes readahead
> end-volume
>
> #latsrv1 - server config for box 1
> volume posix
>   type storage/posix                    # POSIX FS translator
>   option directory /home/export         # export this directory
> end-volume
>
> volume brick-latsrv1
>   type features/posix-locks             # POSIX locking on the local brick
>   subvolumes posix
> end-volume
>
> volume brick-latsrv2
>   type protocol/client                  # the mirror brick on the other box
>   option transport-type tcp/client
>   option remote-host latsrv2
>   option remote-subvolume brick-latsrv2
> end-volume
>
> volume brick-afr
>   type cluster/afr                      # replicate across the two bricks
>   subvolumes brick-latsrv1 brick-latsrv2
>   option read-subvolume brick-latsrv1   # prefer local reads
> end-volume
>
> volume data
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 64MB
>   subvolumes brick-afr
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server      # TCP/IP transport
>   option auth.ip.data.allow *           # allow access to the "data" volume
>   option auth.ip.brick-latsrv1.allow *  # let the other box's AFR connect
>   subvolumes data brick-latsrv1
> end-volume
>
> #latsrv2 - server config for box 2
> volume posix
>   type storage/posix                    # POSIX FS translator
>   option directory /home/export         # export this directory
> end-volume
>
> volume brick-latsrv2
>   type features/posix-locks             # POSIX locking on the local brick
>   subvolumes posix
> end-volume
>
> volume brick-latsrv1
>   type protocol/client                  # the mirror brick on the other box
>   option transport-type tcp/client
>   option remote-host latsrv1
>   option remote-subvolume brick-latsrv1
> end-volume
>
> volume brick-afr
>   type cluster/afr                      # replicate across the two bricks
>   subvolumes brick-latsrv1 brick-latsrv2
>   option read-subvolume brick-latsrv2   # prefer local reads
> end-volume
>
> volume data
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 64MB
>   subvolumes brick-afr
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server      # TCP/IP transport
>   option auth.ip.data.allow *           # allow access to the "data" volume
>   option auth.ip.brick-latsrv2.allow *  # let the other box's AFR connect
>   subvolumes data brick-latsrv2
> end-volume
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>


-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterFS-Layout.pdf
Type: application/pdf
Size: 40994 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20080605/a2c18338/attachment.pdf>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterFS-Layout.odp
Type: application/vnd.oasis.opendocument.presentation
Size: 17726 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20080605/a2c18338/attachment.odp>

