[Gluster-users] Two nodes both as server+client

Daniel Jordan Bambach dan at lateral.net
Wed Jun 18 11:46:31 UTC 2008


Oh, Amar, thank you!
It connected straight away!

Many thanks.

On 18 Jun 2008, at 05:58, Amar S. Tumballi wrote:

> Hi Daniel,
>  I have fixed a bug in the mainline-2.5 branch (from which the 1.3.x
> releases are made) which addresses this issue. Let me know if you can
> try out the latest patchset from tla, or use the link below. (Note
> that the version number is the same, so you need to make sure you ran
> 'make uninstall' or 'rpm -e glusterfs' before installing this.)
>
> http://gnu.zresearch.com/~amar/qa-releases/glusterfs-1.3.9.tar.gz
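>
> (Roughly, the upgrade looks like the following, assuming a standard
> autotools build; the old source-tree path is just a placeholder:)
>
>   # remove the existing install first
>   cd /path/to/old/glusterfs-source && make uninstall   # or: rpm -e glusterfs
>
>   # then build and install the QA tarball
>   wget http://gnu.zresearch.com/~amar/qa-releases/glusterfs-1.3.9.tar.gz
>   tar xzf glusterfs-1.3.9.tar.gz
>   cd glusterfs-1.3.9
>   ./configure && make && make install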
>
> Regards,
>
> On Tue, Jun 10, 2008 at 8:44 AM, Daniel Jordan Bambach <dan at lateral.net 
> > wrote:
> Thanks to Anand, I have seen some serious speed-ups in local machine
> performance by combining the server and client within the client
> config. This removed a few overheads, and both write and read speeds
> are up on each individual machine.
>
> However, using the attached spec files, neither server is able to
> connect to the other, and I am stumped as to why. Each log file reads
> the equivalent of:
>
> 2008-06-10 13:07:32 E [tcp-client.c:171:tcp_connect] latsrv2-local: non-blocking connect() returned: 111 (Connection refused)
>
> This simply looks like there is no protocol/server for the other
> client to connect to.
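>
> (A quick sanity check, assuming the stock 1.3 listen port of 6996;
> the port number here is an assumption, so substitute whatever your
> build actually uses:)
>
>   # on latsrv2: is the protocol/server translator listening at all?
>   netstat -lnt | grep 6996
>
>   # from latsrv1: can we reach it across the network?
>   telnet latsrv2 6996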
>
> Is anyone able to spot a howler in here, or is it something more  
> fundamental?
>
> P.S. Apologies to Anand for sending this to you twice!
>
> The two client specs are below (as there are no longer any separate
> server specs!):
>
> LATSRV1:
> #Start with server defs in our client conf.
> #We can save on the overhead of a separate glusterfsd
> #because we are always running a server+client pair.
>
> volume posix
>  type storage/posix
>  option directory /home/export2
> end-volume
>
> volume plocks
>   type features/posix-locks
>   subvolumes posix
> end-volume
>
> volume latsrv1-local
>  type performance/io-threads
>  option thread-count 8
>  option cache-size 64MB
>  subvolumes plocks
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server	# For TCP/IP transport
>  option auth.ip.latsrv1-local.allow *		# Allow access to "brick" volume
>  subvolumes latsrv1-local
> end-volume
>
> #Continue with the client spec..
>
> volume latsrv2-local
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host latsrv2
> # option remote-subvolume 195.224.189.148	# fake to model an unavailable server
>  option remote-subvolume latsrv2-local
> end-volume
>
> volume data-afr
>  type cluster/afr
>  subvolumes latsrv1-local latsrv2-local
>  option read-subvolume latsrv1-local
>  option self-heal on
> end-volume
>
> volume data
>  type performance/read-ahead
>  option page-size 128kB		# 256KB is the default option
>  option page-count 4			# 2 is default option
>  option force-atime-update off	# default is off
>  subvolumes data-afr
> end-volume
>
> #we will mount the volume data.
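>
> (For reference, a minimal way to mount this on latsrv1, assuming the
> spec file is saved at /etc/glusterfs/client.vol; both paths below are
> placeholders:)
>
>   mkdir -p /mnt/data
>   glusterfs -f /etc/glusterfs/client.vol /mnt/data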
>
>
>
> LATSRV2:
> #Start with server defs in our client conf.
> #We can save on the overhead of a separate glusterfsd
> #because we are always running a server+client pair.
>
> volume posix
>  type storage/posix
>  option directory /home/export2
> end-volume
>
> volume plocks
>   type features/posix-locks
>   subvolumes posix
> end-volume
>
> volume latsrv2-local
>  type performance/io-threads
>  option thread-count 8
>  option cache-size 64MB
>  subvolumes plocks
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server	# For TCP/IP transport
>  option auth.ip.latsrv2-local.allow *		# Allow access to "brick" volume
>  subvolumes latsrv2-local
> end-volume
>
> #Continue with the client spec..
>
> volume latsrv1-local
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host latsrv1
> # option remote-subvolume 195.224.189.148	# fake to model an unavailable server
>  option remote-subvolume latsrv1-local
> end-volume
>
> volume data-afr
>  type cluster/afr
>  subvolumes latsrv1-local latsrv2-local
>  option read-subvolume latsrv2-local
>  option self-heal on
> end-volume
>
> volume data
>  type performance/read-ahead
>  option page-size 128kB		# 256KB is the default option
>  option page-count 4			# 2 is default option
>  option force-atime-update off	# default is off
>  subvolumes data-afr
> end-volume
>
> #we will mount the volume data.
>
>
>
> On 5 Jun 2008, at 20:51, Anand Babu Periasamy wrote:
>
>> There is a lot of scope for improvement in both performance and
>> simplicity.
>>
>> The booster translator will help only when you LD_PRELOAD
>> glusterfs-booster.so before launching your applications; it bypasses
>> kernel FUSE for reads and writes. Even in that case, it makes sense
>> to load the booster translator on the server side.
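>>
>> (For example, something along these lines; the exact install path of
>> glusterfs-booster.so depends on your build prefix, so treat it as a
>> placeholder:)
>>
>>   LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so ./your-app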
>>
>> In your setup, you have two servers acting as complete mirrors of
>> each other (each is both server and client for the other). You can
>> merge the client and server into one process by loading
>> protocol/server into the client's address space. It will be a lot
>> simpler and faster: just two vol spec files.
>>
>> In the upcoming 1.4, you will also be able to use the web-embeddable
>> glusterfs client to access the storage directly from Apache's address
>> space (or even run the whole filesystem inside Apache or lighttpd).
>> It also has a binary protocol (fast and efficient) and non-blocking
>> I/O.
>>
>> Please see the attached PDF. It will give you a good idea.
>>
>> --
>> Anand Babu Periasamy
>> GPG Key ID: 0x62E15A31
>> Blog [http://ab.freeshell.org]
>> The GNU Operating System [http://www.gnu.org]
>> Z RESEARCH Inc [http://www.zresearch.com]
>>
>>
>>
>> Daniel Jordan Bambach wrote:
>>> Hiya all..
>>>
>>> A scenario that seems like a very neat solution to a basic
>>> high-availability webserver setup (Apache, MySQL, Python+Django) is
>>> to set up two machines, configure master<->master replication
>>> between the two MySQL databases, and then set up GlusterFS to mirror
>>> the filesystem that carries the Apache config, Django applications,
>>> and file upload folders between the machines. You can pull the plug
>>> on either, and things should keep running on the other.
>>>
>>> With this in mind, I have set up an arrangement whereby each box
>>> runs GlusterFSD and has a client running on it that connects to the
>>> local server. AFR is set up at the server level, so that if/when the
>>> other machine goes down, the client happily carries on dealing with
>>> read/write requests while the server deals with the non-existence of
>>> the other server.
>>>
>>> I've set this up in a test environment, and all is working peachy,
>>> and we are thinking of deploying this to a new production
>>> environment.
>>>
>>> With this in mind, I wanted to poll the collective knowledge of this
>>> list to see if there are any gotchas in this setup I might have
>>> missed, or any obvious performance features I should be using that I
>>> am not.
>>>
>>> Any help or advice would be greatly appreciated!
>>>
>>> Here are the current server and client configs for the two machines:
>>>
>>>
>>>
>>
>>
>>
>> <GlusterFS-Layout.pdf><GlusterFS-Layout.odp>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>
>
>
> -- 
> Amar Tumballi
> Gluster/GlusterFS Hacker
> [bulde on #gluster/irc.gnu.org]
> http://www.zresearch.com - Commoditizing Super Storage!
