[Gluster-users] Gluster FS replication

Haris Zukanovic haris.zukanovic74 at gmail.com
Mon Oct 22 09:26:06 UTC 2012


Thank you for your answer...
Does using the NFS client ensure replication to all bricks? My problem 
is that I see Gluster leaves "unfinished" replication tasks lying 
around. It seems Gluster needs an external trigger, such as an "ls -l" 
on the file in question, to re-trigger and complete the replication if it 
failed (temporarily) for any reason.
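For what it's worth, a rough sketch of how the heal can be kicked off without waiting for a per-file stat (the volume name "prodvol" is a placeholder, and the "gluster volume heal" commands need GlusterFS 3.3 or later):

```shell
# Volume name "prodvol" is hypothetical; substitute your own.
# On GlusterFS 3.3+ the self-heal daemon can be driven server-side:
gluster volume heal prodvol           # heal entries flagged as needing it
gluster volume heal prodvol full      # crawl the whole volume and heal
gluster volume heal prodvol info      # list entries still pending heal

# On older releases, the usual workaround is to stat every file
# through a client mount, which triggers self-heal file by file:
find /mnt/glusterfs -noleaf -print0 | xargs -0 stat > /dev/null
```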

I have solved the problem of making the application read from the "local 
brick" by bind-mounting the brick locally as read-only and making 
my application separate reads from writes using different filesystem paths.
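Roughly, the split read/write setup looks like this (all paths and the volume name are placeholders; a real setup would put these mounts in /etc/fstab):

```shell
# Writes go through the replicated GlusterFS client mount,
# so they reach all three bricks:
mount -t glusterfs web1-prod:/prodvol /mnt/gluster

# Reads come straight from the local brick, exposed read-only
# via a bind mount so the application cannot write around Gluster:
mount --bind /data/brick1 /srv/www-readonly
mount -o remount,ro,bind /srv/www-readonly
```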




On 21/10/12 23.18, Israel Shirk wrote:
> Haris, try the NFS mount.  Gluster typically triggers healing through 
> the client, so if you skip the client, nothing heals.
>
> The native Gluster client tends to be really @#$@#$@# stupid.  It'll 
> send reads to Singapore while you're in Virginia (and there are bricks 
> 0.2ms away), then when healing is needed it will take a bunch of time 
> to do that, all the while it's blocking your application or web 
> server, which under heavy loads will cause your entire application to 
> buckle.
>
> The NFS client is dumb, which in my mind is a lot better - it'll just 
> do what you tell it to do and allow you to compensate for connectivity 
> issues yourself using something like Linux-HA.
>
> You have to keep in mind when using gluster that 99% of the people 
> using it are running their tests on a single server (see the recent 
> notes about how testing patches are only performed on a single 
> server), and most applications don't distribute or mirror to bricks 
> more than a few hundred yards away.  Their idea of geo-replication is 
> that you send your writes to the other side of the world (which may or 
> may not be up at the moment), then twiddle your thumbs for a while and 
> hope it gets back to you.  So, that said, it's possible to get it to 
> work, and it's almost better than lsyncd, but it'll still make you cry 
> periodically.
>
> Ok, back to happy time :)
>
>     Hi everyone,
>
>     I am using Gluster in replication mode.
>     I have 3 bricks on 3 different physical servers connected over a WAN.
>     This makes writing, but also reading, files from the Gluster-mounted
>     volume very slow.
>     To remedy this I have made my web application read Gluster files from
>     the brick directly (I make a readonly bind mount of the brick), but
>     write to the Gluster FS mounted volume so that the files will
>     instantly
>     replicate on all 3 servers. At least, "instant replication" is what I
>     envision Gluster will do for me :)
>
>     My problem is that files sometimes do not replicate to all 3 servers
>     instantly. There are certainly short network outages which may prevent
>     instant replication and I have situations like this:
>
>     ssh web1-prod ls -l
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>     -rw-r--r-- 1 apache apache 75901 Oct 19 18:00
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>
>     ssh web2-prod ls -l
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>     -rw-r--r-- 1 apache apache 0 Oct 19 18:00
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>
>     ssh web3-prod ls -l
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>     -rw-r--r--. 1 apache apache 75901 Oct 19 18:00
>     /home/gluster/r/production/zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js
>
>     Here the file on the web2 server's brick has a size of 0, so serving
>     this file from web2 causes errors in my application.
>
>     I have had a split-brain situation a couple of times and resolved it
>     manually. The above kind of situation is not split-brain; it resolves
>     and (re-)replicates completely with a simple "ls -l" on the file in
>     question from any of the servers.
>
>     My question is:
>     I suppose the problem here is incomplete replication of the file
>     in question due to temporary network problems.
>     How do I ensure complete replication immediately after the
>     network has been restored?
>
>
>     kind regards
>     Haris Zukanovic
>
>     --
>     Haris Zukanovic
>

-- 
Haris Zukanovic


