[Gluster-users] Gluster-users Digest, Vol 54, Issue 27

Israel Shirk israelshirk at gmail.com
Mon Oct 22 14:55:18 UTC 2012


On 10/21/2012 02:18 PM, Israel Shirk wrote:
> Haris, try the NFS mount.  Gluster typically triggers healing through
> the client, so if you skip the client, nothing heals.
Not true anymore. With 3.3 there's a self-heal daemon that will handle
the heals. You do risk reading stale data if you don't read through the
client though.
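Since 3.3, pending heals can be inspected and kicked off from the CLI rather than through client traffic; a minimal sketch, assuming a replicated volume named `myvol` (these commands need a running Gluster cluster):

```shell
# List files the self-heal daemon still needs to heal
gluster volume heal myvol info

# Trigger healing of everything the daemon already knows about
gluster volume heal myvol

# Force a full sweep of the bricks, e.g. after a brick was down a long time
gluster volume heal myvol full
```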

Healing isn't going to be triggered by writing to the brick itself, and triggering it by running 'ls -l' or stat over everything is kind of a mess.
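For reference, the pre-3.3 recipe was exactly that stat-everything mess: walk a client mount so every lookup forces the client to compare replicas. A sketch, assuming the volume is mounted at /mnt/glusterfs:

```shell
# Stat every file through the client mount; each lookup makes the
# client check the replicas and repair any that are out of sync.
find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null
```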

> The native Gluster client tends to be really @#$@#$@# stupid.  It'll
> send reads to Singapore while you're in Virginia (and there are bricks
> 0.2ms away),
False. The client will read from the first-to-respond. Yes, if Singapore
is responding faster than Virginia you might want to figure out why
Virginia is so overloaded that it's taking more than 200ms to respond,
but really that shouldn't be the case.

Totally agree.  It should not be the case.  Wish it were false.  But when the
client suddenly sends EVERYTHING from Virginia to Singapore, not touching the
servers in Virginia AT ALL, while mounting the same volume over NFS works
great, I have to point my finger at the client.  You can disagree however
much you want, but I'm speaking from very frustrating experience here.

> then when healing is needed it will take a bunch of time to do that,
> all the while it's blocking your application or web server, which
> under heavy loads will cause your entire application to buckle.
False. 3.3 uses granular locking which won't block your application.

Blocking your application as in: lag ties up so many file descriptors that
the application runs out of descriptors or connections and locks up.

> The NFS client is dumb, which in my mind is a lot better - it'll just
> do what you tell it to do and allow you to compensate for connectivity
> issues yourself using something like Linux-HA.
The "NFS client" is probably more apt than you meant. It is both
GlusterFS client and NFS server, and it connects to the bricks and
performs reads and self-heal in exactly the same way as the fuse client.

It works, whereas the native Gluster client wakes you up for hours in the
middle of the night.  You can argue about the internals; I'm just saying one
works and the other fails miserably.
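For what it's worth, mounting through Gluster's built-in NFS server is straightforward; a sketch, assuming a server `gluster1` exporting a volume `myvol` (both names are placeholders; Gluster's NFS server speaks NFSv3 over TCP):

```shell
# Mount via Gluster's built-in NFS server (NFSv3 over TCP)
mount -t nfs -o vers=3,tcp gluster1:/myvol /mnt/myvol
```

A Linux-HA / Pacemaker-managed virtual IP can sit in front of several servers so the mount fails over if one NFS endpoint goes down.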

>
> You have to keep in mind when using gluster that 99% of the people
> using it are running their tests on a single server (see the recent
> notes about how testing patches are only performed on a single server),
False. There are many more testers than that, most of which are outside
of the development team.

You are always right and I am always wrong.  Congratulations :)

I'm simply saying that I keep hearing that Gluster is supposed to work
great for distributed applications (as in, distributed to more than one
place), but the reality is that it's really buggy in that role and nobody
is willing to acknowledge it.  There are big problems with using Gluster
this way, and I can't see that still being the case if it were actually
being tested for it.  If you're having better luck with it in a
high-traffic production environment, by all means update the docs so
others don't have to go through the downtime that results from Gluster's
undocumented issues.

> and most applications don't distribute or mirror to bricks more than a
> few hundred yards away.  Their idea of geo-replication is that you
> send your writes to the other side of the world (which may or may not
> be up at the moment), then twiddle your thumbs for a while and hope it
> gets back to you.  So, that said, it's possible to get it to work, and
> it's almost better than lsyncd, but it'll still make you cry periodically.
>
> Ok, back to happy time :)