[Gluster-devel] ping timeout

Gordan Bobic gordan at bobich.net
Thu Mar 25 12:13:17 UTC 2010


Stephan von Krawczynski wrote:
> On Thu, 25 Mar 2010 10:43:10 +0000
> Gordan Bobic <gordan at bobich.net> wrote:
> 
>> Stephan von Krawczynski wrote:
>>> On Thu, 25 Mar 2010 09:56:24 +0000
>>> Gordan Bobic <gordan at bobich.net> wrote:
>>>
>>>>> If I have your mentioned scenario right, including what you believe 
>>>>> should happen:
>>>>>
>>>>>     * First node goes down. Simple enough.
>>>>>     * Second node has new file operations performed on it that the first
>>>>>       node does not get.
>>>>>     * First node comes up. It is completely fenced from all other
>>>>>       machines to get itself in sync with the second node.
>>>>>     * Second node goes down. Is it before/after first node is synced?
>>>>>           o If it is before then you have a fully isolated FS that is
>>>>>             not accessible.
>>>>>           o If it is after then you don't have a problem.
>>>>>
>>>>> I would suggest writing a script and performing some firewalling to 
>>>>> perform the fencing.
>>>> This is not really good enough - you need an out-of-band fencing device 
>>>> that you can use to forcibly down the node that disconnected, e.g. 
>>>> remote power-off by power management (e.g. UPS or a network controllable 
>>>> power bar) or remote server management (Dell DRAC, Raritan eRIC G4, HP 
>>>> iLO, Sun LOM, etc.). When the node gets rebooted, it has to notice there 
>>>> are other nodes already up and specifically set itself into such a mode 
>>>> that it will lose any contest on being the source node for resync until 
>>>> it has fully checked all the files' metadata against its peers.
>>>>
>>>>> I believe you can run ls -R on the file-system to 
>>>>> get it in sync. You would need to mount glfs locally on the first node, 
>>>>> get it in sync, then open the firewall ports afterward. Is that an 
>>>>> appropriate solution?
>>>> The problem is that firewalling would have to be applied by every node 
>>>> other than the node that dropped off, and this would need to be 
>>>> communicated to all the other nodes, and they would have to confirm 
>>>> before the fencing action is deemed to have succeeded. This is a lot 
>>>> more complex and error prone compared to just using a single point of 
>>>> fencing for each node such as a network controlled power bar.
>>>> (e.g. 
>>>> http://www.linuxfordevices.com/c/a/News/Entrylevel-4port-IP-power-switch-runs-Linux/
>>>> )
>>> Let me add some thoughts here:
>>> First it looks obvious to me that fencing is not needed for glusterfs in the
>>> described cases. If your first node comes up again it will not deliver data
>>> that is not in-sync with the second node, that is what glusterfs is all about.
>> Not quite - there are a lot of failure modes that involve network 
>> partitioning that WILL cause split-brain and unhealable files.
> 
> I was talking about the given example. Of course you may create any number of
> setups that have a potential to explode without chance for restoration.
> My general advice would be to try to keep the network setup as simple as
> possible, because this is obviously one major source of destruction.
> Creating a really fault-tolerant setup cannot depend only on the cluster fs
> used, because whatever you use, none will save you in every case.
> But if you design the setup carefully around glusterfs chances are you get
> away with fault scenarios where others are just plain dead.

Or at least partially alive / potentially partially corrupted vs. plain 
dead. For some use cases that is an advantage; for others, no service at 
all is preferable to potentially corrupted files.

> And there is a
> good bunch of troubles you cannot run into per design, namely storage issues.

There's an extra layer of recoverability, but most of the storage 
failure modes still exist, albeit one step further down the stack.

>>> Now, when your second node goes down while the first is not completely synced
>>> you only have these choices:
>>> 1. Blow up the setup and deliver nothing
>>> 2. Deliver what the first node actually has.
>>> It looks obvious that the second choice is preferable, because whatever the
>>> out-of-sync data is, there is likely in-sync data to be served too. So you
>>> are at least partly saved.
>> But you are opening yourself up to the prospect of having files that cannot 
>> be healed. I can think of plenty of cases where that is a worse outcome 
>> than just blocking/fencing.
> 
> In fact this is only a matter of how paranoid you want to be. You can reduce
> the risk of seeing these cases by adding additional bricks to your
> replication. Since you only need one brick out of X to stay alive in order to
> eliminate the risk, it is in fact all up to you.

That would only be the case if there were a concept of quorum in glfs, 
which, AFAIK, there isn't. You'd need some sort of quorum-based voting 
mechanism to decide which node to kick out of the cluster/fence, and then 
arrange that in such a way that it fulfills the requirements for 
availability and fault tolerance. If your cluster has to be quorate, 
then there can be no split-brain.

In a way, all glfs does is massively improve the granularity of cluster 
fs operations, from fs level down to file level. But all the basic 
clustering concepts and requirements remain the same (fencing, quorum, 
split-brain, etc.).
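
To spell out what I mean by quorum (this is purely an illustration with
made-up node names, not anything glfs does today): a partition may only keep
serving, and only fence the nodes it cannot see, if it holds a strict
majority of the configured cluster. Two halves of a split can then never
both carry on, which is exactly what rules out split-brain:

# Minimal illustration of a quorum rule; node names are hypothetical.
CLUSTER = {"glfs-a", "glfs-b", "glfs-c"}   # a 3-node replica set

def is_quorate(reachable_nodes):
    """True if this partition holds a strict majority of the configured nodes."""
    return len(CLUSTER & set(reachable_nodes)) > len(CLUSTER) / 2

# A 2-of-3 partition may continue (and fence the missing node);
# an isolated single node has to stop serving instead.
assert is_quorate({"glfs-a", "glfs-b"})
assert not is_quorate({"glfs-c"})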

>> You are also forgetting that the failure mode you are describing 
>> involves a previous failure, too. If A isn't in sync with B and B goes 
>> down, that means A went down first, but came back up.
> 
> I don't quite get the argument here. Isn't it intended that A comes back
> somehow? It should come back anyway, at least through admin intervention. Still,
> the service should be kept up and the original setup restored, or not?

I would argue that A shouldn't be allowed to become an active (or 
perhaps this could be relaxed to not being allowed to become the ONLY) 
participant in the cluster until it is fully up to date with its peer(s).
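
Expressed as a rule, it is tiny (purely illustrative; how "fully up to date"
gets determined is the hard part, see the dirty-list discussion further down):

def may_serve(fully_up_to_date, live_peers):
    """A rebooted node may participate while up-to-date peers are still
    around, but must not become the ONLY active server until it has
    caught up with them."""
    return fully_up_to_date or len(live_peers) > 0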

>>> The real hot topic here is how the time between the first node coming back and
>>> the second node going down is used for an optimal self heal procedure. The
>>> risk of split brain is lower the faster the self heal procedure works.
>> I'd say that any risk of split brain needs to be suitably addressed. A 
>> solution that includes fencing (to prevent split-brain from occurring in 
>> the first place) plus keeping a separate list of files that are "dirty" 
>> so they can be resynced explicitly before a node is allowed to fully 
>> re-join might be a reasonable way to go. This is similar to what DRBD 
>> does (it keeps a bitmap of dirty blocks for fast resync).
> 
> The re-join is implicit in glusterfs. For files not needing self-heal, the
> re-join happens as soon as glusterfsd comes back up. For files needing
> self-heal, the re-join takes place right after their healing. And you don't have
> to do anything; it is simply glusterfs behaviour.

The problem is that it lacks guarantees about the node providing service 
being up to date if a more up-to-date node goes down. This may be 
unacceptable in a lot of cases. The resync is done lazily on-access, so 
the only way to force a full resync is to issue ls -laR. As you pointed 
out, that can be very slow on a large data set, so having each node keep 
a list of dirty files for each disconnected node would provide a 
potentially quicker way to resync, in addition to providing a mechanism 
by which a decision could be made on whether a node is ready to take 
over the work if all of its peers were to go down (i.e. ensure that a 
node cannot provide service for a file that has been marked on it as 
dirty by another node).
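
To sketch the sort of bookkeeping I have in mind, assuming a flat per-peer
journal purely for illustration (the peer names and journal location are
made up, and the real thing would presumably live alongside AFR's own
changelog rather than in /var/lib):

import json
import os

JOURNAL_DIR = "/var/lib/glfs-dirty"          # hypothetical location

def _journal(peer):
    return os.path.join(JOURNAL_DIR, peer + ".list")

def mark_dirty(peer, path):
    """Record that 'path' was modified while 'peer' was unreachable."""
    os.makedirs(JOURNAL_DIR, exist_ok=True)
    with open(_journal(peer), "a") as fh:
        fh.write(json.dumps(path) + "\n")

def dirty_files(peer):
    """Files that still need to be resynced to 'peer'."""
    try:
        with open(_journal(peer)) as fh:
            return {json.loads(line) for line in fh}
    except FileNotFoundError:
        return set()

def resync(peer, mountpoint="/mnt/glfs"):
    """Walk only the dirty list instead of 'ls -laR' over the whole tree.
    A stat() through the glusterfs mount is what triggers self-heal for
    that particular file."""
    for relpath in sorted(dirty_files(peer)):
        try:
            os.stat(os.path.join(mountpoint, relpath.lstrip("/")))
        except OSError:
            pass    # the file may have been deleted since it was logged

The same list, seen from the other side, is what would let a node refuse to 
serve a file that a now-dead peer had marked as dirty against it.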

>>> It is obvious that the optimal strategy has to know exactly what files to
>>> heal. And I just made a proposal for that in another post.
>>> Doing ls -lR will be no good strategy for simple runtime reasons if you have
>>> large amounts of data.
>> I agree, although I'm pretty sure there can be failure modes where it is 
>> necessary.
> 
> Well, the good thing about it is, it's all your choice. If you want to check
> out the situation you can ls at any suitable time. But if runtime is a risk
> factor you need a dirty file list.

Definitely agree on the dirty file list.

>> Then again, if you have that big a data set, you should be 
>> partitioning it into smaller RAID1 mirrors with RAID0 striping on top. That 
>> way the time to resync any server to its peer is kept manageable. 
>> Simply running a 100TB mirror isn't sensible. Keeping 100 1TB mirrors is 
>> much more workable come resync time.
> 
> In my eyes design and implementation should allow both, and they should be
> equally manageable. And there must be no difference in resync time if you
> have an equal number of dirty files. Since glusterfs has kind of a local
> file-by-file design the total fs size should not make a difference.

I'm not so sure about that. Both should be implementable, but expecting 
both to be equally manageable from the performance and resilience 
perspective is misguided. You wouldn't prefer a RAID 01 over RAID 10, 
would you???
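
The point about keeping the mirrors small is easier to see with a toy
example. If files are distributed (hashed) across many small replica pairs,
losing one brick only ever dirties the files that happen to hash to that one
pair, so resync is bounded by one mirror's worth of data no matter how big
the volume gets. Brick names are made up, and this is not necessarily how
glusterfs places files:

import hashlib

# 100 small mirrors, RAID10-style; brick names are hypothetical.
REPLICA_PAIRS = [("brick%02da" % i, "brick%02db" % i) for i in range(100)]

def pair_for(path):
    """Pick the replica pair responsible for a path by hashing its name."""
    digest = hashlib.md5(path.encode("utf-8")).hexdigest()
    return REPLICA_PAIRS[int(digest, 16) % len(REPLICA_PAIRS)]

# After a brick in pair 7 dies, only the files mapping to that pair need
# resyncing; everything else is untouched, whatever the total volume size.
needs_resync = [p for p in ("/etc/a", "/var/b", "/srv/c")
                if pair_for(p) == REPLICA_PAIRS[7]]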

Gordan




