[Gluster-users] [ovirt-users] gluster split-brain on vm images volume

John Ewing johnewing1 at gmail.com
Wed Aug 6 14:30:23 UTC 2014


Hi Milos,

You can do this already by changing the baseurl to look like this. Note
the 3.4 between glusterfs and LATEST.

baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/epel-$releasever/$basearch/
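
For reference, the whole stanza in /etc/yum.repos.d/glusterfs-epel.repo would
then look roughly like this (the section name and the name/gpgcheck values
below are only illustrative, keep whatever your existing file already has):

[glusterfs-epel]
name=GlusterFS 3.4 packages for EPEL
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
gpgcheck=0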

I tend not to have yum auto-updates enabled on anything in production,
because even minor version upgrades can cause unforeseen problems.
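
If you do want to keep automatic updates switched on, one option (just a
sketch - adapt it to however your auto-update job is configured) is to
exclude the gluster packages so they are only ever upgraded by hand:

# in /etc/yum.conf: keep gluster out of unattended updates
exclude=glusterfs*

# then upgrade deliberately during a maintenance window
yum --disableexcludes=main update "glusterfs*"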

J.


On Mon, Aug 4, 2014 at 4:01 PM, Milos Kozak <milos.kozak at lejmr.com> wrote:

> Let me contribute to the discussion about the upgrade process. I ran into
> the same problem, though in my case it was on a testing setup. There the
> problem was caused by the automatic nightly upgrade, which I have enabled on
> my CentOS servers. Every time you release new RPMs my servers upgrade
> automatically - with a minor version that is usually not a problem, but with
> a major one it is.
>
> So I would like to suggest organizing the repository directory hierarchy by
> version, i.e. providing folders 3.4 / 3.5 / 3.6 / LATEST in your repository,
> as other projects do.
>
> This won't resolve this kind of issue by itself, but when you release 3.6 my
> servers will not upgrade automatically in the middle of the night.
>
> Thanks, Milos
>
>
> On 8/2/2014 4:37 PM, Pranith Kumar Karampuri wrote:
>
>>
>> On 08/03/2014 01:43 AM, Tiemen Ruiten wrote:
>>
>>> On 08/02/14 20:12, Pranith Kumar Karampuri wrote:
>>>
>>>> On 08/02/2014 06:50 PM, Tiemen Ruiten wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I'm cross-posting this from ovirt-users:
>>>>>
>>>>> I have an oVirt environment backed by a two-node Gluster cluster.
>>>>> Yesterday I decided to upgrade from GlusterFS 3.5.1 to 3.5.2, but that
>>>>> caused the gluster daemon to stop, and now I have several lines like
>>>>> this in my log for the volume that hosts the VM images, called vmimage:
>>>>>
>>>> Did the upgrade happen while the volume was still running?
>>>>
>>> Yes...
>>>
>> I guess we need to document the upgrade process if we haven't already.
>>
>>>>> [2014-08-02 12:56:20.994767] E
>>>>> [afr-self-heal-common.c:233:afr_sh_print_split_brain_log]
>>>>> 0-vmimage-replicate-0: Unable to self-heal contents of
>>>>> 'f09c211d-eb49-4715-8031-85a5a8f39f18' (possible split-brain). Please
>>>>> delete the file from all but the preferred subvolume.- Pending matrix:
>>>>> [ [ 0 408 ] [ 180 0 ] ]
>>>>>
>>>> This is the document that talks about how to resolve split-brains in
>>>> gluster.
>>>> https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
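>>>>
>>>> Roughly, the manual steps from that document look like this (the brick
>>>> path below is only an example - use your own brick path, and be sure
>>>> which copy you want to keep before deleting anything):
>>>>
>>>> # on each brick, inspect the changelog xattrs for the affected gfid
>>>> getfattr -d -m . -e hex \
>>>>   /export/brick1/vmimage/.glusterfs/f0/9c/f09c211d-eb49-4715-8031-85a5a8f39f18
>>>>
>>>> # on the brick whose copy you decide to discard, remove the gfid hard
>>>> # link (and the named file it points to, if it has one on that brick)
>>>> rm /export/brick1/vmimage/.glusterfs/f0/9c/f09c211d-eb49-4715-8031-85a5a8f39f18
>>>>
>>>> # then trigger a heal and check that the split-brain entry is gone
>>>> gluster volume heal vmimage
>>>> gluster volume heal vmimage info split-brain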
>>>>
>>> OK, I will try that.
>>>
>>>>> What I would like to do is the following, since I'm not 100% happy
>>>>> anyway with how the volume is set up:
>>>>>
>>>>> - Stop VDSM on the oVirt hosts / unmount the volume
>>>>> - Stop the current vmimage volume and rename it
>>>>>
>>>> Is this a gluster volume? Gluster volumes can't be renamed.
>>>>
>>> That surprises me: in the man page I find this:
>>>
>>> volume rename <VOLNAME> <NEW-VOLNAME>
>>>                Rename the specified volume.
>>>
>> One more documentation bug :-(
>>
>>>
>>>>> - Create a new vmimage volume
>>>>> - Copy the images from one of the nodes
>>>>>
>>>> Where will these images be copied to? Onto the gluster mount? If yes,
>>>> then there is no need to sync.
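>>>>
>>>> Just as a sketch of what that could look like (the volume name, hostnames,
>>>> brick paths and mount points below are made up, and replica 2 is assumed
>>>> to match your current setup):
>>>>
>>>> gluster volume create vmimage2 replica 2 \
>>>>   node1:/export/brick1/vmimage2 node2:/export/brick1/vmimage2
>>>> gluster volume start vmimage2
>>>> mount -t glusterfs node1:/vmimage2 /mnt/vmimage2
>>>> # copy from one node's old brick (skipping gluster's internal .glusterfs
>>>> # directory) into the FUSE mount, so both replicas get written in one go
>>>> rsync -a --exclude=.glusterfs /export/brick1/vmimage/ /mnt/vmimage2/
>>>> # afterwards, confirm nothing is pending heal
>>>> gluster volume heal vmimage2 info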
>>>>
>>> OK, I will try to resolve it with the guide for split-brain scenarios first.
>>>
>> Let us know if you have any doubts about this document.
>>
>>>>> - Start the volume and let it sync
>>>>> - Restart VDSM / mount the volume
>>>>>
>>>>> Is this going to work? Or is there critical metadata that will not be
>>>>> transferred with these steps?
>>>>>
>>>>> Tiemen
>>>>>
>>>>>
>>>>>

