[Gluster-users] Replace brick 3.4.2 with 3.6.2?
jgardeniers at objectmastery.com
Tue Feb 24 23:18:10 UTC 2015
Problem solved, more or less.
After reading Aytac's comment about 3.6.2 not yet being considered
stable, I removed it from the new node, removed /var/lib/glusterd/,
rebooted (just to be sure) and installed 3.5.3. After detaching and
re-probing the peer the replace-brick command worked and the volume is
currently happily undergoing a self-heal. At a later and more convenient
time I'll upgrade the 3.4.2 node to the same version. As previously
stated, I cannot upgrade the clients, so they will just have to stay
where they are.
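For the record, the recovery described above amounts to roughly the following; this is a sketch only, with the hostname (rigel), volume name (myvol) and brick paths as placeholders:

```shell
# On the new node: drop 3.6.2 and clear its stale daemon state
# (package manager commands vary by distro)
yum remove glusterfs-server
rm -rf /var/lib/glusterd/
# ... reboot, then install the glusterfs 3.5.3 packages ...

# From the existing node: re-establish the peer relationship
gluster peer detach rigel
gluster peer probe rigel
gluster peer status

# Replace the old brick with the brick on the new node
# (for replica volumes, "commit force" plus self-heal is the
# recommended form in the 3.5 series)
gluster volume replace-brick myvol \
    oldnode:/export/brick rigel:/export/brick commit force

# Watch the self-heal progress
gluster volume heal myvol info
```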
On 25/02/15 08:27, aytac zeren wrote:
> Hi John,
> 3.6.2 is a major release and introduces some new cluster-wide
> features. Additionally, it is not yet considered stable. The best
> approach would be to set up a separate 3.6.2 cluster, access the
> 3.4.0 cluster via NFS or the native client, and gradually copy the
> content over to the 3.6.2 cluster. As the volume on the 3.4.0
> cluster empties, you can remove the 3.4.0 members from the cluster,
> upgrade them and add them, with their bricks, to the 3.6.2 trusted
> pool. Please be careful while doing this, as the number of nodes in
> your cluster must remain consistent with your volume design
> (striped, replicated, distributed or a combination of them).
> Notice: I don't take any responsibility for actions you undertake
> based on my recommendations, as they are general and do not take
> your architectural design into account.
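The gradual migration aytac outlines could look roughly like this; a sketch only, with all hostnames, mount points and volume names as placeholders:

```shell
# Mount the old 3.4.0 volume and the new 3.6.2 volume side by side
# using the native client
mount -t glusterfs old-node:/oldvol /mnt/old
mount -t glusterfs new-node:/newvol /mnt/new

# Copy content across gradually; rsync can be re-run incrementally
# until the remaining delta is small
rsync -aHAX --progress /mnt/old/ /mnt/new/

# Once a 3.4.0 node has been freed, upgrade it, then add it to the
# new trusted pool, keeping the brick count consistent with the
# volume design (e.g. pairs of bricks for replica 2)
gluster peer probe freed-node
gluster volume add-brick newvol replica 2 \
    freed-node:/export/brick other-node:/export/brick
```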
> On Tue, Feb 24, 2015 at 11:19 PM, John Gardeniers
> <jgardeniers at objectmastery.com> wrote:
> Hi All,
> We have a gluster volume consisting of a single brick, using
> replica 2. Both nodes are currently running gluster 3.4.2 and I
> wish to replace one of the nodes with a new server (rigel), which
> has gluster 3.6.2 installed.
> Following this link:
> I tried to do a replace brick but got "volume replace-brick:
> failed: Host rigel is not in 'Peer in Cluster' state". Is this due
> to a version incompatibility or is it due to some other issue? A
> bit of googling reveals the error message in bug reports but I've
> not yet found anything that applies to this specific case.
> Incidentally, the clients (RHEV bare metal hypervisors, so we have
> no upgrade option) are running 3.4.0. Will this be a problem if
> the nodes are on 3.6.2?