[Gluster-users] Rebalance + VM corruption - current status and request for feedback

Mahdi Adnan mahdi.adnan at outlook.com
Sat May 20 09:17:41 UTC 2017


Good morning,


The SIG repository does not have the latest glusterfs 3.10.2 yet.

Do you have any idea when it will be updated?

Is there any other recommended place to get the latest RPMs?
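
For reference, packages from the CentOS Storage SIG are normally pulled in via the SIG's release package. A minimal sketch, assuming CentOS 7 and the 3.10-series release package name:

# yum install centos-release-gluster310
# yum install glusterfs-server

Until the SIG publishes 3.10.2, this still installs the previous 3.10.x build.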

--

Respectfully
Mahdi A. Mahdi

________________________________
From: Mahdi Adnan <mahdi.adnan at outlook.com>
Sent: Friday, May 19, 2017 6:14:05 PM
To: Krutika Dhananjay; gluster-user
Cc: Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier
Subject: Re: Rebalance + VM corruption - current status and request for feedback


Thank you so much mate.

I'll finish the test tomorrow and let you know the results.

--

Respectfully
Mahdi A. Mahdi

________________________________
From: Krutika Dhananjay <kdhananj at redhat.com>
Sent: Wednesday, May 17, 2017 6:59:20 AM
To: gluster-user
Cc: Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier; Mahdi Adnan
Subject: Rebalance + VM corruption - current status and request for feedback

Hi,

In the past couple of weeks, we've sent the following fixes for the VM corruption seen when doing a rebalance: https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051

These fixes are very much part of the latest 3.10.2 release.

Satheesaran at Red Hat has also verified that the fixes work; he is no longer seeing corruption issues.

I'd like to hear feedback on these fixes from the users themselves (on your test environments to begin with) before changing the status of the bug to CLOSED.

Although 3.10.2 has a patch that prevents rebalance sub-commands from being executed on sharded volumes, you can override the check by using the 'force' option.

For example,

# gluster volume rebalance myvol start force
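
Once it is running, progress and any failures can be checked with the status sub-command, and you can confirm that sharding is enabled on the volume with volume get. A short sketch, reusing the example volume name 'myvol':

# gluster volume rebalance myvol status
# gluster volume get myvol features.shard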

Very much looking forward to hearing from you all.

Thanks,
Krutika