<div dir="ltr">But even after that fix, it is still leading to pause. And these are the two updates on what the developers are doing as per my understanding. So that workflow is not stable yet IMO.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 27, 2017 at 4:51 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I think this is he fix Gandalf asking for:<br>
<a href="https://github.com/gluster/glusterfs/commit/6e3054b42f9aef1e35b493fbb002ec47e1ba27ce" rel="noreferrer" target="_blank">https://github.com/gluster/<wbr>glusterfs/commit/<wbr>6e3054b42f9aef1e35b493fbb002ec<wbr>47e1ba27ce</a><br>
<br>
<br>
On Thu, Apr 27, 2017 at 2:03 PM, Pranith Kumar Karampuri<br>
<div class="HOEnZb"><div class="h5"><<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
> I am very positive about the two things I told you. These are the latest<br>
> developments on the VM corruption during rebalance.<br>
><br>
> On Thu, Apr 27, 2017 at 4:30 PM, Gandalf Corvotempesta<br>
> <<a href="mailto:gandalf.corvotempesta@gmail.com">gandalf.corvotempesta@gmail.<wbr>com</a>> wrote:<br>
>><br>
>> I think we are talking about a different bug.<br>
>><br>
>> On 27 Apr 2017 at 12:58 PM, "Pranith Kumar Karampuri" <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>><br>
>> wrote:<br>
>>><br>
>>> I am not a DHT developer, so some of what I say could be a little wrong,<br>
>>> but this is what I gather.<br>
>>> I think they found two classes of bugs in DHT:<br>
>>> 1) Graceful fop failover while rebalance is in progress is missing for<br>
>>> some fops, which leads to VM pauses.<br>
>>><br>
>>> I see that <a href="https://review.gluster.org/17085" rel="noreferrer" target="_blank">https://review.gluster.org/17085</a> got merged on the 24th on master<br>
>>> for this. I see patches have been posted for 3.8.x for this one.<br>
>>><br>
>>> 2) I think some work still needs to be done for dht_[f]xattrop. I<br>
>>> believe this is the next step, and it is underway.<br>
>>><br>
>>><br>
>>> On Thu, Apr 27, 2017 at 12:13 PM, Gandalf Corvotempesta<br>
>>> <<a href="mailto:gandalf.corvotempesta@gmail.com">gandalf.corvotempesta@gmail.<wbr>com</a>> wrote:<br>
>>>><br>
>>>> Any updates on this critical bug?<br>
>>>><br>
>>>> On 18 Apr 2017 at 8:24 PM, "Gandalf Corvotempesta"<br>
>>>> <<a href="mailto:gandalf.corvotempesta@gmail.com">gandalf.corvotempesta@gmail.com</a>> wrote:<br>
>>>>><br>
>>>>> Any update?<br>
>>>>> In addition, if this is a different bug but the "workflow" is the same<br>
>>>>> as the previous one, how is it possible that fixing the previous bug<br>
>>>>> triggered this new one?<br>
>>>>><br>
>>>>> Is it possible to have some details?<br>
>>>>><br>
>>>>> 2017-04-04 16:11 GMT+02:00 Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>>:<br>
>>>>> > Nope. This is a different bug.<br>
>>>>> ><br>
>>>>> > -Krutika<br>
>>>>> ><br>
>>>>> > On Mon, Apr 3, 2017 at 5:03 PM, Gandalf Corvotempesta<br>
>>>>> > <<a href="mailto:gandalf.corvotempesta@gmail.com">gandalf.corvotempesta@gmail.<wbr>com</a>> wrote:<br>
>>>>> >><br>
>>>>> >> This is good news.<br>
>>>>> >> Is this related to the previously fixed bug?<br>
>>>>> >><br>
>>>>> >> On 3 Apr 2017 at 10:22 AM, "Krutika Dhananjay" <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >> wrote:<br>
>>>>> >>><br>
>>>>> >>> So Raghavendra has an RCA for this issue.<br>
>>>>> >>><br>
>>>>> >>> Copy-pasting his comment here:<br>
>>>>> >>><br>
>>>>> >>> <RCA><br>
>>>>> >>><br>
>>>>> >>> Following is a rough algorithm of shard_writev:<br>
>>>>> >>><br>
>>>>> >>> 1. Based on the offset, calculate the shards touched by the current<br>
>>>>> >>> write.<br>
>>>>> >>> 2. Look for the inodes corresponding to these shard files in the itable.<br>
>>>>> >>> 3. If one or more inodes are missing from the itable, issue mknod for the<br>
>>>>> >>> corresponding shard files and ignore EEXIST in the cbk.<br>
>>>>> >>> 4. Resume the writes on the respective shards.<br>
>>>>> >>><br>
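To make the four steps above concrete, here is a minimal, purely illustrative Python sketch of that write path; the helper names, the dict-based itable/backend and the shard naming are hypothetical stand-ins, not the actual shard translator code (which is C inside GlusterFS).<br>
<pre>
# Illustrative sketch only, not GlusterFS source.
SHARD_SIZE = 256 * 1024 * 1024   # matches features.shard-block-size: 256MB in the
                                 # volume info quoted further down this thread

def shard_writev(itable, backend, offset, length):
    # 1. Based on the offset, work out which shards the write touches.
    first = offset // SHARD_SIZE
    last = (offset + length - 1) // SHARD_SIZE
    names = ["shard_file.%d" % i for i in range(first, last + 1)]

    # 2. Look for the corresponding inodes in the inode table.
    missing = [n for n in names if n not in itable]

    # 3. mknod the missing shard files; EEXIST from the backend is ignored.
    for name in missing:
        backend.setdefault(name, 0)   # create-if-absent stands in for mknod + EEXIST
        itable[name] = name           # cache the "inode"

    # 4. Resume the write on each shard it spans (recorded as a simple counter).
    for name in names:
        backend[name] += 1
    return names

itable, backend = {}, {}
print(shard_writev(itable, backend, offset=300 * 1024 * 1024, length=4096))
# -> ['shard_file.1']: a 4 KB write at offset 300 MB falls in the second shard
</pre>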
>>>>> >>> Now, imagine a write which falls on an existing "shard_file". For the<br>
>>>>> >>> sake of discussion, let's consider a distribute volume of three subvols:<br>
>>>>> >>> s1, s2, s3.<br>
>>>>> >>><br>
>>>>> >>> 1. "shard_file" hashes to subvolume s2 and is present on s2.<br>
>>>>> >>> 2. Add a subvolume s4 and initiate a fix-layout. The layout of ".shard"<br>
>>>>> >>> is fixed to include s4 and the hash ranges are changed.<br>
>>>>> >>> 3. A write that touches "shard_file" is issued.<br>
>>>>> >>> 4. The inode for "shard_file" is not present in the itable after a graph<br>
>>>>> >>> switch, and features/shard issues an mknod.<br>
>>>>> >>> 5. With the new layout of .shard, let's say "shard_file" hashes to s3 and<br>
>>>>> >>> mknod(shard_file) on s3 succeeds. But the shard_file is already present<br>
>>>>> >>> on s2.<br>
>>>>> >>><br>
>>>>> >>> So we have two files on two different subvols of DHT representing the<br>
>>>>> >>> same shard, and this will lead to corruption.<br>
>>>>> >>><br>
>>>>> >>> </RCA><br>
>>>>> >>><br>
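The crux of the RCA is that a shard's location comes from hashing its name against the current layout of ".shard", and that layout changes once s4 is added and fix-layout runs. Below is a tiny, self-contained Python illustration of that effect; the crc32 hash and the even range split are stand-ins, not DHT's actual layout code.<br>
<pre>
# Illustrative only: the same shard name can hash to a different subvolume
# once the ".shard" layout is re-fixed after an add-brick.
import zlib

def hashed_subvol(name, subvols):
    # Stand-in for DHT's hash + layout lookup: split the 32-bit hash space
    # evenly across the subvolumes.
    h = zlib.crc32(name.encode()) & 0xffffffff
    width = (1 << 32) // len(subvols)
    return subvols[min(h // width, len(subvols) - 1)]

shard = "shard_file"   # the file from the RCA above
old = hashed_subvol(shard, ["s1", "s2", "s3"])        # layout it was created under
new = hashed_subvol(shard, ["s1", "s2", "s3", "s4"])  # layout after fix-layout

print(old, new)
# Whenever old != new, a fresh mknod after a graph switch lands on `new`
# while the original file still sits on `old`: two files for one shard,
# which is the corruption scenario described above.
</pre>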
>>>>> >>> Raghavendra will be sending out a patch in DHT to fix this issue.<br>
>>>>> >>><br>
>>>>> >>> -Krutika<br>
>>>>> >>><br>
>>>>> >>><br>
>>>>> >>> On Tue, Mar 28, 2017 at 11:49 PM, Pranith Kumar Karampuri<br>
>>>>> >>> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>>>>> >>>><br>
>>>>> >>>><br>
>>>>> >>>><br>
>>>>> >>>> On Mon, Mar 27, 2017 at 11:29 PM, Mahdi Adnan<br>
>>>>> >>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>><br>
>>>>> >>>> wrote:<br>
>>>>> >>>>><br>
>>>>> >>>>> Hi,<br>
>>>>> >>>>><br>
>>>>> >>>>><br>
>>>>> >>>>> Do you guys have any update regarding this issue?<br>
>>>>> >>>><br>
>>>>> >>>> I do not actively work on this issue, so I do not have an accurate<br>
>>>>> >>>> update, but this is what I heard from Krutika and Raghavendra (who<br>
>>>>> >>>> works on DHT):<br>
>>>>> >>>> Krutika debugged it initially and found that the issue is more likely<br>
>>>>> >>>> to be in DHT. Satheesaran, who helped us recreate this issue in the<br>
>>>>> >>>> lab, found that just a fix-layout without rebalance also caused the<br>
>>>>> >>>> corruption 1 out of 3 times.<br>
>>>>> >>>> Raghavendra came up with a possible RCA for why this can happen.<br>
>>>>> >>>> Raghavendra (CCed) would be the right person to provide an accurate<br>
>>>>> >>>> update.<br>
>>>>> >>>>><br>
>>>>> >>>>><br>
>>>>> >>>>><br>
>>>>> >>>>> --<br>
>>>>> >>>>><br>
>>>>> >>>>> Respectfully<br>
>>>>> >>>>> Mahdi A. Mahdi<br>
>>>>> >>>>><br>
>>>>> >>>>> ______________________________<wbr>__<br>
>>>>> >>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>> Sent: Tuesday, March 21, 2017 3:02:55 PM<br>
>>>>> >>>>> To: Mahdi Adnan<br>
>>>>> >>>>> Cc: Nithya Balachandran; Gowdappa, Raghavendra; Susant Palai;<br>
>>>>> >>>>> <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a> List<br>
>>>>> >>>>><br>
>>>>> >>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>> corruption<br>
>>>>> >>>>><br>
>>>>> >>>>> Hi,<br>
>>>>> >>>>><br>
>>>>> >>>>> So it looks like Satheesaran managed to recreate this issue. We<br>
>>>>> >>>>> will be<br>
>>>>> >>>>> seeking his help in debugging this. It will be easier that way.<br>
>>>>> >>>>><br>
>>>>> >>>>> -Krutika<br>
>>>>> >>>>><br>
>>>>> >>>>> On Tue, Mar 21, 2017 at 1:35 PM, Mahdi Adnan<br>
>>>>> >>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>><br>
>>>>> >>>>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Hello and thank you for your email.<br>
>>>>> >>>>>> Actually no, I didn't check the GFIDs of the VMs.<br>
>>>>> >>>>>> If it will help, I can set up a new test cluster and get all the<br>
>>>>> >>>>>> data you need.<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Nithya Balachandran<br>
>>>>> >>>>>> Sent: Monday, March 20, 20:57<br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>> To: Krutika Dhananjay<br>
>>>>> >>>>>> Cc: Mahdi Adnan, Gowdappa, Raghavendra, Susant Palai,<br>
>>>>> >>>>>> <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a> List<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Hi,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Do you know the GFIDs of the VM images which were corrupted?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Regards,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Nithya<br>
>>>>> >>>>>><br>
>>>>> >>>>>> On 20 March 2017 at 20:37, Krutika Dhananjay<br>
>>>>> >>>>>> <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> I looked at the logs.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From the time the new graph (from the add-brick command you shared,<br>
>>>>> >>>>>> where bricks 41 through 44 are added) is switched to (line 3011<br>
>>>>> >>>>>> onwards in nfs-gfapi.log), I see the following kinds of errors:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> 1. Lookups on a bunch of files failed with ENOENT on both replicas,<br>
>>>>> >>>>>> which protocol/client converts to ESTALE. I am guessing these entries<br>
>>>>> >>>>>> got migrated to other subvolumes, leading to 'No such file or<br>
>>>>> >>>>>> directory' errors.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> DHT and thereafter shard get the same error code and log the<br>
>>>>> >>>>>> following:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> 0 [2017-03-17 14:04:26.353444] E [MSGID: 109040]<br>
>>>>> >>>>>> [dht-helper.c:1198:dht_migration_complete_check_task] 17-vmware2-dht:<br>
>>>>> >>>>>> <gfid:a68ce411-e381-46a3-93cd-d2af6a7c3532>: failed to lookup the file<br>
>>>>> >>>>>> on vmware2-dht [Stale file handle]<br>
>>>>> >>>>>> 1 [2017-03-17 14:04:26.353528] E [MSGID: 133014]<br>
>>>>> >>>>>> [shard.c:1253:shard_common_stat_cbk] 17-vmware2-shard: stat failed:<br>
>>>>> >>>>>> a68ce411-e381-46a3-93cd-d2af6a7c3532 [Stale file handle]<br>
>>>>> >>>>>><br>
>>>>> >>>>>> which is fine.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> 2. The other kind is AFR logging possible split-brain, which I<br>
>>>>> >>>>>> suppose is harmless too:<br>
>>>>> >>>>>> [2017-03-17 14:23:36.968883] W [MSGID: 108008]<br>
>>>>> >>>>>> [afr-read-txn.c:228:afr_read_txn] 17-vmware2-replicate-13: Unreadable<br>
>>>>> >>>>>> subvolume -1 found with event generation 2 for gfid<br>
>>>>> >>>>>> 74d49288-8452-40d4-893e-ff4672557ff9. (Possible split-brain)<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Since you are saying the bug is hit only on VMs that are undergoing<br>
>>>>> >>>>>> IO while rebalance is running (as opposed to those that remained<br>
>>>>> >>>>>> powered off), rebalance + IO could be causing some issues.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> CC'ing DHT devs<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Raghavendra/Nithya/Susant,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Could you take a look?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sun, Mar 19, 2017 at 4:55 PM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>><br>
>>>>> >>>>>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Thank you for your email mate.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Yes, I'm aware of this, but to save costs I chose replica 2; this<br>
>>>>> >>>>>> cluster is all flash.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> In version 3.7.x I had issues with the ping timeout: if one host went<br>
>>>>> >>>>>> down for a few seconds, the whole cluster would hang and become<br>
>>>>> >>>>>> unavailable. To avoid this I adjusted the ping timeout to 5 seconds.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> As for choosing Ganesha over gfapi, VMware does not support Gluster<br>
>>>>> >>>>>> (FUSE or gfapi), so I'm stuck with NFS for this volume.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> The other volume is mounted using gfapi in the oVirt cluster.<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>> Sent: Sunday, March 19, 2017 2:01:49 PM<br>
>>>>> >>>>>><br>
>>>>> >>>>>> To: Mahdi Adnan<br>
>>>>> >>>>>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> While I'm still going through the logs, I just wanted to point out a<br>
>>>>> >>>>>> couple of things:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> 1. It is recommended that you use 3-way replication (replica count 3)<br>
>>>>> >>>>>> for the VM store use case.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> 2. network.ping-timeout at 5 seconds is way too low. Please change it<br>
>>>>> >>>>>> to 30.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Is there any specific reason for using NFS-Ganesha over<br>
>>>>> >>>>>> gfapi/FUSE?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Will get back with anything else I might find or more questions<br>
>>>>> >>>>>> if I<br>
>>>>> >>>>>> have any.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sun, Mar 19, 2017 at 2:36 PM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>><br>
>>>>> >>>>>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Thanks mate,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Kindly check the attachment.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>> Sent: Sunday, March 19, 2017 10:00:22 AM<br>
>>>>> >>>>>><br>
>>>>> >>>>>> To: Mahdi Adnan<br>
>>>>> >>>>>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> In that case could you share the ganesha-gfapi logs?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sun, Mar 19, 2017 at 12:13 PM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> I have two volumes: one is mounted using libgfapi for the oVirt<br>
>>>>> >>>>>> mount, the other one is exported via NFS-Ganesha for VMware, which is<br>
>>>>> >>>>>> the one I'm testing now.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>> Sent: Sunday, March 19, 2017 8:02:19 AM<br>
>>>>> >>>>>><br>
>>>>> >>>>>> To: Mahdi Adnan<br>
>>>>> >>>>>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Kindly check the attached new log file. I don't know if it's helpful<br>
>>>>> >>>>>> or not, but I couldn't find the log with the name you just described.<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> No. Are you using FUSE or libgfapi for accessing the volume? Or<br>
>>>>> >>>>>> is it<br>
>>>>> >>>>>> NFS?<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>> Sent: Saturday, March 18, 2017 6:10:40 PM<br>
>>>>> >>>>>><br>
>>>>> >>>>>> To: Mahdi Adnan<br>
>>>>> >>>>>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> mnt-disk11-vmware2.log seems like a brick log. Could you attach the<br>
>>>>> >>>>>> fuse mount logs? It should be right under the /var/log/glusterfs/<br>
>>>>> >>>>>> directory, named after the mount point, only hyphenated.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>><br>
>>>>> >>>>>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Hello Krutika,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Kindly check the attached logs.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>> From: Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> Sent: Saturday, March 18, 2017 3:29:03 PM<br>
>>>>> >>>>>> To: Mahdi Adnan<br>
>>>>> >>>>>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>>>>> >>>>>> Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs<br>
>>>>> >>>>>> corruption<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> Hi Mahdi,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Could you attach mount, brick and rebalance logs?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> -Krutika<br>
>>>>> >>>>>><br>
>>>>> >>>>>> On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan<br>
>>>>> >>>>>> <<a href="mailto:mahdi.adnan@outlook.com">mahdi.adnan@outlook.com</a>> wrote:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Hi,<br>
>>>>> >>>>>><br>
>>>>> >>>>>> I upgraded to Gluster 3.8.10 today and ran the add-brick procedure<br>
>>>>> >>>>>> on a volume containing a few VMs.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> After the rebalance completed, I rebooted the VMs; some of them ran<br>
>>>>> >>>>>> just fine, and others just crashed.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Windows boots to recovery mode, and Linux throws XFS errors and does<br>
>>>>> >>>>>> not boot.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> I ran the test again and it happened just like the first time, but I<br>
>>>>> >>>>>> noticed that only VMs doing disk IO are affected by this bug.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> The VMs that were powered off started fine, and even the md5 of the<br>
>>>>> >>>>>> disk file did not change after the rebalance.<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Can anyone else confirm this?<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Volume info:<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> Volume Name: vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Type: Distributed-Replicate<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Status: Started<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Snapshot Count: 0<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Number of Bricks: 22 x 2 = 44<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Transport-type: tcp<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Bricks:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick1: gluster01:/mnt/disk1/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick2: gluster03:/mnt/disk1/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick3: gluster02:/mnt/disk1/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick4: gluster04:/mnt/disk1/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick5: gluster01:/mnt/disk2/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick6: gluster03:/mnt/disk2/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick7: gluster02:/mnt/disk2/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick8: gluster04:/mnt/disk2/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick9: gluster01:/mnt/disk3/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick10: gluster03:/mnt/disk3/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick11: gluster02:/mnt/disk3/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick12: gluster04:/mnt/disk3/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick13: gluster01:/mnt/disk4/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick14: gluster03:/mnt/disk4/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick15: gluster02:/mnt/disk4/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick16: gluster04:/mnt/disk4/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick17: gluster01:/mnt/disk5/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick18: gluster03:/mnt/disk5/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick19: gluster02:/mnt/disk5/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick20: gluster04:/mnt/disk5/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick21: gluster01:/mnt/disk6/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick22: gluster03:/mnt/disk6/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick23: gluster02:/mnt/disk6/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick24: gluster04:/mnt/disk6/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick25: gluster01:/mnt/disk7/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick26: gluster03:/mnt/disk7/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick27: gluster02:/mnt/disk7/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick28: gluster04:/mnt/disk7/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick29: gluster01:/mnt/disk8/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick30: gluster03:/mnt/disk8/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick31: gluster02:/mnt/disk8/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick32: gluster04:/mnt/disk8/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick33: gluster01:/mnt/disk9/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick34: gluster03:/mnt/disk9/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick35: gluster02:/mnt/disk9/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick36: gluster04:/mnt/disk9/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick37: gluster01:/mnt/disk10/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick38: gluster03:/mnt/disk10/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick39: gluster02:/mnt/disk10/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick40: gluster04:/mnt/disk10/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick41: gluster01:/mnt/disk11/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick42: gluster03:/mnt/disk11/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick43: gluster02:/mnt/disk11/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Brick44: gluster04:/mnt/disk11/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Options Reconfigured:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.server-quorum-type: server<br>
>>>>> >>>>>><br>
>>>>> >>>>>> nfs.disable: on<br>
>>>>> >>>>>><br>
>>>>> >>>>>> performance.readdir-ahead: on<br>
>>>>> >>>>>><br>
>>>>> >>>>>> transport.address-family: inet<br>
>>>>> >>>>>><br>
>>>>> >>>>>> performance.quick-read: off<br>
>>>>> >>>>>><br>
>>>>> >>>>>> performance.read-ahead: off<br>
>>>>> >>>>>><br>
>>>>> >>>>>> performance.io-cache: off<br>
>>>>> >>>>>><br>
>>>>> >>>>>> performance.stat-prefetch: off<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.eager-lock: enable<br>
>>>>> >>>>>><br>
>>>>> >>>>>> network.remote-dio: enable<br>
>>>>> >>>>>><br>
>>>>> >>>>>> features.shard: on<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.data-self-heal-algorithm: full<br>
>>>>> >>>>>><br>
>>>>> >>>>>> features.cache-invalidation: on<br>
>>>>> >>>>>><br>
>>>>> >>>>>> ganesha.enable: on<br>
>>>>> >>>>>><br>
>>>>> >>>>>> features.shard-block-size: 256MB<br>
>>>>> >>>>>><br>
>>>>> >>>>>> client.event-threads: 2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> server.event-threads: 2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.favorite-child-policy: size<br>
>>>>> >>>>>><br>
>>>>> >>>>>> storage.build-pgfid: off<br>
>>>>> >>>>>><br>
>>>>> >>>>>> network.ping-timeout: 5<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.enable-shared-storage: enable<br>
>>>>> >>>>>><br>
>>>>> >>>>>> nfs-ganesha: enable<br>
>>>>> >>>>>><br>
>>>>> >>>>>> cluster.server-quorum-ratio: 51%<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Adding bricks:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> gluster volume add-brick vmware2 replica 2<br>
>>>>> >>>>>> gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2<br>
>>>>> >>>>>> gluster02:/mnt/disk11/vmware2 gluster04:/mnt/disk11/vmware2<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Starting fix-layout:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> gluster volume rebalance vmware2 fix-layout start<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Starting rebalance:<br>
>>>>> >>>>>><br>
>>>>> >>>>>> gluster volume rebalance vmware2 start<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>> --<br>
>>>>> >>>>>><br>
>>>>> >>>>>> Respectfully<br>
>>>>> >>>>>> Mahdi A. Mahdi<br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>>><br>
>>>>> >>>>><br>
>>>>> >>>>><br>
>>>>> >>>><br>
>>>>> >>>><br>
>>>>> >>>><br>
>>>>> >>>><br>
>>>>> >>>> --<br>
>>>>> >>>> Pranith<br>
>>>>> >>><br>
>>>>> >>><br>
>>>>> >>><br>
>>>>> ><br>
>>>>> ><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> --<br>
>>> Pranith<br>
><br>
><br>
><br>
><br>
> --<br>
> Pranith<br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div>