<div dir="ltr"><div dir="ltr">Hi Anthony,</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 8, 2021 at 6:11 PM Anthony Hoppe <<a href="mailto:anthony@vofr.net">anthony@vofr.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div>Hi Xavi,<br></div><div><br></div><div>I am working with a distributed-replicated volume. What I've been doing is copying the shards from each node to their own "recovery" directory, discarding shards that are 0 bytes, then comparing the remainder and combining unique shards into a common directory. Then I'd build a list with the shards sorted numerically, add the "main file" to the top of the list, and have cat run through the list. I had one pair of shards that diff told me were not equal, even though their byte sizes matched. In that case I'm not sure which is the "correct" shard, so I'd note it and just pick one, with the intention of circling back if cat'ing things together didn't work out...which so far it hasn't.<br></div></div></div></blockquote><div><br></div><div>If a shard has different contents on different bricks, it probably has a pending heal. If it's a replica 3 volume, most likely 2 of the 3 copies will match, and those should be the "good" version. Otherwise you will need to check the stat output and extended attributes of the files on each brick to decide which one is best.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div></div><div><br></div><div>How can I identify if a shard is not full size? I haven't checked every single shard, but they seem to be 64 MB in size. 
Would that mean I need to make sure all shards but the last are 64 MB? I suspect this might be my issue.<br></div></div></div></blockquote><div><br></div><div>If you are using the default shard size, they should be 64 MiB (i.e. 67108864 bytes). Any file smaller than that (including the main file, but not the last shard) must be expanded to this size (truncate -s 67108864 <file>). All shards must exist (from 1 to the last number). If one is missing, you need to create it (touch <file> && truncate -s 67108864 <file>).</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div></div><div><br></div><div>Also, is shard 0 what would appear as the actual file (so largefile.raw or whatever)? It seems in my scenario these files are ~48 MB. I assume that means I need to extend them to 64 MB?</div></div></div></blockquote><div><br></div><div>Yes, shard 0 is the main file, and it also needs to be extended to 64 MiB.</div><div><br></div><div>Regards,</div><div><br></div><div>Xavi</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div><br></div><div>This is all great information. 
Thanks!<br></div><div><br></div><div>~ Anthony<br></div><div><br></div><div><br></div><hr id="gmail-m_7853114008572755822zwchr"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>"Xavi Hernandez" <<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>><br><b>To: </b>"anthony" <<a href="mailto:anthony@vofr.net" target="_blank">anthony@vofr.net</a>><br><b>Cc: </b>"gluster-users" <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br><b>Sent: </b>Wednesday, September 8, 2021 1:57:51 AM<br><b>Subject: </b>Re: [Gluster-users] Recovering from remove-brick where shards did not rebalance<br></blockquote></div><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div dir="ltr"><div dir="ltr">Hi Anthony,</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Sep 7, 2021 at 8:20 PM Anthony Hoppe <<a href="mailto:anthony@vofr.net" rel="nofollow noopener noreferrer" target="_blank">anthony@vofr.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div>I am currently playing with concatenating main file + shards together. Is it safe to assume that a shard with the same ID and sequence number (5da7d7b9-7ff3-48d2-8dcd-4939364bda1f.242 for example) is identical across bricks? That is, I can copy all the shards into a single location overwriting and/or discarding duplicates, then concatenate them together in order? 
Or is it more complex?<br></div></div></div></blockquote><br><div>Assuming it's a replicated volume, a given shard should appear on all bricks of the same replicated subvolume. If there were no pending heals, they should all have the same contents (you can easily check that by running md5sum, or similar, on each file).</div><br><div>On distributed-replicated volumes it's possible to have the same shard on two different subvolumes. In this case one of the subvolumes contains the real file, and the other a special 0-byte file with mode '---------T'. You need to take the real file and ignore the second one.</div><br><div>Shards may be smaller than the shard size. In this case you should extend the shard to the shard size before concatenating it with the rest of the shards (for example using "truncate -s"). The last shard may be smaller; it doesn't need to be extended.</div><br><div>Once you have all the shards, you can concatenate them. Note that the first shard of a file (or shard 0) is not inside the .shard directory. 
You must take it from the location where the file is normally seen.</div><br><div>Regards,</div><br><div>Xavi</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><br><hr id="gmail-m_7853114008572755822gmail-m_-7864145879939946001zwchr"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><b>From: </b>"anthony" <<a href="mailto:anthony@vofr.net" rel="nofollow noopener noreferrer" target="_blank">anthony@vofr.net</a>><br><b>To: </b>"gluster-users" <<a href="mailto:gluster-users@gluster.org" rel="nofollow noopener noreferrer" target="_blank">gluster-users@gluster.org</a>><br><b>Sent: </b>Tuesday, September 7, 2021 10:18:07 AM<br><b>Subject: </b>Re: [Gluster-users] Recovering from remove-brick where shards did not rebalance<br></blockquote></div><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div>I've been playing with re-adding the bricks and here is some interesting behavior.</div><br><div>When I try to force add the bricks to the volume while it's running, I get complaints about one of the bricks already being a member of a volume. If I stop the volume, I can then force-add the bricks. However, the volume won't start without force. 
Once the volume is force-started, all of the bricks remain offline.<br></div><br><div>I feel like I'm close...but not quite there...<br></div><br><hr id="gmail-m_7853114008572755822gmail-m_-7864145879939946001zwchr"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><b>From: </b>"anthony" <<a href="mailto:anthony@vofr.net" rel="nofollow noopener noreferrer" target="_blank">anthony@vofr.net</a>><br><b>To: </b>"Strahil Nikolov" <<a href="mailto:hunter86_bg@yahoo.com" rel="nofollow noopener noreferrer" target="_blank">hunter86_bg@yahoo.com</a>><br><b>Cc: </b>"gluster-users" <<a href="mailto:gluster-users@gluster.org" rel="nofollow noopener noreferrer" target="_blank">gluster-users@gluster.org</a>><br><b>Sent: </b>Tuesday, September 7, 2021 7:45:44 AM<br><b>Subject: </b>Re: [Gluster-users] Recovering from remove-brick where shards did not rebalance<br></blockquote></div><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:rgb(0,0,0)"><div>I was contemplating these options, actually, but I couldn't find anything in my research showing someone had tried either one, which gave me pause.</div><br><div>One thing I wasn't sure about when doing a force add-brick was whether gluster would wipe the existing data from the added bricks. Sounds like that may not be the case?</div><br><div>With regard to concatenating the main file + shards, how would I go about identifying the shards that pair with the main file? 
I see the shards have sequence numbers, but I'm not sure how to match the identifier to the main file.</div><br><div>Thanks!!</div><br><hr id="gmail-m_7853114008572755822gmail-m_-7864145879939946001zwchr"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><b>From: </b>"Strahil Nikolov" <<a href="mailto:hunter86_bg@yahoo.com" rel="nofollow noopener noreferrer" target="_blank">hunter86_bg@yahoo.com</a>><br><b>To: </b>"anthony" <<a href="mailto:anthony@vofr.net" rel="nofollow noopener noreferrer" target="_blank">anthony@vofr.net</a>>, "gluster-users" <<a href="mailto:gluster-users@gluster.org" rel="nofollow noopener noreferrer" target="_blank">gluster-users@gluster.org</a>><br><b>Sent: </b>Tuesday, September 7, 2021 6:02:36 AM<br><b>Subject: </b>Re: [Gluster-users] Recovering from remove-brick where shards did not rebalance<br></blockquote></div><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt">The data should be recoverable by concatenating the main file with all shards. 
Then you can copy the data back via the FUSE mount point.<div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019624966"><br></div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019625189">I think that some users reported that add-brick with the force option makes it possible to 'undo' the situation and 're-add' the data, but I have never tried that and I cannot guarantee that it will even work.</div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019671394"><br></div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019671588">The simplest way is to recover from a recent backup, but sometimes this leads to data loss.</div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019697845"><br></div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019698066">Best Regards,</div><div id="gmail-m_7853114008572755822gmail-m_-7864145879939946001yMail_cursorElementTracker_1631019701411">Strahil Nikolov<br> <br> <blockquote style="margin:0px 0px 20px"> <div style="font-family:roboto,sans-serif;color:rgb(109,0,246)"> <div>On Tue, Sep 7, 2021 at 9:29, Anthony Hoppe</div><div><<a href="mailto:anthony@vofr.net" rel="nofollow noopener noreferrer" target="_blank">anthony@vofr.net</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> <div dir="ltr">Hello,<br></div><div dir="ltr"><br></div><div dir="ltr">I did a bad thing and ran a remove-brick on a set of bricks in a distributed-replicate volume where rebalancing did not successfully rebalance all files. In sleuthing around the various bricks on the 3-node pool, it appears that a number of the files within the volume may have been stored as shards. 
With that, I'm unsure how to proceed with recovery.<br></div><div dir="ltr"><br></div><div dir="ltr">Is it possible to re-add the removed bricks somehow and then do a heal? Or is there a way to recover data from shards somehow?<br></div><div dir="ltr"><br></div><div dir="ltr">Thanks!<br></div><div dir="ltr">________<br></div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr">Community Meeting Calendar:<br></div><div dir="ltr"><br></div><div dir="ltr">Schedule -<br></div><div dir="ltr">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br></div><div dir="ltr">Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" rel="nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br></div><div dir="ltr">Gluster-users mailing list<br></div><div dir="ltr"><a href="mailto:Gluster-users@gluster.org" rel="nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer" target="_blank">Gluster-users@gluster.org</a><br></div><div dir="ltr"><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer nofollow noopener noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div> </div> </blockquote></div></blockquote></div></div></blockquote></div></div><br></blockquote></div></div></div>________<br>
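For reference, the reassembly steps discussed in this thread (pad the main file and every shard except the last to the shard size, then concatenate in numeric order) can be sketched as a small shell helper. This is only an illustration, not a gluster tool: the reassemble function, its arguments, and the recovery paths are hypothetical; it assumes recovered shards are named GFID.1, GFID.2, ... and uses plain coreutils.

```shell
# Hypothetical helper sketching the recovery procedure from this thread.
# Only coreutils are used (cp, truncate, cat, seq); nothing here is a
# gluster command. Assumes shards named GFID.N in one recovery directory.
reassemble() {
    shard_size=$1   # gluster default: 67108864 bytes (64 MiB)
    main_file=$2    # "shard 0": the file taken from its normal path
    shard_dir=$3    # directory holding the recovered shards
    gfid=$4         # GFID prefix shared by the shard file names
    out=$5          # reassembled output file

    # Shard 0 (the main file) must be padded to the full shard size too.
    cp "$main_file" "$out"
    truncate -s "$shard_size" "$out"

    # Highest shard number present in the recovery directory.
    last=$(ls "$shard_dir/$gfid".* | sed 's/.*\.//' | sort -n | tail -1)

    for n in $(seq 1 "$last"); do
        shard="$shard_dir/$gfid.$n"
        [ -e "$shard" ] || : > "$shard"          # missing shard: create it
        if [ "$n" -lt "$last" ]; then
            truncate -s "$shard_size" "$shard"   # pad all but the last shard
        fi
        cat "$shard" >> "$out"
    done
}
```

It would be called, for example, as: reassemble 67108864 /recovery/largefile.raw /recovery/shards 5da7d7b9-7ff3-48d2-8dcd-4939364bda1f /recovery/largefile.restored (paths purely illustrative). The result should still be verified before being copied back over the FUSE mount.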
</blockquote></div></div><br></blockquote></div></div></div></blockquote></div></div>