Best case scenario, you just mount via FUSE on the 'dead' node and start copying.<div id="yMail_cursorElementTracker_1630233660593"><br></div><div id="yMail_cursorElementTracker_1630233660975">Yet, in your case you don't have enough space. I guess you can try on 2 VMs to simulate the failure, rebuild and then forcefully re-add the old brick. It might work, it might not ... at least it's worth trying.</div><div id="yMail_cursorElementTracker_1630233724766"><br></div><div id="yMail_cursorElementTracker_1630233724947"><br></div><div id="yMail_cursorElementTracker_1630233725157">Best Regards,</div><div id="yMail_cursorElementTracker_1630233728675">Strahil Nikolov<br><br><div id="ymail_android_signature"><a id="ymail_android_signature_link" href="https://go.onelink.me/107872968?pid=InProduct&c=Global_Internal_YGrowth_AndroidEmailSig__AndroidUsers&af_wl=ym&af_sub1=Internal&af_sub2=Global_YGrowth&af_sub3=EmailSignature">Sent from Yahoo Mail on Android</a></div> <br> <blockquote style="margin: 0 0 20px 0;"> <div style="font-family:Roboto, sans-serif; color:#6D00F6;"> <div>On Thu, Aug 26, 2021 at 15:27, Taste-Of-IT</div><div><kontakt@taste-of-it.de> wrote:</div> </div> <div style="padding: 10px 0 0 20px; margin: 10px 0 0 0; border-left: 1px solid #6D00F6;"> Hi,<br clear="none">what do you mean? Copy the data from dead node to runnig node and than add the new installed node to existing vol1, after that running rebalance? If so, this is not possible, because node1 has not enough free space to take all from node2.<br clear="none"><br clear="none">thx<br clear="none"><br clear="none">Am 22.08.2021 18:35:33, schrieb Strahil Nikolov:<br clear="none">> Hi,<br clear="none">> <br clear="none">> the best way is to copy the files over the FUSE mount and later add the bricks and rebalance.<br clear="none">> Best Regards,Strahil Nikolov<br clear="none">> <br clear="none">> Sent from Yahoo Mail on Android <br clear="none">>  <br clear="none">>   On Thu, Aug 19, 2021 at 23:04, Taste-Of-IT<<a shape="rect" ymailto="mailto:kontakt@taste-of-it.de" href="mailto:kontakt@taste-of-it.de">kontakt@taste-of-it.de</a>> wrote:   Hello,<br clear="none">> <br clear="none">> i have two nodes with a distributed volume. OS is on a separate disk which crashed on one node. However i can reinstall the os and the raid6 which is used vor the distributed volume was rebuild. The question now is, how to re-add the brick with the volume back to the existing old volume. <br clear="none">> <br clear="none">> If this is not possible what is with this idea: i create a new vol2 with distributed over both nodes and move the files direkt from directory to new volume via nfs-ganesha share?!<br clear="none">> <br clear="none">> thx<br clear="none">> ________<br clear="none">> <br clear="none">> <br clear="none">> <br clear="none">> Community Meeting Calendar:<br clear="none">> <br clear="none">> Schedule -<br clear="none">> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">> Bridge: <a shape="rect" href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">> Gluster-users mailing list<br clear="none">> <a shape="rect" ymailto="mailto:Gluster-users@gluster.org" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none">> <a shape="rect" href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><div class="yqt3172420255" id="yqtfd54420"><br clear="none">>   <br clear="none">><br clear="none">________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a shape="rect" href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a shape="rect" ymailto="mailto:Gluster-users@gluster.org" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a shape="rect" href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"></div> </div> </blockquote></div>