<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div><br></div><div>Yes, please file a bug to track this issue and to share information.<br></div><div>Please also attach the logs from /var/log/messages, especially the mount log (named mnt.log or similar).<br></div><div><br></div><div>Here are the points I would like to bring to your attention:<br></div><div><br></div><div>1 - Are you sure that all the bricks are up?<br></div><div>2 - Are there any connectivity issues?<br></div><div>3 - It is possible that a bug caused the crash, so please check for a core dump created at the point the mount failed with the ENOTCONN error.<br></div><div>4 - I am not very familiar with armhf and have not run glusterfs on this hardware, so we need to check whether anything in the code prevents glusterfs from running on this architecture and setup.<br></div><div>5 - Please provide the output of gluster v info and gluster v status for the volume in the BZ.<br></div><div><br></div><div>---<br></div><div>Ashish<br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Fox" <foxxz.net@gmail.com><br><b>To: </b>gluster-users@gluster.org<br><b>Sent: </b>Friday, August 3, 2018 9:51:30 AM<br><b>Subject: </b>[Gluster-users] Disperse volumes on armhf<br><div><br></div><div dir="ltr"><div>Just wondering if anyone else is running into the disperse-volume behavior described below, and what I might be able to do about it.</div><div><br></div><div>I am using Ubuntu 18.04 LTS on Odroid HC-2 hardware (armhf) and have installed gluster 4.1.2 via PPA. I have 12 member nodes, each with a single brick. 
I can successfully create a working volume with the command:</div><div><br></div><div>gluster volume create testvol1 disperse 12 redundancy 4 gluster01:/exports/sda/brick1/testvol1 gluster02:/exports/sda/brick1/testvol1 gluster03:/exports/sda/brick1/testvol1 gluster04:/exports/sda/brick1/testvol1 gluster05:/exports/sda/brick1/testvol1 gluster06:/exports/sda/brick1/testvol1 gluster07:/exports/sda/brick1/testvol1 gluster08:/exports/sda/brick1/testvol1 gluster09:/exports/sda/brick1/testvol1 gluster10:/exports/sda/brick1/testvol1 gluster11:/exports/sda/brick1/testvol1 gluster12:/exports/sda/brick1/testvol1</div><div><br></div><div>And start the volume:</div><div><br></div><div>gluster volume start testvol1<br></div><div><br></div><div>Mounted on an x86-64 system, the volume performs as expected.</div><div><br></div><div>Mounting the same volume on an armhf system (such as one of the cluster members), I can create directories, but trying to create a file produces an error and the file system unmounts/crashes:</div><div>root@gluster01:~# mount -t glusterfs gluster01:/testvol1 /mnt<br>root@gluster01:~# cd /mnt<br>root@gluster01:/mnt# ls<br>root@gluster01:/mnt# mkdir test<br>root@gluster01:/mnt# cd test<br></div><div>root@gluster01:/mnt/test# cp /root/notes.txt ./<br>cp: failed to close './notes.txt': Software caused connection abort<br>root@gluster01:/mnt/test# ls<br>ls: cannot open directory '.': Transport endpoint is not connected</div><div><br></div><div>I get many messages like this in glusterfsd.log:<br></div><div>The message "W [MSGID: 101088] [common-utils.c:4316:gf_backtrace_save] 0-management: Failed to save the backtrace." 
repeated 100 times between [2018-08-03 04:06:39.904166] and [2018-08-03 04:06:57.521895]<br></div><div><br></div><div><br></div><div>Furthermore, if a cluster member drops out (reboots, loses connection, etc.) and needs healing, the self-heal daemon logs messages similar to the one above and cannot heal: there is no disk activity (verified via iotop) but very high CPU usage, and the volume heal info command indicates the volume still needs healing.</div><div><br></div><div><br></div><div>I tested all of the above in virtual environments using x86-64 VMs, and self-heal worked as expected.</div><div><br></div><div>Again, this only happens when using disperse volumes. Should I be filing a bug report instead?<br></div></div><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>https://lists.gluster.org/mailman/listinfo/gluster-users</div><div><br></div></div></body></html>