That is not normal.
Which version are you using?

Can you provide the output from all bricks (including the arbiter):

getfattr -d -m . -e hex /BRICK/PATH/TO/output_21

Troubleshooting and restoring the files should be your secondary task: focus on stabilizing the cluster first.

First, enable debug logging for the bricks, if you have the space, to troubleshoot the dying bricks (see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/configuring_the_log_level ).
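Something like this should do it (assuming the volume is the "cluster_data" one from your logs; DEBUG is very verbose, so switch back to INFO once you have caught a brick dying):

# raise the brick log level, wait for a brick to die, then check /var/log/glusterfs/bricks/
gluster volume set cluster_data diagnostics.brick-log-level DEBUG
# ...reproduce / wait...
gluster volume set cluster_data diagnostics.brick-log-level INFO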
Best Regards,
Strahil Nikolov


On Mon, Feb 13, 2023 at 13:21, Diego Zuccato <diego.zuccato@unibo.it> wrote:

My volume is replica 3 arbiter 1, maybe that makes a difference?
Brick processes tend to die quite often (I have to restart glusterd at
least once a day because "gluster v info | grep ' N '" reports at least
one missing brick; sometimes, even if all bricks are reported up, I have
to kill all glusterfs[d] processes and restart glusterd).

The 3 servers have 192GB RAM (that should be way more than enough!), 30
data bricks and 15 arbiters (the arbiters share a single SSD).

And I noticed that some "stale file handle" errors are not reported by heal info.

root@str957-cluster:/# ls -l /scratch/extra/m******/PNG/PNGQuijote/ModGrav/fNL40/
ls: cannot access '/scratch/extra/m******/PNG/PNGQuijote/ModGrav/fNL40/output_21': Stale file handle
total 40
d?????????  ? ?            ?               ?            ? output_21
...

but "gluster v heal cluster_data info | grep output_21" returns nothing. :(

It seems the other stale handles either got corrected by subsequent 'stat's or became I/O errors.

Diego.

On 12/02/2023 21:34, Strahil Nikolov wrote:
> The 2nd error indicates conflicts between the nodes. The only way that
> could happen on replica 3 is a gfid conflict (the file/dir was renamed or
> recreated).
>
> Are you sure that all bricks are online? Usually 'Transport endpoint is
> not connected' indicates a brick-down situation.
>
> First start with all the stale file handles:
> check the md5sum on all bricks. If it differs somewhere, delete the gfid,
> move the file away from the brick and check it in FUSE. If it's fine,
> touch it and the FUSE client will "heal" it.
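> Roughly, something like this (the node names and the /srv/bricks/*/d brick
> roots are taken from later in this thread; PATH/TO/FILE is the file's path
> relative to the volume root):
>
> for H in clustor00 clustor01 clustor02; do
>     echo "== $H =="
>     ssh "$H" 'md5sum /srv/bricks/*/d/PATH/TO/FILE 2>/dev/null'
> done
>
> Keep in mind the arbiter copy holds no data, so only the two data-brick
> copies are expected to match.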
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Feb 7, 2023 at 16:33, Diego Zuccato <diego.zuccato@unibo.it> wrote:
> > The contents do not match exactly, but the only difference is the
> > "option shared-brick-count" line that sometimes is 0 and sometimes 1.
> >
> > The command you gave could be useful for the files that still need
> > healing with the source still present, but the files related to the
> > stale gfids have been deleted, so "find -samefile" won't find anything.
> >
> > For the other files reported by heal info, I saved the output to
> > 'healinfo', then:
> >
> >   for T in $(grep '^/' healinfo | sort | uniq); do stat /mnt/scratch$T > /dev/null; done
> >
> > but I still see a lot of 'Transport endpoint is not connected' and
> > 'Stale file handle' errors :( And many 'No such file or directory'...
> >
> > I don't understand the first two errors, since /mnt/scratch has been
> > freshly mounted after enabling client healing, and gluster v info does
> > not highlight unconnected/down bricks.
> >
> > Diego
> >
> > On 06/02/2023 22:46, Strahil Nikolov wrote:
> > > I'm not sure the md5sum has to match, but at least the content should.
> > > In modern versions of GlusterFS the client-side healing is disabled,
> > > but it's worth trying.
> > > You will need to enable cluster.metadata-self-heal,
> > > cluster.data-self-heal and cluster.entry-self-heal and then create a
> > > small one-liner that identifies the names of the files/dirs from the
> > > volume heal info, so you can stat them through FUSE.
> > >
> > > Something like this:
> > >
> > > for i in $(gluster volume heal <VOL> info | awk -F '<gfid:|>' '/gfid:/ {print $2}'); do
> > >     find /PATH/TO/BRICK/ -samefile /PATH/TO/BRICK/.glusterfs/${i:0:2}/${i:2:2}/$i |
> > >         awk '!/.glusterfs/ {gsub("/PATH/TO/BRICK", "stat /MY/FUSE/MOUNTPOINT", $0); print $0}'
> > > done
> > >
> > > Then just copy-paste the output and you will trigger the client-side
> > > heal only on the affected gfids.
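> > > For reference, turning the three options on would look roughly like
> > > this (cluster_data being the volume name used elsewhere in this thread):
> > >
> > > gluster volume set cluster_data cluster.metadata-self-heal on
> > > gluster volume set cluster_data cluster.data-self-heal on
> > > gluster volume set cluster_data cluster.entry-self-heal on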
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > On Monday, 6 February 2023 at 10:19:02 GMT+2, Diego Zuccato <diego.zuccato@unibo.it> wrote:
> > > >
> > > > Oops... Re-including the list that got excluded in my previous answer :(
> > > >
> > > > I generated md5sums of all files in vols/ on clustor02 and compared them to
> > > > the other nodes (clustor00 and clustor01).
> > > > There are differences in the volfiles' shared-brick-count (shouldn't it
> > > > always be 1, since every data brick is on its own fs? The quorum bricks,
> > > > OTOH, share a single partition on SSD and should always be 15, but in
> > > > both cases it's sometimes 0).
> > > >
> > > > I nearly got a stroke when I saw the diff output for the 'info' files,
> > > > but once I sorted 'em their contents matched. Phew!
> > > >
> > > > Diego
> > > >
> > > > On 03/02/2023 19:01, Strahil Nikolov wrote:
> > > > > This one doesn't look good:
> > > > >
> > > > > [2023-02-03 07:45:46.896924 +0000] E [MSGID: 114079]
> > > > > [client-handshake.c:1253:client_query_portmap] 0-cluster_data-client-48:
> > > > > remote-subvolume not set in volfile []
> > > > >
> > > > > Can you compare all vol files in /var/lib/glusterd/vols/ between the nodes?
> > > > > I have the suspicion that there is a vol file mismatch (maybe
> > > > > /var/lib/glusterd/vols/<VOLUME_NAME>/*-shd.vol).
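> > > > > A quick way to spot a mismatch, assuming the clustor00/01/02 host
> > > > > names used elsewhere in this thread (checksum every volfile on each
> > > > > node, then diff whichever entries disagree):
> > > > >
> > > > > for H in clustor00 clustor01 clustor02; do
> > > > >     ssh "$H" 'cd /var/lib/glusterd/vols && find . -name "*.vol" -type f | sort | xargs md5sum' > /tmp/vols.$H
> > > > > done
> > > > > diff /tmp/vols.clustor00 /tmp/vols.clustor01
> > > > > diff /tmp/vols.clustor00 /tmp/vols.clustor02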
> > > > >
> > > > > Best Regards,
> > > > > Strahil Nikolov
> > > > >
> > > > > On Fri, Feb 3, 2023 at 12:20, Diego Zuccato <diego.zuccato@unibo.it> wrote:
> > > > > > Can't see anything relevant in the glfsheal log, just messages related to
> > > > > > the crash of one of the nodes (the one that had the mobo replaced... I
> > > > > > fear some on-disk structures could have been silently damaged by RAM
> > > > > > errors and that makes gluster processes crash, or it's just an issue
> > > > > > with enabling brick-multiplex).
> > > > > > -8<--
> > > > > > [2023-02-03 07:45:46.896924 +0000] E [MSGID: 114079]
> > > > > > [client-handshake.c:1253:client_query_portmap] 0-cluster_data-client-48:
> > > > > > remote-subvolume not set in volfile []
> > > > > > [2023-02-03 07:45:46.897282 +0000] E [rpc-clnt.c:331:saved_frames_unwind]
> > > > > > (--> /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x195)[0x7fce0c867b95]
> > > > > > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x72fc)[0x7fce0c0ca2fc]
> > > > > > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x109)[0x7fce0c0d2419]
> > > > > > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x10308)[0x7fce0c0d3308]
> > > > > > (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7fce0c0ce7e6]
> > > > > > ))))) 0-cluster_data-client-48: forced unwinding frame type(GF-DUMP)
> > > > > > op(NULL(2)) called at 2023-02-03 07:45:46.891054 +0000 (xid=0x13)
> > > > > > -8<--
> > > > > >
> > > > > > Well, actually I *KNOW* the files outside .glusterfs have been deleted
> > > > > > (by me :) ). That's why I call those 'stale' gfids.
> > > > > > Affected entries under .glusterfs usually have link count = 1 =>
> > > > > > nothing 'find' can find.
> > > > > > Since I already recovered those files (before deleting them from the bricks),
> > > > > > can the .glusterfs entries be deleted too or should I check something else?
> > > > > > Maybe I should create a script that finds all files/dirs (not symlinks,
> > > > > > IIUC) in .glusterfs on all bricks/arbiters and moves 'em to a temp dir?
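> > > > > > Something along these lines, perhaps (GNU find assumed; this only lists
> > > > > > candidates, and directory gfids, which are symlinks, would still need
> > > > > > separate handling):
> > > > > >
> > > > > > # list regular files in the gfid store (.glusterfs/XX/YY/<gfid>) whose
> > > > > > # link count is 1, i.e. nothing outside .glusterfs links to them any more
> > > > > > BRICK=/srv/bricks/NN/d   # placeholder: repeat for every data brick and arbiter
> > > > > > find "$BRICK/.glusterfs" -type f -links 1 \
> > > > > >      -regextype posix-egrep \
> > > > > >      -regex '.*/[0-9a-f]{2}/[0-9a-f]{2}/[0-9a-f-]{36}' -print
> > > > > > # after reviewing the list:  ... -print0 | xargs -0 mv -t /var/tmp/stale-gfids/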
> > > > > >
> > > > > > Diego
> > > > > >
> > > > > > On 02/02/2023 23:35, Strahil Nikolov wrote:
> > > > > > > Any issues reported in /var/log/glusterfs/glfsheal-*.log ?
> > > > > > >
> > > > > > > The easiest way to identify the affected entries is to run:
> > > > > > >
> > > > > > > find /FULL/PATH/TO/BRICK/ -samefile /FULL/PATH/TO/BRICK/.glusterfs/57/e4/57e428c7-6bed-4eb3-b9bd-02ca4c46657a
> > > > > > >
> > > > > > > Best Regards,
> > > > > > > Strahil Nikolov
> > > > > > >
> > > > > > > On Tuesday, 31 January 2023 at 11:58:24 GMT+2, Diego Zuccato <diego.zuccato@unibo.it> wrote:
> > > > > > > >
> > > > > > > > Hello all.
> > > > > > > >
> > > > > > > > I've had one of the 3 nodes serving a "replica 3 arbiter 1" volume down
> > > > > > > > for some days (apparently RAM issues, but actually a failing mobo).
> > > > > > > > The other nodes have had some issues (RAM exhaustion, an old problem
> > > > > > > > already ticketed but still with no solution) and some brick processes
> > > > > > > > coredumped. Restarting the processes allowed the cluster to continue
> > > > > > > > working. Mostly.
> > > > > > > >
> > > > > > > > After the third server got fixed I started a heal, but files didn't get
> > > > > > > > healed and the count (by "ls -l /srv/bricks/*/d/.glusterfs/indices/xattrop/ | grep ^- | wc -l")
> > > > > > > > did not decrease over 2 days. So, to recover, I copied the files from the
> > > > > > > > bricks to temp storage (keeping both copies of conflicting files with
> > > > > > > > different contents), removed the files on the bricks and arbiters, and
> > > > > > > > finally copied them back from temp storage to the volume.
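> > > > > > > > For reference, a per-brick count of pending heals can also be read
> > > > > > > > directly from gluster (cluster_data being the volume name used
> > > > > > > > elsewhere in this thread):
> > > > > > > >
> > > > > > > > gluster volume heal cluster_data statistics heal-count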
> > > > > > > >
> > > > > > > > Now the files are accessible, but I still see lots of entries like
> > > > > > > > <gfid:57e428c7-6bed-4eb3-b9bd-02ca4c46657a>
> > > > > > > >
> > > > > > > > IIUC that's due to a mismatch between the .glusterfs/ contents and the
> > > > > > > > normal hierarchy. Is there some tool to speed up the cleanup?
> > > > > > > >
> > > > > > > > Tks.
> > > > > > > >
> > > > > > > > --
> > > > > > > > Diego Zuccato
> > > > > > > > DIFA - Dip. di Fisica e Astronomia
> > > > > > > > Servizi Informatici
> > > > > > > > Alma Mater Studiorum - Università di Bologna
> > > > > > > > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > > > > > > > tel.: +39 051 20 95786
> > > > > > > > ________
> > > > > > > >
> > > > > > > > Community Meeting Calendar:
> > > > > > > >
> > > > > > > > Schedule -
> > > > > > > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > > > > > > > Bridge: https://meet.google.com/cpu-eiue-hvk
> > > > > > > > Gluster-users mailing list
> > > > > > > > Gluster-users@gluster.org
> > > > > > > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > > > > >
> > > > > > --
> > > > > > Diego Zuccato
> > > > > > DIFA - Dip. di Fisica e Astronomia
> > > > > > Servizi Informatici
> > > > > > Alma Mater Studiorum - Università di Bologna
> > > > > > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > > > > > tel.: +39 051 20 95786
> > > >
> > > > --
> > > > Diego Zuccato
> > > > DIFA - Dip. di Fisica e Astronomia
> > > > Servizi Informatici
> > > > Alma Mater Studiorum - Università di Bologna
> > > > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > > > tel.: +39 051 20 95786
> >
> > --
> > Diego Zuccato
> > DIFA - Dip. di Fisica e Astronomia
> > Servizi Informatici
> > Alma Mater Studiorum - Università di Bologna
> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > tel.: +39 051 20 95786

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786