Surprising! Will you be able to reproduce the issue and share the logs if I provide a custom build with extra logging?

On Thu, May 28, 2020 at 1:35 PM Petr Certik <petr@certik.cz> wrote:
Thanks for your help! Much appreciated.

The brick-fsid is the same (2065) for both of each node's local bricks;
bricks hosted on other nodes are recorded as 0:

imagegluster1:
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data2-brick:brick-fsid=2065
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data-brick:brick-fsid=2065
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data-brick:brick-fsid=0

imagegluster2:
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data2-brick:brick-fsid=2065
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data-brick:brick-fsid=2065
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data-brick:brick-fsid=0

imagegluster3:
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster1:-data-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data2-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster2:-data-brick:brick-fsid=0
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data2-brick:brick-fsid=2065
/var/lib/glusterd/vols/gv0/bricks/imagegluster3:-data-brick:brick-fsid=2065


I had already tried restarting the glusterd nodes, with no effect, but
that was before the client version upgrades.

Running the "volume set" command did not seem to help either; the
shared-brick-count values are still the same (2).

However, when restarting a node, I do get an error and a few warnings
in the log: https://pastebin.com/tqq1FCwZ

On Wed, May 27, 2020 at 3:14 PM Sanju Rakonde <srakonde@redhat.com> wrote:
>
> The shared-brick-count value indicates the number of bricks sharing a file system. In your case it should be 1, since all of the bricks are on different mount points. Can you please share the brick-fsid values?
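> If I'm reading the posix translator's use of this value correctly (an assumption on my part, not something I have verified against your setup), each brick's reported capacity is divided by shared-brick-count. A ~894G brick with a count of 2 is therefore reported as ~447G, and the two distribute subvolumes sum back to ~894G, which would explain why your clients see ~894G instead of the expected ~1.8T.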
>
> grep "brick-fsid" /var/lib/glusterd/vols/<volname>/bricks/*
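>
> As an extra sanity check (a sketch on my side, not something glusterd requires), you can compare the device IDs of the two brick mount points on each node; bricks on different file systems should report different device IDs. Using GNU stat, where %d prints the device ID, %m the mount point, and %n the file name:
>
> # stat -c '%d %m %n' /data/brick /data2/brick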
>
> I tried reproducing this issue on Fedora VMs but couldn't hit it. We see this issue on and off but have been unable to reproduce it in-house. If you see any error messages in glusterd.log, please share that log too.
>
> Workaround to recover from this situation:
> 1. Restart the glusterd service on all nodes:
> # systemctl restart glusterd
>
> 2. Run a volume set command to force the volfiles to be regenerated:
> # gluster v set <VOLNAME> min-free-disk 11%
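>
> To verify the workaround took effect (the expected outcome here is my assumption, based on your six bricks sitting on separate mounts), every brick volfile should show shared-brick-count 1 afterwards:
>
> # grep shared-brick-count /var/lib/glusterd/vols/<VOLNAME>/*.vol
>
> and the client mount should then report roughly twice the previous size:
>
> # df -h /mnt/gluster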
>
> On Wed, May 27, 2020 at 5:24 PM Petr Certik <petr@certik.cz> wrote:
>>
>> As far as I remember, there was no version update on the server. It
>> was definitely installed as version 7.
>>
>> Shared bricks:
>>
>> Server 1:
>>
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol:    option shared-brick-count 2
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data-brick.vol:    option shared-brick-count 2
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data-brick.vol:    option shared-brick-count 0
>>
>> Server 2:
>>
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data2-brick.vol:    option shared-brick-count 2
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data-brick.vol:    option shared-brick-count 2
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data-brick.vol:    option shared-brick-count 0
>>
>> Server 3:
>>
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data2-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data-brick.vol:    option shared-brick-count 0
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data2-brick.vol:    option shared-brick-count 2
>> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data-brick.vol:    option shared-brick-count 2
>>
>> On Wed, May 27, 2020 at 1:36 PM Sanju Rakonde <srakonde@redhat.com> wrote:
>> >
>> > Hi Petr,
>> >
>> > What was the server version before upgrading to 7.2?
>> >
>> > Can you please share the shared-brick-count values from the brick volfiles on all the nodes?
>> > grep shared-brick-count /var/lib/glusterd/vols/<volume_name>/*
>> >
>> > On Wed, May 27, 2020 at 2:31 PM Petr Certik <petr@certik.cz> wrote:
>> >>
>> >> Hi everyone,
>> >>
>> >> We've been running a replicated volume for a while, with three ~1 TB
>> >> bricks. Recently we added three more same-sized bricks, making it a
>> >> 2 x 3 distributed-replicated volume. However, even after a rebalance,
>> >> the `df` command on a client shows the correct usage percentage but
>> >> wrong absolute sizes: the size still shows up as ~1 TB while it
>> >> should be around 2 TB, and both the "used" and "available" sizes are
>> >> about half of what they should be. The clients were on an old
>> >> version (5.5), but even after an upgrade to 7.2 and a remount, the
>> >> reported sizes are still wrong. There are no heal entries. What can
>> >> I do to fix this?
>> >>
>> >> OS: Debian Buster everywhere
>> >> Server version: 7.3-1, opversion: 70200
>> >> Client versions: 5.5-3, 7.6-1, opversions: 50400, 70200
>> >>
>> >>
>> >> root@imagegluster1:~# gluster volume info gv0
>> >> Volume Name: gv0
>> >> Type: Distributed-Replicate
>> >> Volume ID: 5505d350-9b61-4056-9054-de9dfb58eab7
>> >> Status: Started
>> >> Snapshot Count: 0
>> >> Number of Bricks: 2 x 3 = 6
>> >> Transport-type: tcp
>> >> Bricks:
>> >> Brick1: imagegluster1:/data/brick
>> >> Brick2: imagegluster2:/data/brick
>> >> Brick3: imagegluster3:/data/brick
>> >> Brick4: imagegluster1:/data2/brick
>> >> Brick5: imagegluster2:/data2/brick
>> >> Brick6: imagegluster3:/data2/brick
>> >> Options Reconfigured:
>> >> features.cache-invalidation: on
>> >> transport.address-family: inet
>> >> storage.fips-mode-rchecksum: on
>> >> nfs.disable: on
>> >> performance.client-io-threads: off
>> >>
>> >> root@imagegluster1:~# df -h
>> >> Filesystem          Size  Used  Avail  Use%  Mounted on
>> >> ...
>> >> /dev/sdb1           894G  470G  425G   53%  /data2
>> >> /dev/sdc1           894G  469G  426G   53%  /data
>> >>
>> >>
>> >> root@any-of-the-clients:~# df -h
>> >> Filesystem          Size  Used  Avail  Use%  Mounted on
>> >> ...
>> >> imagegluster:/gv0   894G  478G  416G   54%  /mnt/gluster
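>> >>
>> >> If my arithmetic is right, a 2 x 3 volume over two ~894G file
>> >> systems per node should show roughly 894G + 894G, i.e. about 1.8T,
>> >> on the clients, so the reported 894G is almost exactly half of
>> >> what I'd expect.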
>> >>
>> >>
>> >> Let me know if there's any other info I can provide about our setup.
>> >>
>> >> Cheers,
>> >> Petr Certik
>> >
>> >
>> > --
>> > Thanks,
>> > Sanju
>>
>
>
> --
> Thanks,
> Sanju


--
Thanks,
Sanju