<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>hi, nbalacha, </div><div>Sorry for my later reply.</div><div>I have tried to turn off cluster.readdir-optimize, it's of no help.</div><div>I create a similar volume with default option, and mv command seems ok, but when I enable quota, mv will lost file, the volume option is list as below:</div><div><div><br></div><div><br></div><div>Brick168: node33:/data13/bricks/test4</div><div>Options Reconfigured:</div><div>features.quota-deem-statfs: on</div><div>features.inode-quota: on</div><div>features.quota: on</div><div>transport.address-family: inet</div><div>nfs.disable: on</div><div>performance.client-io-threads: off</div></div><div><br></div><div><br></div><div>so I think it seems a bug of quota machanism, but I cant understand why.</div><div><br></div><div>Howerer, when i create a similar volume with only 2x2 bricks and enable quota, mv will lost file occasionally,if i dont make a mistake. without quota, everything seems ok.</div><div><br></div><div>Yours</div><div>Yu</div><div><br></div><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Date: Wed, 5 Sep 2018 14:45:36 +0530<br>
From: Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>><br>
To: yu sun <<a href="mailto:sunyu1949@gmail.com" target="_blank">sunyu1949@gmail.com</a>><br>
Cc: gluster-users <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br>
Subject: Re: [Gluster-users] mv lost some files ?<br>
Message-ID:<br>
<CAOUCJ=g9wd2zkcBd5=<a href="mailto:VBzt9-nJmPww6WJgjsm8p7xvPTTmqZLQ@mail.gmail.com" target="_blank">VBzt9-nJmPww6WJgjsm8p7xvPTTmqZLQ@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On 5 September 2018 at 14:10, Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>><br>
wrote:<br>
<br>
><br>
><br>
> On 5 September 2018 at 14:02, Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>><br>
> wrote:<br>
><br>
>> Hi,<br>
>><br>
>> Please try turning off cluster.readdir-optimize and see if it helps.<br>
>><br>
><br>
> You can also try turning off parallel-readdir.<br>
><br>
<br>
<br>
Please ignore this - it is not enabled on your volume.<br>
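<br>
For reference, these options can be checked and changed with the gluster CLI; a minimal sketch, using the volume name project2 from this thread:<br>
<br>
# show the current value of the option, then turn it off for the volume<br>
gluster volume get project2 cluster.readdir-optimize<br>
gluster volume set project2 cluster.readdir-optimize off<br>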
<br>
><br>
><br>
>> If not, please send us the client mount logs and a tcpdump of when the<br>
>> *ls* is performed from the client. Use the following to capture the<br>
>> dump:<br>
>><br>
>> tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22<br>
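<br>
(For a fuse mount, the client mount log is typically found under /var/log/glusterfs/ on the client, named after the mount point, e.g. /var/log/glusterfs/mnt-project2.log for a mount at /mnt/project2.)<br>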
>><br>
>><br>
>><br>
>> Thanks,<br>
>> Nithya<br>
>><br>
>><br>
>>><br>
>>> Raghavendra Gowdappa <<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>> wrote on Wednesday, 5 September 2018 at 12:40:<br>
>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>>><br>
>>>>>><br>
>>>>>> On Tue, Sep 4, 2018 at 5:28 PM, yu sun <<a href="mailto:sunyu1949@gmail.com" target="_blank">sunyu1949@gmail.com</a>> wrote:<br>
>>>>>><br>
>>>>>>> Hi all:<br>
>>>>>>><br>
>>>>>>> I have a replicated volume project2 with info:<br>
>>>>>>> Volume Name: project2<br>
>>>>>>> Type: Distributed-Replicate<br>
>>>>>>> Volume ID: 60175b8e-de0e-4409-81ae-7bb5eb5cacbf<br>
>>>>>>> Status: Started<br>
>>>>>>> Snapshot Count: 0<br>
>>>>>>> Number of Bricks: 84 x 2 = 168<br>
>>>>>>> Transport-type: tcp<br>
>>>>>>> Bricks: Brick1:<br>
>>>>>>> node20:/data2/bricks/project2 Brick2: node21:/data2/bricks/project2 Brick3:<br>
>>>>>>> node22:/data2/bricks/project2 Brick4: node23:/data2/bricks/project2 Brick5:<br>
>>>>>>> node24:/data2/bricks/project2 Brick6: node25:/data2/bricks/project2 Brick7:<br>
>>>>>>> node26:/data2/bricks/project2 Brick8: node27:/data2/bricks/project2 Brick9:<br>
>>>>>>> node28:/data2/bricks/project2 Brick10: node29:/data2/bricks/project2<br>
>>>>>>> Brick11: node30:/data2/bricks/project2 Brick12:<br>
>>>>>>> node31:/data2/bricks/project2 Brick13: node32:/data2/bricks/project2<br>
>>>>>>> Brick14: node33:/data2/bricks/project2 Brick15:<br>
>>>>>>> node20:/data3/bricks/project2 Brick16: node21:/data3/bricks/project2<br>
>>>>>>> Brick17: node22:/data3/bricks/project2 Brick18:<br>
>>>>>>> node23:/data3/bricks/project2 Brick19: node24:/data3/bricks/project2<br>
>>>>>>> Brick20: node25:/data3/bricks/project2 Brick21:<br>
>>>>>>> node26:/data3/bricks/project2 Brick22: node27:/data3/bricks/project2<br>
>>>>>>> Brick23: node28:/data3/bricks/project2 Brick24:<br>
>>>>>>> node29:/data3/bricks/project2 Brick25: node30:/data3/bricks/project2<br>
>>>>>>> Brick26: node31:/data3/bricks/project2 Brick27:<br>
>>>>>>> node32:/data3/bricks/project2 Brick28: node33:/data3/bricks/project2<br>
>>>>>>> Brick29: node20:/data4/bricks/project2 Brick30:<br>
>>>>>>> node21:/data4/bricks/project2 Brick31: node22:/data4/bricks/project2<br>
>>>>>>> Brick32: node23:/data4/bricks/project2 Brick33:<br>
>>>>>>> node24:/data4/bricks/project2 Brick34: node25:/data4/bricks/project2<br>
>>>>>>> Brick35: node26:/data4/bricks/project2 Brick36:<br>
>>>>>>> node27:/data4/bricks/project2 Brick37: node28:/data4/bricks/project2<br>
>>>>>>> Brick38: node29:/data4/bricks/project2 Brick39:<br>
>>>>>>> node30:/data4/bricks/project2 Brick40: node31:/data4/bricks/project2<br>
>>>>>>> Brick41: node32:/data4/bricks/project2 Brick42:<br>
>>>>>>> node33:/data4/bricks/project2 Brick43: node20:/data5/bricks/project2<br>
>>>>>>> Brick44: node21:/data5/bricks/project2 Brick45:<br>
>>>>>>> node22:/data5/bricks/project2 Brick46: node23:/data5/bricks/project2<br>
>>>>>>> Brick47: node24:/data5/bricks/project2 Brick48:<br>
>>>>>>> node25:/data5/bricks/project2 Brick49: node26:/data5/bricks/project2<br>
>>>>>>> Brick50: node27:/data5/bricks/project2 Brick51:<br>
>>>>>>> node28:/data5/bricks/project2 Brick52: node29:/data5/bricks/project2<br>
>>>>>>> Brick53: node30:/data5/bricks/project2 Brick54:<br>
>>>>>>> node31:/data5/bricks/project2 Brick55: node32:/data5/bricks/project2<br>
>>>>>>> Brick56: node33:/data5/bricks/project2 Brick57:<br>
>>>>>>> node20:/data6/bricks/project2 Brick58: node21:/data6/bricks/project2<br>
>>>>>>> Brick59: node22:/data6/bricks/project2 Brick60:<br>
>>>>>>> node23:/data6/bricks/project2 Brick61: node24:/data6/bricks/project2<br>
>>>>>>> Brick62: node25:/data6/bricks/project2 Brick63:<br>
>>>>>>> node26:/data6/bricks/project2 Brick64: node27:/data6/bricks/project2<br>
>>>>>>> Brick65: node28:/data6/bricks/project2 Brick66:<br>
>>>>>>> node29:/data6/bricks/project2 Brick67: node30:/data6/bricks/project2<br>
>>>>>>> Brick68: node31:/data6/bricks/project2 Brick69:<br>
>>>>>>> node32:/data6/bricks/project2 Brick70: node33:/data6/bricks/project2<br>
>>>>>>> Brick71: node20:/data7/bricks/project2 Brick72:<br>
>>>>>>> node21:/data7/bricks/project2 Brick73: node22:/data7/bricks/project2<br>
>>>>>>> Brick74: node23:/data7/bricks/project2 Brick75:<br>
>>>>>>> node24:/data7/bricks/project2 Brick76: node25:/data7/bricks/project2<br>
>>>>>>> Brick77: node26:/data7/bricks/project2 Brick78:<br>
>>>>>>> node27:/data7/bricks/project2 Brick79: node28:/data7/bricks/project2<br>
>>>>>>> Brick80: node29:/data7/bricks/project2 Brick81:<br>
>>>>>>> node30:/data7/bricks/project2 Brick82: node31:/data7/bricks/project2<br>
>>>>>>> Brick83: node32:/data7/bricks/project2 Brick84:<br>
>>>>>>> node33:/data7/bricks/project2 Brick85: node20:/data8/bricks/project2<br>
>>>>>>> Brick86: node21:/data8/bricks/project2 Brick87:<br>
>>>>>>> node22:/data8/bricks/project2 Brick88: node23:/data8/bricks/project2<br>
>>>>>>> Brick89: node24:/data8/bricks/project2 Brick90:<br>
>>>>>>> node25:/data8/bricks/project2 Brick91: node26:/data8/bricks/project2<br>
>>>>>>> Brick92: node27:/data8/bricks/project2 Brick93:<br>
>>>>>>> node28:/data8/bricks/project2 Brick94: node29:/data8/bricks/project2<br>
>>>>>>> Brick95: node30:/data8/bricks/project2 Brick96:<br>
>>>>>>> node31:/data8/bricks/project2 Brick97: node32:/data8/bricks/project2<br>
>>>>>>> Brick98: node33:/data8/bricks/project2 Brick99:<br>
>>>>>>> node20:/data9/bricks/project2 Brick100: node21:/data9/bricks/project2<br>
>>>>>>> Brick101: node22:/data9/bricks/project2 Brick102:<br>
>>>>>>> node23:/data9/bricks/project2 Brick103: node24:/data9/bricks/project2<br>
>>>>>>> Brick104: node25:/data9/bricks/project2 Brick105:<br>
>>>>>>> node26:/data9/bricks/project2 Brick106: node27:/data9/bricks/project2<br>
>>>>>>> Brick107: node28:/data9/bricks/project2 Brick108:<br>
>>>>>>> node29:/data9/bricks/project2 Brick109: node30:/data9/bricks/project2<br>
>>>>>>> Brick110: node31:/data9/bricks/project2 Brick111:<br>
>>>>>>> node32:/data9/bricks/project2 Brick112: node33:/data9/bricks/project2<br>
>>>>>>> Brick113: node20:/data10/bricks/project2 Brick114:<br>
>>>>>>> node21:/data10/bricks/project2 Brick115: node22:/data10/bricks/project2<br>
>>>>>>> Brick116: node23:/data10/bricks/project2 Brick117:<br>
>>>>>>> node24:/data10/bricks/project2 Brick118: node25:/data10/bricks/project2<br>
>>>>>>> Brick119: node26:/data10/bricks/project2 Brick120:<br>
>>>>>>> node27:/data10/bricks/project2 Brick121: node28:/data10/bricks/project2<br>
>>>>>>> Brick122: node29:/data10/bricks/project2 Brick123:<br>
>>>>>>> node30:/data10/bricks/project2 Brick124: node31:/data10/bricks/project2<br>
>>>>>>> Brick125: node32:/data10/bricks/project2 Brick126:<br>
>>>>>>> node33:/data10/bricks/project2 Brick127: node20:/data11/bricks/project2<br>
>>>>>>> Brick128: node21:/data11/bricks/project2 Brick129:<br>
>>>>>>> node22:/data11/bricks/project2 Brick130: node23:/data11/bricks/project2<br>
>>>>>>> Brick131: node24:/data11/bricks/project2 Brick132:<br>
>>>>>>> node25:/data11/bricks/project2 Brick133: node26:/data11/bricks/project2<br>
>>>>>>> Brick134: node27:/data11/bricks/project2 Brick135:<br>
>>>>>>> node28:/data11/bricks/project2 Brick136: node29:/data11/bricks/project2<br>
>>>>>>> Brick137: node30:/data11/bricks/project2 Brick138:<br>
>>>>>>> node31:/data11/bricks/project2 Brick139: node32:/data11/bricks/project2<br>
>>>>>>> Brick140: node33:/data11/bricks/project2 Brick141:<br>
>>>>>>> node20:/data12/bricks/project2 Brick142: node21:/data12/bricks/project2<br>
>>>>>>> Brick143: node22:/data12/bricks/project2 Brick144:<br>
>>>>>>> node23:/data12/bricks/project2 Brick145: node24:/data12/bricks/project2<br>
>>>>>>> Brick146: node25:/data12/bricks/project2 Brick147:<br>
>>>>>>> node26:/data12/bricks/project2 Brick148: node27:/data12/bricks/project2<br>
>>>>>>> Brick149: node28:/data12/bricks/project2 Brick150:<br>
>>>>>>> node29:/data12/bricks/project2 Brick151: node30:/data12/bricks/project2<br>
>>>>>>> Brick152: node31:/data12/bricks/project2 Brick153:<br>
>>>>>>> node32:/data12/bricks/project2 Brick154: node33:/data12/bricks/project2<br>
>>>>>>> Brick155: node20:/data13/bricks/project2 Brick156:<br>
>>>>>>> node21:/data13/bricks/project2 Brick157: node22:/data13/bricks/project2<br>
>>>>>>> Brick158: node23:/data13/bricks/project2 Brick159:<br>
>>>>>>> node24:/data13/bricks/project2 Brick160: node25:/data13/bricks/project2<br>
>>>>>>> Brick161: node26:/data13/bricks/project2 Brick162:<br>
>>>>>>> node27:/data13/bricks/project2 Brick163: node28:/data13/bricks/project2<br>
>>>>>>> Brick164: node29:/data13/bricks/project2 Brick165:<br>
>>>>>>> node30:/data13/bricks/project2 Brick166: node31:/data13/bricks/project2<br>
>>>>>>> Brick167: node32:/data13/bricks/project2 Brick168:<br>
>>>>>>> node33:/data13/bricks/project2<br>
>>>>>>> Options Reconfigured:<br>
>>>>>>> performance.force-readdirp: on<br>
>>>>>>> performance.write-behind: off<br>
>>>>>>> performance.stat-prefetch: on<br>
>>>>>>> performance.client-io-threads: on<br>
>>>>>>> nfs.disable: on<br>
>>>>>>> transport.address-family: inet<br>
>>>>>>> features.quota: on<br>
>>>>>>> features.inode-quota: on<br>
>>>>>>> features.quota-deem-statfs: on<br>
>>>>>>> cluster.readdir-optimize: on<br>
>>>>>>> cluster.lookup-optimize: on<br>
>>>>>>> dht.force-readdirp: off<br>
>>>>>>> client.event-threads: 10<br>
>>>>>>> server.event-threads: 10<br>
>>>>>>> performance.readdir-ahead: on<br>
>>>>>>> performance.io-cache: on<br>
>>>>>>> performance.flush-behind: on<br>
>>>>>>> performance.cache-size: 5GB<br>
>>>>>>> performance.cache-max-file-size: 1MB<br>
>>>>>>> performance.write-behind-window-size: 10MB<br>
>>>>>>> performance.read-ahead: off<br>
>>>>>>> network.remote-dio: enable<br>
>>>>>>> performance.strict-o-direct: disable<br>
>>>>>>> performance.io-thread-count: 25<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> The volume looks OK, and I mount this volume on my client machine:<br>
>>>>>>> mount -t glusterfs -o oom-score-adj=-999 -o direct-io-mode=disable<br>
>>>>>>> -o use-readdirp=no node20:/project2 /mnt/project2<br>
>>>>>>><br>
>>>>>>> I have a directory in /mnt/project2/, but when I mv that directory into<br>
>>>>>>> another directory, files in it are lost: tree or ls shows some files<br>
>>>>>>> missing. My operations are listed below:<br>
>>>>>>><br>
>>>>>><br>
>>>>>> Looks very similar to:<br>
>>>>>> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1118762" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1118762</a><br>
>>>>>> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1337394" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1337394</a><br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ mkdir test-dir<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree<br>
>>>>>>> .<br>
>>>>>>> ├── face_landmarks<br>
>>>>>>> │   └── alive<br>
>>>>>>> │       └── logs_20180823_28<br>
>>>>>>> │           ├── info_000000.out<br>
>>>>>>> │           ├── info_000001.out<br>
>>>>>>> │           ├── info_000002.out<br>
>>>>>>> │           ├── info_000003.out<br>
>>>>>>> │           ├── info_000004.out<br>
>>>>>>> │           ├── info_000005.out<br>
>>>>>>> │           ├── info_000006.out<br>
>>>>>>> │           ├── info_000007.out<br>
>>>>>>> │           ├── info_000008.out<br>
>>>>>>> │           ├── info_000009.out<br>
>>>>>>> │           ├── info_000010.out<br>
>>>>>>> │           ├── info_000011.out<br>
>>>>>>> │           ├── info_000012.out<br>
>>>>>>> │           ├── info_000013.out<br>
>>>>>>> │           ├── info_000014.out<br>
>>>>>>> │           ├── info_000015.out<br>
>>>>>>> │           ├── info_000016.out<br>
>>>>>>> │           ├── info_000017.out<br>
>>>>>>> │           ├── info_000018.out<br>
>>>>>>> │           └── info_000019.out<br>
>>>>>>> └── test-dir<br>
>>>>>>><br>
>>>>>>> 4 directories, 20 files<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ mv<br>
>>>>>>> face_landmarks/ test-dir/<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree<br>
>>>>>>> .<br>
>>>>>>> └── test-dir<br>
>>>>>>>     └── face_landmarks<br>
>>>>>>><br>
>>>>>>> 2 directories, 0 files<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ cd<br>
>>>>>>> test-dir/face_landmarks/<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir/face_landmarks$<br>
>>>>>>> ls<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir/face_landmarks$<br>
>>>>>>> cd ..<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir$ mv<br>
>>>>>>> face_landmarks/ ..<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir$ cd ..<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree<br>
>>>>>>> .<br>
>>>>>>> ├── face_landmarks<br>
>>>>>>> │   └── alive<br>
>>>>>>> └── test-dir<br>
>>>>>>><br>
>>>>>>> 3 directories, 0 files<br>
>>>>>>> root@ml-gpu-ser129.nmg01:/mnt/project2/371_37829$<br>
>>>>>>><br>
>>>>>>> I think I may have made some mistakes with the volume options, but I am not sure.<br>
>>>>>>> How can I find the lost files? The files seem to still be in the directory,<br>
>>>>>>> because I can't remove the directory and rm tells me "Directory not empty".<br>
>>>>>>><br>
>>>>>><br>
>>>>>> It's likely that the src and dst of the mv have the same gfid, and that's<br>
>>>>>> causing the issues. Can you look at both the src and dst paths on all bricks?<br>
>>>>>> The union of the contents of both directories should give all the files that<br>
>>>>>> were in the src directory before the mv. Once found, you can:<br>
>>>>>> * keep a backup of contents of src and dst on all bricks<br>
>>>>>> * remove trusted.gfid xattr on src and dst from all bricks<br>
>>>>>> * remove gfid handle (.glusterfs/<first two characters of<br>
>>>>>> gfid>/<second set of two characters of gfid>/<gfid> on each brick)<br>
>>>>>> * disable readdirplus in the entire stack (maybe you can use a tmp mount<br>
>>>>>> for this) [1]<br>
>>>>>> * stat src and dst on a mount point with readdirplus disabled.<br>
>>>>>> * Now you'll see two directories, src and dst, on the mountpoint. You can<br>
>>>>>> copy the contents of both into a new directory.<br>
>>>>>><br>
>>>>>> [1] <a href="https://lists.gluster.org/pipermail/gluster-users/2017-March/030148.html" rel="noreferrer" target="_blank">https://lists.gluster.org/pipermail/gluster-users/2017-March/030148.html</a><br>
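<br>
A rough sketch of these steps on one brick (the brick path, directory paths, gfid and temporary mount point below are placeholders modeled on the paths mentioned earlier in this thread; keep a backup of both directories first, as noted above):<br>
<br>
# run on each brick as root; adjust the paths to the affected directories<br>
BRICK=/data2/bricks/project2<br>
SRC=$BRICK/371_37829/face_landmarks<br>
DST=$BRICK/371_37829/test-dir/face_landmarks<br>
getfattr -n trusted.gfid -e hex "$SRC"      # note the gfid of each directory<br>
getfattr -n trusted.gfid -e hex "$DST"<br>
setfattr -x trusted.gfid "$SRC"             # remove the trusted.gfid xattr<br>
setfattr -x trusted.gfid "$DST"<br>
GFID=aabbccdd-0000-0000-0000-000000000000   # placeholder: the gfid noted above, in uuid form<br>
rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"   # remove the gfid handle<br>
<br>
# then, from a client, stat both paths on a temporary mount with readdirplus disabled [1]<br>
mount -t glusterfs -o use-readdirp=no node20:/project2 /mnt/tmpfix<br>
stat /mnt/tmpfix/371_37829/face_landmarks /mnt/tmpfix/371_37829/test-dir/face_landmarks<br>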
>>>>>><br>
>>>>>><br>
>>>>>>><br>
>>>>>>> Any suggestions are appreciated.<br>
>>>>>>> Many Thanks<br>
>>>>>>><br>
>>>>>>> Best regards<br>
>>>>>>> Yu<br>
>>>>>>><br>
>>>>>>> _______________________________________________<br>
>>>>>>> Gluster-users mailing list<br>
>>>>>>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>>>>>>> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>><br>
>>> _______________________________________________<br>
>>> Gluster-users mailing list<br>
>>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>>> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>>><br>
>><br>
>><br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20180905/b869cf39/attachment-0001.html" rel="noreferrer" target="_blank">http://lists.gluster.org/pipermail/gluster-users/attachments/20180905/b869cf39/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 5 Sep 2018 10:21:51 +0000<br>
From: Krishna Verma <<a href="mailto:kverma@cadence.com" target="_blank">kverma@cadence.com</a>><br>
To: Gluster Users <<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>><br>
Subject: [Gluster-users] Master-Slave-Replicated Volume size is<br>
different<br>
Message-ID:<br>
<<a href="mailto:BY2PR07MB9683D3DB74BD7A1F3AAA04CD8020@BY2PR07MB968.namprd07.prod.outlook.com" target="_blank">BY2PR07MB9683D3DB74BD7A1F3AAA04CD8020@BY2PR07MB968.namprd07.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
Hi All,<br>
<br>
I have a 2*1 geo-replication setup with a replicated volume.<br>
<br>
The replicated volume is showing a difference in "used size" between the master and the slave.<br>
<br>
At the master, "/repvol" shows only 2% used.<br>
<br>
[root@noi-foreman02 repvol]# df -hT /repvol<br>
Filesystem Type Size Used Avail Use% Mounted on<br>
gluster-poc-noida:/glusterep fuse.glusterfs 30G 402M 28G 2% /repvol<br>
[root@noi-foreman02 repvol]#<br>
<br>
Whereas at the slave it shows 71% used.<br>
<br>
[root@sj-kverma repvol]# df -hT /repvol<br>
Filesystem Type Size Used Avail Use% Mounted on<br>
gluster-poc-sj:glusterep fuse.glusterfs 30G 20G 8.2G 71% /repvol<br>
You have new mail in /var/spool/mail/root<br>
[root@sj-kverma repvol]#<br>
<br>
Could anyone please help me get it replicated correctly with the actual size? There is no data on either the master or the slave.<br>
<br>
/Krishna<br>
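<br>
(To see where the space on the slave is actually going, one option is to compare the brick's on-disk usage with what the mount reports; a minimal sketch, assuming the slave brick lives at /bricks/glusterep, which is a placeholder path:)<br>
<br>
# on the slave node<br>
df -hT /repvol                       # what the fuse mount reports<br>
du -sh /bricks/glusterep             # actual data stored on the brick<br>
du -sh /bricks/glusterep/.glusterfs  # gluster's internal .glusterfs area on the brick<br>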
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20180905/2ab493bd/attachment-0001.html" rel="noreferrer" target="_blank">http://lists.gluster.org/pipermail/gluster-users/attachments/20180905/2ab493bd/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
End of Gluster-users Digest, Vol 125, Issue 8<br>
*********************************************<br>
</blockquote></div></div></div></div>