[Gluster-users] mv lost some files ?

yu sun sunyu1949 at gmail.com
Wed Sep 5 13:09:05 UTC 2018


Hi nbalacha,

Sorry for my late reply.
I have tried turning off cluster.readdir-optimize, but it did not help.
I created a similar volume with default options and the mv command seemed OK, but
when I enable quota, mv loses files. The volume options are listed below:


Brick168: node33:/data13/bricks/test4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
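
Enabling quota here just means the standard quota CLI, roughly the following
(a sketch -- the volume name and the limit value are placeholders, not my exact
commands):

    gluster volume quota test4 enable
    gluster volume quota test4 limit-usage / 100GB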


So it seems to be a bug in the quota mechanism, but I can't understand why.

However, when I create a similar volume with only 2 x 2 bricks and enable
quota, mv loses files only occasionally, if I am not mistaken. Without quota,
everything seems fine. (A minimal sketch of such a reproduction setup is below.)
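
Sketch of the 2 x 2 test setup, assuming hostnames and brick paths like the
ones above (volume name is a placeholder):

    gluster volume create test2x2 replica 2 \
        node20:/data2/bricks/test2x2 node21:/data2/bricks/test2x2 \
        node22:/data2/bricks/test2x2 node23:/data2/bricks/test2x2
    gluster volume start test2x2
    gluster volume quota test2x2 enable
    mount -t glusterfs node20:/test2x2 /mnt/test2x2
    # then repeat the mkdir/mv/tree sequence from the transcript further down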

Yours
Yu


Date: Wed, 5 Sep 2018 14:45:36 +0530
> From: Nithya Balachandran <nbalacha at redhat.com>
> To: yu sun <sunyu1949 at gmail.com>
> Cc: gluster-users <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] mv lost some files ?
> Message-ID:
>         <CAOUCJ=g9wd2zkcBd5=
> VBzt9-nJmPww6WJgjsm8p7xvPTTmqZLQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On 5 September 2018 at 14:10, Nithya Balachandran <nbalacha at redhat.com>
> wrote:
>
> >
> >
> > On 5 September 2018 at 14:02, Nithya Balachandran <nbalacha at redhat.com>
> > wrote:
> >
> >> Hi,
> >>
> >> Please try turning off cluster.readdir-optimize and see if it helps.
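> >> (For example, something along the lines of:
> >> gluster volume set project2 cluster.readdir-optimize off)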
> >>
> >
> > You can also try turning off parallel-readdir.
> >
>
>
> Please ignore this - it is not enabled on your volume.
>
> >
> >
> >> If not, please send us the client mount logs and a tcpdump of when the
> >> *ls* is performed from the client.  Use the following to capture the
> >> dump:
> >>
> >> tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
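> >>
> >> (The FUSE client mount log is normally found under /var/log/glusterfs/ on
> >> the client, named after the mount point, e.g.
> >> /var/log/glusterfs/mnt-project2.log for a mount on /mnt/project2.)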
> >>
> >>
> >>
> >> Thanks,
> >> Nithya
> >>
> >>
> >>>
> >>> Raghavendra Gowdappa <rgowdapp at redhat.com> wrote on Wed, 5 Sep 2018 at 12:40 PM:
> >>>
> >>>>
> >>>>
> >>>>
> >>>>>
> >>>>>>
> >>>>>> On Tue, Sep 4, 2018 at 5:28 PM, yu sun <sunyu1949 at gmail.com> wrote:
> >>>>>>
> >>>>>>> Hi all:
> >>>>>>>
> >>>>>>> I have a replicated volume project2 with info:
> >>>>>>> Volume Name: project2
> >>>>>>> Type: Distributed-Replicate
> >>>>>>> Volume ID: 60175b8e-de0e-4409-81ae-7bb5eb5cacbf
> >>>>>>> Status: Started
> >>>>>>> Snapshot Count: 0
> >>>>>>> Number of Bricks: 84 x 2 = 168
> >>>>>>> Transport-type: tcp
> >>>>>>> Bricks:
> >>>>>>> Brick1: node20:/data2/bricks/project2
> >>>>>>> Brick2: node21:/data2/bricks/project2
> >>>>>>> Brick3: node22:/data2/bricks/project2
> >>>>>>> Brick4: node23:/data2/bricks/project2
> >>>>>>> Brick5: node24:/data2/bricks/project2
> >>>>>>> Brick6: node25:/data2/bricks/project2
> >>>>>>> Brick7: node26:/data2/bricks/project2
> >>>>>>> Brick8: node27:/data2/bricks/project2
> >>>>>>> Brick9: node28:/data2/bricks/project2
> >>>>>>> Brick10: node29:/data2/bricks/project2
> >>>>>>> Brick11: node30:/data2/bricks/project2
> >>>>>>> Brick12: node31:/data2/bricks/project2
> >>>>>>> Brick13: node32:/data2/bricks/project2
> >>>>>>> Brick14: node33:/data2/bricks/project2
> >>>>>>> Brick15: node20:/data3/bricks/project2
> >>>>>>> Brick16: node21:/data3/bricks/project2
> >>>>>>> Brick17: node22:/data3/bricks/project2
> >>>>>>> Brick18: node23:/data3/bricks/project2
> >>>>>>> Brick19: node24:/data3/bricks/project2
> >>>>>>> Brick20: node25:/data3/bricks/project2
> >>>>>>> Brick21: node26:/data3/bricks/project2
> >>>>>>> Brick22: node27:/data3/bricks/project2
> >>>>>>> Brick23: node28:/data3/bricks/project2
> >>>>>>> Brick24: node29:/data3/bricks/project2
> >>>>>>> Brick25: node30:/data3/bricks/project2
> >>>>>>> Brick26: node31:/data3/bricks/project2
> >>>>>>> Brick27: node32:/data3/bricks/project2
> >>>>>>> Brick28: node33:/data3/bricks/project2
> >>>>>>> Brick29: node20:/data4/bricks/project2
> >>>>>>> Brick30: node21:/data4/bricks/project2
> >>>>>>> Brick31: node22:/data4/bricks/project2
> >>>>>>> Brick32: node23:/data4/bricks/project2
> >>>>>>> Brick33: node24:/data4/bricks/project2
> >>>>>>> Brick34: node25:/data4/bricks/project2
> >>>>>>> Brick35: node26:/data4/bricks/project2
> >>>>>>> Brick36: node27:/data4/bricks/project2
> >>>>>>> Brick37: node28:/data4/bricks/project2
> >>>>>>> Brick38: node29:/data4/bricks/project2
> >>>>>>> Brick39: node30:/data4/bricks/project2
> >>>>>>> Brick40: node31:/data4/bricks/project2
> >>>>>>> Brick41: node32:/data4/bricks/project2
> >>>>>>> Brick42: node33:/data4/bricks/project2
> >>>>>>> Brick43: node20:/data5/bricks/project2
> >>>>>>> Brick44: node21:/data5/bricks/project2
> >>>>>>> Brick45: node22:/data5/bricks/project2
> >>>>>>> Brick46: node23:/data5/bricks/project2
> >>>>>>> Brick47: node24:/data5/bricks/project2
> >>>>>>> Brick48: node25:/data5/bricks/project2
> >>>>>>> Brick49: node26:/data5/bricks/project2
> >>>>>>> Brick50: node27:/data5/bricks/project2
> >>>>>>> Brick51: node28:/data5/bricks/project2
> >>>>>>> Brick52: node29:/data5/bricks/project2
> >>>>>>> Brick53: node30:/data5/bricks/project2
> >>>>>>> Brick54: node31:/data5/bricks/project2
> >>>>>>> Brick55: node32:/data5/bricks/project2
> >>>>>>> Brick56: node33:/data5/bricks/project2
> >>>>>>> Brick57: node20:/data6/bricks/project2
> >>>>>>> Brick58: node21:/data6/bricks/project2
> >>>>>>> Brick59: node22:/data6/bricks/project2
> >>>>>>> Brick60: node23:/data6/bricks/project2
> >>>>>>> Brick61: node24:/data6/bricks/project2
> >>>>>>> Brick62: node25:/data6/bricks/project2
> >>>>>>> Brick63: node26:/data6/bricks/project2
> >>>>>>> Brick64: node27:/data6/bricks/project2
> >>>>>>> Brick65: node28:/data6/bricks/project2
> >>>>>>> Brick66: node29:/data6/bricks/project2
> >>>>>>> Brick67: node30:/data6/bricks/project2
> >>>>>>> Brick68: node31:/data6/bricks/project2
> >>>>>>> Brick69: node32:/data6/bricks/project2
> >>>>>>> Brick70: node33:/data6/bricks/project2
> >>>>>>> Brick71: node20:/data7/bricks/project2
> >>>>>>> Brick72: node21:/data7/bricks/project2
> >>>>>>> Brick73: node22:/data7/bricks/project2
> >>>>>>> Brick74: node23:/data7/bricks/project2
> >>>>>>> Brick75: node24:/data7/bricks/project2
> >>>>>>> Brick76: node25:/data7/bricks/project2
> >>>>>>> Brick77: node26:/data7/bricks/project2
> >>>>>>> Brick78: node27:/data7/bricks/project2
> >>>>>>> Brick79: node28:/data7/bricks/project2
> >>>>>>> Brick80: node29:/data7/bricks/project2
> >>>>>>> Brick81: node30:/data7/bricks/project2
> >>>>>>> Brick82: node31:/data7/bricks/project2
> >>>>>>> Brick83: node32:/data7/bricks/project2
> >>>>>>> Brick84: node33:/data7/bricks/project2
> >>>>>>> Brick85: node20:/data8/bricks/project2
> >>>>>>> Brick86: node21:/data8/bricks/project2
> >>>>>>> Brick87: node22:/data8/bricks/project2
> >>>>>>> Brick88: node23:/data8/bricks/project2
> >>>>>>> Brick89: node24:/data8/bricks/project2
> >>>>>>> Brick90: node25:/data8/bricks/project2
> >>>>>>> Brick91: node26:/data8/bricks/project2
> >>>>>>> Brick92: node27:/data8/bricks/project2
> >>>>>>> Brick93: node28:/data8/bricks/project2
> >>>>>>> Brick94: node29:/data8/bricks/project2
> >>>>>>> Brick95: node30:/data8/bricks/project2
> >>>>>>> Brick96: node31:/data8/bricks/project2
> >>>>>>> Brick97: node32:/data8/bricks/project2
> >>>>>>> Brick98: node33:/data8/bricks/project2
> >>>>>>> Brick99: node20:/data9/bricks/project2
> >>>>>>> Brick100: node21:/data9/bricks/project2
> >>>>>>> Brick101: node22:/data9/bricks/project2
> >>>>>>> Brick102: node23:/data9/bricks/project2
> >>>>>>> Brick103: node24:/data9/bricks/project2
> >>>>>>> Brick104: node25:/data9/bricks/project2
> >>>>>>> Brick105: node26:/data9/bricks/project2
> >>>>>>> Brick106: node27:/data9/bricks/project2
> >>>>>>> Brick107: node28:/data9/bricks/project2
> >>>>>>> Brick108: node29:/data9/bricks/project2
> >>>>>>> Brick109: node30:/data9/bricks/project2
> >>>>>>> Brick110: node31:/data9/bricks/project2
> >>>>>>> Brick111: node32:/data9/bricks/project2
> >>>>>>> Brick112: node33:/data9/bricks/project2
> >>>>>>> Brick113: node20:/data10/bricks/project2
> >>>>>>> Brick114: node21:/data10/bricks/project2
> >>>>>>> Brick115: node22:/data10/bricks/project2
> >>>>>>> Brick116: node23:/data10/bricks/project2
> >>>>>>> Brick117: node24:/data10/bricks/project2
> >>>>>>> Brick118: node25:/data10/bricks/project2
> >>>>>>> Brick119: node26:/data10/bricks/project2
> >>>>>>> Brick120: node27:/data10/bricks/project2
> >>>>>>> Brick121: node28:/data10/bricks/project2
> >>>>>>> Brick122: node29:/data10/bricks/project2
> >>>>>>> Brick123: node30:/data10/bricks/project2
> >>>>>>> Brick124: node31:/data10/bricks/project2
> >>>>>>> Brick125: node32:/data10/bricks/project2
> >>>>>>> Brick126: node33:/data10/bricks/project2
> >>>>>>> Brick127: node20:/data11/bricks/project2
> >>>>>>> Brick128: node21:/data11/bricks/project2
> >>>>>>> Brick129: node22:/data11/bricks/project2
> >>>>>>> Brick130: node23:/data11/bricks/project2
> >>>>>>> Brick131: node24:/data11/bricks/project2
> >>>>>>> Brick132: node25:/data11/bricks/project2
> >>>>>>> Brick133: node26:/data11/bricks/project2
> >>>>>>> Brick134: node27:/data11/bricks/project2
> >>>>>>> Brick135: node28:/data11/bricks/project2
> >>>>>>> Brick136: node29:/data11/bricks/project2
> >>>>>>> Brick137: node30:/data11/bricks/project2
> >>>>>>> Brick138: node31:/data11/bricks/project2
> >>>>>>> Brick139: node32:/data11/bricks/project2
> >>>>>>> Brick140: node33:/data11/bricks/project2
> >>>>>>> Brick141: node20:/data12/bricks/project2
> >>>>>>> Brick142: node21:/data12/bricks/project2
> >>>>>>> Brick143: node22:/data12/bricks/project2
> >>>>>>> Brick144: node23:/data12/bricks/project2
> >>>>>>> Brick145: node24:/data12/bricks/project2
> >>>>>>> Brick146: node25:/data12/bricks/project2
> >>>>>>> Brick147: node26:/data12/bricks/project2
> >>>>>>> Brick148: node27:/data12/bricks/project2
> >>>>>>> Brick149: node28:/data12/bricks/project2
> >>>>>>> Brick150: node29:/data12/bricks/project2
> >>>>>>> Brick151: node30:/data12/bricks/project2
> >>>>>>> Brick152: node31:/data12/bricks/project2
> >>>>>>> Brick153: node32:/data12/bricks/project2
> >>>>>>> Brick154: node33:/data12/bricks/project2
> >>>>>>> Brick155: node20:/data13/bricks/project2
> >>>>>>> Brick156: node21:/data13/bricks/project2
> >>>>>>> Brick157: node22:/data13/bricks/project2
> >>>>>>> Brick158: node23:/data13/bricks/project2
> >>>>>>> Brick159: node24:/data13/bricks/project2
> >>>>>>> Brick160: node25:/data13/bricks/project2
> >>>>>>> Brick161: node26:/data13/bricks/project2
> >>>>>>> Brick162: node27:/data13/bricks/project2
> >>>>>>> Brick163: node28:/data13/bricks/project2
> >>>>>>> Brick164: node29:/data13/bricks/project2
> >>>>>>> Brick165: node30:/data13/bricks/project2
> >>>>>>> Brick166: node31:/data13/bricks/project2
> >>>>>>> Brick167: node32:/data13/bricks/project2
> >>>>>>> Brick168: node33:/data13/bricks/project2
> >>>>>>> Options Reconfigured:
> >>>>>>> performance.force-readdirp: on
> >>>>>>> performance.write-behind: off
> >>>>>>> performance.stat-prefetch: on
> >>>>>>> performance.client-io-threads: on
> >>>>>>> nfs.disable: on
> >>>>>>> transport.address-family: inet
> >>>>>>> features.quota: on
> >>>>>>> features.inode-quota: on
> >>>>>>> features.quota-deem-statfs: on
> >>>>>>> cluster.readdir-optimize: on
> >>>>>>> cluster.lookup-optimize: on
> >>>>>>> dht.force-readdirp: off
> >>>>>>> client.event-threads: 10
> >>>>>>> server.event-threads: 10
> >>>>>>> performance.readdir-ahead: on
> >>>>>>> performance.io-cache: on
> >>>>>>> performance.flush-behind: on
> >>>>>>> performance.cache-size: 5GB
> >>>>>>> performance.cache-max-file-size: 1MB
> >>>>>>> performance.write-behind-window-size: 10MB
> >>>>>>> performance.read-ahead: off
> >>>>>>> network.remote-dio: enable
> >>>>>>> performance.strict-o-direct: disable
> >>>>>>> performance.io-thread-count: 25
> >>>>>>>
> >>>>>>>
> >>>>>>> the volume looks ok, and I mount this volume on my client machine:
> >>>>>>> mount -t glusterfs -o oom-score-adj=-999 -o direct-io-mode=disable
> >>>>>>> -o use-readdirp=no node20:/project2 /mnt/project2
> >>>>>>>
> >>>>>>> I have a directory in /mnt/project2/, but when I mv that directory
> >>>>>>> into another directory, files inside it are lost: tree and ls show
> >>>>>>> some files missing. My operations are listed below:
> >>>>>>>
> >>>>>>
> >>>>>> Looks very similar to:
> >>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1118762
> >>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1337394
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ mkdir test-dir
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree
> >>>>>>> .
> >>>>>>> ├── face_landmarks
> >>>>>>> │   └── alive
> >>>>>>> │       └── logs_20180823_28
> >>>>>>> │           ├── info_000000.out
> >>>>>>> │           ├── info_000001.out
> >>>>>>> │           ├── info_000002.out
> >>>>>>> │           ├── info_000003.out
> >>>>>>> │           ├── info_000004.out
> >>>>>>> │           ├── info_000005.out
> >>>>>>> │           ├── info_000006.out
> >>>>>>> │           ├── info_000007.out
> >>>>>>> │           ├── info_000008.out
> >>>>>>> │           ├── info_000009.out
> >>>>>>> │           ├── info_000010.out
> >>>>>>> │           ├── info_000011.out
> >>>>>>> │           ├── info_000012.out
> >>>>>>> │           ├── info_000013.out
> >>>>>>> │           ├── info_000014.out
> >>>>>>> │           ├── info_000015.out
> >>>>>>> │           ├── info_000016.out
> >>>>>>> │           ├── info_000017.out
> >>>>>>> │           ├── info_000018.out
> >>>>>>> │           └── info_000019.out
> >>>>>>> └── test-dir
> >>>>>>>
> >>>>>>> 4 directories, 20 files
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ mv face_landmarks/ test-dir/
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree
> >>>>>>> .
> >>>>>>> └── test-dir
> >>>>>>>     └── face_landmarks
> >>>>>>>
> >>>>>>> 2 directories, 0 files
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ cd test-dir/face_landmarks/
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir/face_landmarks$ ls
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir/face_landmarks$ cd ..
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir$ mv face_landmarks/ ..
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829/test-dir$ cd ..
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$ tree
> >>>>>>> .
> >>>>>>> ├── face_landmarks
> >>>>>>> │   └── alive
> >>>>>>> └── test-dir
> >>>>>>>
> >>>>>>> 3 directories, 0 files
> >>>>>>> root at ml-gpu-ser129.nmg01:/mnt/project2/371_37829$
> >>>>>>>
> >>>>>>> I think I may have made some mistake with the volume options, but I am
> >>>>>>> not sure. How can I find the lost files? The files seem to still be in
> >>>>>>> the directory, because I can't remove the directory and rm tells me
> >>>>>>> "Not empty directory".
> >>>>>>>
> >>>>>>
> >>>>>> It's likely that the src and dst of the mv have the same gfid, and that
> >>>>>> is causing the issue. Can you look into both the src and dst paths on
> >>>>>> all bricks? The union of the contents of both directories should give
> >>>>>> all the files that were in the src directory before the mv. Once found,
> >>>>>> you can:
> >>>>>> * keep a backup of contents of src and dst on all bricks
> >>>>>> * remove trusted.gfid xattr on src and dst from all bricks
> >>>>>> * remove gfid handle (.glusterfs/<first two characters of
> >>>>>> gfid>/<second set of two characters of gfid>/<gfid> on each brick)
> >>>>>> * disable readdirplus in entire stack (maybe you can use a tmp mount
> >>>>>> for this) [1]
> >>>>>> * stat src and dst on a mount point with readdirplus disabled.
> >>>>>> * Now you'll see two directories src and dst on mountpoint. You  can
> >>>>>> copy the contents of both into a new directory
> >>>>>>
> >>>>>> [1] https://lists.gluster.org/pipermail/gluster-users/2017-March
> >>>>>> /030148.html
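> >>>>>>
> >>>>>> As a rough sketch of what the xattr/handle steps above could look like
> >>>>>> on one brick (the gfid value below is made up; repeat for src and dst on
> >>>>>> every brick, after taking backups):
> >>>>>>
> >>>>>>   # inspect the directory's gfid on the brick
> >>>>>>   getfattr -d -m . -e hex /data2/bricks/project2/371_37829/face_landmarks
> >>>>>>   # suppose it reports trusted.gfid=0xd1f3a8c2...; remove the xattr and the handle
> >>>>>>   setfattr -x trusted.gfid /data2/bricks/project2/371_37829/face_landmarks
> >>>>>>   rm /data2/bricks/project2/.glusterfs/d1/f3/d1f3a8c2-xxxx-xxxx-xxxx-xxxxxxxxxxxx
> >>>>>>   # temporary mount with readdirplus off at the fuse layer (see [1] for the
> >>>>>>   # volume options needed to disable it in the whole stack), then stat both paths
> >>>>>>   mount -t glusterfs -o use-readdirp=no node20:/project2 /mnt/recover
> >>>>>>   stat /mnt/recover/371_37829/face_landmarks /mnt/recover/371_37829/test-dir/face_landmarks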
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> Any suggestions are appreciated.
> >>>>>>> Many Thanks
> >>>>>>>
> >>>>>>> Best regards
> >>>>>>> Yu
> >>>>>>>