[Gluster-users] about tail command

Anuradha Talur atalur at redhat.com
Thu Mar 3 07:14:54 UTC 2016



----- Original Message -----
> From: "Anuradha Talur" <atalur at redhat.com>
> To: "songxin" <songxin_1980 at 126.com>
> Cc: "gluster-user" <gluster-users at gluster.org>
> Sent: Thursday, March 3, 2016 12:31:41 PM
> Subject: Re: [Gluster-users] about tail command
> 
> 
> 
> ----- Original Message -----
> > From: "songxin" <songxin_1980 at 126.com>
> > To: "Anuradha Talur" <atalur at redhat.com>
> > Cc: "gluster-user" <gluster-users at gluster.org>
> > Sent: Wednesday, March 2, 2016 4:09:01 PM
> > Subject: Re:Re: [Gluster-users] about tail command
> > 
> > 
> > 
> > Thank you for your reply. I have two more questions below.
> > 
> > 
> > 1. Is the command "gluster v replace-brick" async or sync? Is the replace
> > complete when the command returns?
> It is a sync command; replacing the brick finishes when the command returns.
> 
> In one of the earlier mails I gave an incomplete command for replace-brick;
> sorry about that.
> The only replace-brick operation allowed from glusterfs 3.7.9 onwards is
> 'gluster v replace-brick <volname> <hostname:src_brick> <hostname:dst_brick>
> commit force'.
Sorry for spamming, but there is a typo here: I meant glusterfs 3.7.0 onwards,
not 3.7.9.
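
For example, assuming the gv0 volume from this thread and a fresh brick
directory on node B (paths and IPs are illustrative; the destination brick
must differ from the source), the full command would look like this:

    # move the replica from the old brick path to a new, empty one;
    # the command returns only after the replacement is committed
    gluster volume replace-brick gv0 128.224.162.255:/data/brick/gv0 \
        128.224.162.255:/data/brick/gv0_new commit force
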
> > 2. If I run "tail -n 0" on the mount point, does it trigger the heal?
> > 
> > 
> > Thanks,
> > Xin
> > 
> > At 2016-03-02 15:22:35, "Anuradha Talur" <atalur at redhat.com> wrote:
> > >
> > >
> > >----- Original Message -----
> > >> From: "songxin" <songxin_1980 at 126.com>
> > >> To: "gluster-user" <gluster-users at gluster.org>
> > >> Sent: Tuesday, March 1, 2016 7:19:23 PM
> > >> Subject: [Gluster-users] about tail command
> > >> 
> > >> Hi,
> > >> 
> > >> precondition:
> > >> A node:128.224.95.140
> > >> B node:128.224.162.255
> > >> 
> > >> brick on A node:/data/brick/gv0
> > >> brick on B node:/data/brick/gv0
> > >> 
> > >> 
> > >> reproduce steps (collected as a script below):
> > >> 1. gluster peer probe 128.224.162.255 (on A node)
> > >> 2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0 force (on A node)
> > >> 3. gluster volume start gv0 (on A node)
> > >> 4. mount -t glusterfs 128.224.95.140:/gv0 gluster (on A node)
> > >> 5. create some files (a,b,c) in dir gluster (on A node)
> > >> 6. shut down the B node
> > >> 7. change the files (a,b,c) in dir gluster (on A node)
> > >> 8. reboot the B node
> > >> 9. start glusterd on the B node; glusterfsd stays offline (on B node)
> > >> 10. gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv0 force (on A node)
> > >> 11. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force (on A node)
> > >> 
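> > >> For reference, steps 1-5 amount to this shell sequence on node A (the
> > >> mount directory "gluster" is created in the current working directory):
> > >> 
> > >>     gluster peer probe 128.224.162.255
> > >>     gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0 force
> > >>     gluster volume start gv0
> > >>     mkdir gluster && mount -t glusterfs 128.224.95.140:/gv0 gluster
> > >>     echo x > gluster/a; echo x > gluster/b; echo x > gluster/c
> > >> 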
> > >> Now the files are not the same on the two bricks.
> > >> 
> > >> 12. "gluster volume heal gv0 info" shows 0 entries (on A node)
> > >> 
> > >> Now what should I do if I want to sync the files (a,b,c) across the two bricks?
> > >> 
> > >Currently, once you add a brick to a cluster, files won't sync
> > >automatically. A patch has been sent to handle this requirement;
> > >auto-heal will be available soon.
> > >
> > >You could kill the newly added brick and perform the following operations
> > >from the mount point for the sync to start:
> > >1) create a directory <dirname>
> > >2) setfattr -n "user.dirname" -v "value" <dirname>
> > >3) delete the directory <dirname>
> > >
> > >Once these steps are done, start the killed brick; the self-heal daemon
> > >will then heal the files.
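> > >
> > >A minimal sketch of that sequence for the gv0 volume in your mail (the
> > >directory and xattr names are arbitrary placeholders; run the mkdir,
> > >setfattr, and rmdir steps from the glusterfs mount point):
> > >
> > >    # on node B: kill the newly added brick's glusterfsd process
> > >    # (its PID is listed in `gluster volume status gv0`)
> > >    kill <brick-pid>
> > >
> > >    # from the mount point on node A:
> > >    mkdir healtrigger
> > >    setfattr -n user.healtrigger -v heal healtrigger
> > >    rmdir healtrigger
> > >
> > >    # restart the killed brick; `start force` respawns offline bricks
> > >    gluster volume start gv0 force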
> > >
> > >But, for the case you have mentioned, why are you removing the brick and
> > >then using add-brick again? Is it because you don't want to change the
> > >brick path?
> > >
> > >You could use the "replace-brick" command:
> > >gluster v replace-brick <volname> <hostname:old-brick-path> <hostname:new-brick-path>
> > >Note that source and destination should be different for this command to
> > >work.
> > >
> > >> I know that "heal full" can work, but I think that command takes too
> > >> long.
> > >> 
> > >> So I ran "tail -n 1 file" on every file on the A node, but some files
> > >> were synced and some were not.
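> > >> 
> > >> (Roughly like this, run from the mount point; the path is illustrative:
> > >> 
> > >>     find ./gluster -type f -exec tail -n 1 {} \; > /dev/null
> > >> )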
> > >> 
> > >> My questions are below:
> > >> 1. Why can't tail sync all the files?
> > >Did you run the tail command on the mount point or from the backend
> > >(bricks)? If you ran it from the bricks, sync won't happen. Was client-side
> > >healing on? To check whether it was on or off, run `gluster v get <volname>
> > >all | grep self-heal`; cluster.metadata-self-heal, cluster.data-self-heal,
> > >and cluster.entry-self-heal should all be on.
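> > >
> > >For example, with the gv0 volume from this thread:
> > >
> > >    # inspect the current settings
> > >    gluster volume get gv0 all | grep self-heal
> > >
> > >    # enable client-side healing if any of the three are off
> > >    gluster volume set gv0 cluster.metadata-self-heal on
> > >    gluster volume set gv0 cluster.data-self-heal on
> > >    gluster volume set gv0 cluster.entry-self-heal on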
> > >
> > >> 2. Can the command "tail -n 1 filename" trigger self-heal, just like
> > >> "ls -l filename" does?
> > >> 
> > >> Thanks,
> > >> Xin
> > >> 
> > >
> > >--
> > >Thanks,
> > >Anuradha.
> > 
> 
> --
> Thanks,
> Anuradha.
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 

-- 
Thanks,
Anuradha.

