[Gluster-users] Sync data

shwetha spandura at redhat.com
Tue Nov 26 03:18:32 UTC 2013


1. Option: cluster.read-subvolume
Default Value: (null)
Description: inode-read fops happen only on one of the bricks in 
replicate. Afr will prefer the one specified using this option if it is 
not stale. Option value must be one of the xlator names of the children. 
Ex: <volname>-client-0 till <volname>-client-<number-of-bricks - 1>

For information on any volume option, execute "gluster volume set help".
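
As a concrete illustration of the value format: for a two-brick replica
volume named gv0 (as in the steps quoted below), the children are
gv0-client-0 and gv0-client-1, so pinning reads to the first brick would
look like:

    # Prefer the first brick (node1) for all inode-read operations
    gluster volume set gv0 cluster.read-subvolume gv0-client-0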

2. " find . " will get fetch all the directories entries of the mount. 
When we execute stat on each of file and directory it gets the stat 
structure of the file.
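
A sketch of such a crawl, using the mount point /mnt/gv0 from the steps
quoted below (the -print0/-0 flags are only a safer variant for file
names containing spaces, not part of the original steps):

    # Walk every entry on the mount and stat it; the lookup on each
    # entry is what triggers GFID assignment and self-heal
    cd /mnt/gv0
    find . -print0 | xargs -0 stat > /dev/null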

In your case we set "cluster.read-subvolume" to node-1 because node-1
has all the data and the reads have to happen from node-1, so that 1) a
GFID is assigned to each file/directory (since your files/directories
were created from the backend and don't have any GFIDs yet) and 2) when
stat is performed on a file/directory, it is self-healed to node-2.
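
If you want to verify that a GFID has been assigned, you can inspect the
trusted.gfid extended attribute directly on the brick. A sketch, assuming
the brick path /data from your volume definition and a hypothetical file
name (run as root on node-1, since trusted.* xattrs need it):

    # Query the GFID xattr on the backend brick, not on the mount
    getfattr -n trusted.gfid -e hex /data/somefile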

While "cluster.read-subvolume" is set, all reads will go to node-1.
Hence the suggestion to reset this option as soon as the self-heal of
the data is complete.
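
To judge when the self-heal has finished before resetting the option,
something like the following should work (a sketch using the volume name
gv0 from the steps quoted below):

    # List entries still pending heal; empty lists per brick mean done
    gluster volume heal gv0 info

    # Then drop the read pinning so reads spread across both nodes
    gluster volume reset gv0 cluster.read-subvolume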

-Shwetha

On 11/25/2013 05:40 PM, Raphael Rabelo wrote:
> Hi guys!
>
> shwetha,
>
> Just to understand your suggestion:
>
> 1. What does "cluster.read-subvolume" mean? I searched for something
> about it, but didn't find anything...
> 2. Why do I need to exec "xargs stat" to force gluster to "read" the file?
>
> My volume has about 1.2 TB of used space, and I can't stop the reads
> because my application writes/reads in real time...
>
> Thanks!
>
>
> 2013/11/22 Raphael Rabelo <rabeloo at gmail.com>
>
>     Thank you!
>
>     I'll try this!
>
>     Best regards.
>
>
>     2013/11/22 shwetha <spandura at redhat.com>
>
>         Hi Raphael,
>
>         Following are the steps to sync data from node1 to node2,
>         assuming node1 has the data.
>
>         1. gluster peer probe node1
>
>         2. gluster volume create gv0 replica 2 node1:/data node2:/data
>
>         3. gluster volume start gv0
>
>         4. gluster volume set gv0 cluster.read-subvolume gv0-client-0
>
>         5. Create a FUSE mount from the client node to the volume:
>         mount -t glusterfs node1:/gv0 /mnt/gv0
>
>         6. From the client node: cd /mnt/gv0 ; find . | xargs stat
>
>         This should self-heal all your data from node1 to node2.
>
>         Once the self-heal is complete, reset the
>         "cluster.read-subvolume" volume option.
>
>         7. gluster volume reset gv0 cluster.read-subvolume
>
>         8. Unmount the mount point and remount before using it again.
>
>         Regards
>         -Shwetha
>
>
>         On 11/21/2013 08:51 PM, Raphael Rabelo wrote:
>>         Hi guys!
>>
>>
>>         I have 2 servers in replicate mode; node 1 has all the data,
>>         and node 2 is empty.
>>         I created a volume (gv0) and started it.
>>
>>         Now, how can I synchronize all the files from node 1 to
>>         node 2?
>>
>>         Steps that I followed:
>>
>>         gluster peer probe node1
>>         gluster volume create gv0 replica 2 node1:/data node2:/data
>>         gluster volume start gv0
>>
>>
>>         thanks!
>>
>>
>
>
>
>

