<div dir="ltr"><div>nufa helps you write to local brick, if replication is involved it will still copy it to other bricks (or suppose to do so)</div><div>what might be happening is that when initial file was created other nodes were down and it didn't replicate properly and now heal is failing</div><div>check your <br></div><div>gluster vol heal Volname info</div><div><br></div><div>maybe you will find out where second copy of the file suppose to be - and just copy it to that brick<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Sun, Oct 28, 2018 at 6:07 PM Ingo Fischer <<a href="mailto:nl@fischer-ka.de">nl@fischer-ka.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi All,<br>
<br>
Does nobody have an idea on system.affinity/distribute.migrate-data?<br>
Or how to correctly enable NUFA?<br>
<br>
BTW: the Gluster version in use is 4.1.5.<br>
<br>
Thank you for your help on this!<br>
<br>
Ingo<br>
<br>
On 24.10.18 at 12:54, Ingo Fischer wrote:<br>
> Hi,<br>
> <br>
> I have set up a GlusterFS volume gv0 as distributed/replicated:<br>
> <br>
> root@pm1:~# gluster volume info gv0<br>
> <br>
> Volume Name: gv0<br>
> Type: Distributed-Replicate<br>
> Volume ID: 64651501-6df2-4106-b330-fdb3e1fbcdf4<br>
> Status: Started<br>
> Snapshot Count: 0<br>
> Number of Bricks: 3 x 2 = 6<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: 192.168.178.50:/gluster/brick1/gv0<br>
> Brick2: 192.168.178.76:/gluster/brick1/gv0<br>
> Brick3: 192.168.178.50:/gluster/brick2/gv0<br>
> Brick4: 192.168.178.81:/gluster/brick1/gv0<br>
> Brick5: 192.168.178.50:/gluster/brick3/gv0<br>
> Brick6: 192.168.178.82:/gluster/brick1/gv0<br>
> Options Reconfigured:<br>
> performance.client-io-threads: off<br>
> nfs.disable: on<br>
> transport.address-family: inet<br>
> <br>
> <br>
> root@pm1:~# gluster volume status<br>
> Status of volume: gv0<br>
> Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick 192.168.178.50:/gluster/brick1/gv0    49152     0          Y       1665<br>
> Brick 192.168.178.76:/gluster/brick1/gv0    49152     0          Y       26343<br>
> Brick 192.168.178.50:/gluster/brick2/gv0    49153     0          Y       1666<br>
> Brick 192.168.178.81:/gluster/brick1/gv0    49152     0          Y       1161<br>
> Brick 192.168.178.50:/gluster/brick3/gv0    49154     0          Y       1679<br>
> Brick 192.168.178.82:/gluster/brick1/gv0    49152     0          Y       1334<br>
> Self-heal Daemon on localhost               N/A       N/A        Y       5022<br>
> Self-heal Daemon on 192.168.178.81          N/A       N/A        Y       935<br>
> Self-heal Daemon on 192.168.178.82          N/A       N/A        Y       1057<br>
> Self-heal Daemon on pm2.fritz.box           N/A       N/A        Y       1651<br>
> <br>
> <br>
> I use the filesystem to store VM images, so not many files, but big ones.<br>
> <br>
> The distribution has now put four big files on one brick set and only one<br>
> file on another. This means that this brick set is "overcommitted" as soon<br>
> as all VMs use their maximum space. So I would like to manually<br>
> redistribute the files a bit better.<br>
> <br>
> After a lot of googling I found that the following should work:<br>
> setfattr -n 'system.affinity' -v $location $filepath<br>
> setfattr -n 'distribute.migrate-data' -v 'force' $filepath<br>
> <br>
> But I have problems with it, because it either gives errors or does nothing at all.<br>
> <br>
> The mount looks like this:<br>
> 192.168.178.50:gv0 on /mnt/pve/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)<br>
> <br>
> <br>
> Here is what I tried for the first xattr:<br>
> <br>
> root@pm1:~# setfattr -n 'system.affinity' -v 'gv0-client-5' /mnt/pve/glusterfs/201/imagesvm201.qcow2<br>
> setfattr: /mnt/pve/glusterfs/201/imagesvm201.qcow2: Operation not supported<br>
> <br>
> So I found on Google that I should use trusted.affinity instead, and yes, this works.<br>
> I'm just not sure whether the location "gv0-client-5" is correct to move the<br>
> file to "Brick 5" from "gluster volume info gv0" ... or how this<br>
> location is built?<br>
> The commit message from <a href="http://review.gluster.org/#/c/glusterfs/+/5233/" rel="noreferrer" target="_blank">http://review.gluster.org/#/c/glusterfs/+/5233/</a> says:<br>
>> The value is the internal client or AFR brick name where you want the<br>
> file to be.<br>
> <br>
> So what do I need to set there? Maybe I need the AFR name because the volume<br>
> is replicated? But where do I get that name from?<br>
> I also tried entering other client or replicate names like<br>
> "gv0-replicate-0", which seems more fitting for a<br>
> replicated volume, but the result is the same.<br>
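> <br>
> In case it matters, this is how I assume the internal subvolume names can be<br>
> checked - the volfile path below is my guess at the default location, and the<br>
> pathinfo xattr should show on which bricks the file currently lives:<br>
> <br>
> # list the client/replicate subvolume names from the generated fuse volfile<br>
> grep '^volume' /var/lib/glusterd/vols/gv0/trusted-gv0.tcp-fuse.vol<br>
> # show which bricks currently hold the file<br>
> getfattr -n trusted.glusterfs.pathinfo /mnt/pve/glusterfs/201/imagesvm201.qcow2<br>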
> <br>
> <br>
> For the second command I get:<br>
> root@pm1:~# setfattr -n 'distribute.migrate-data' -v 'force' /mnt/pve/glusterfs/201/imagesvm201.qcow2<br>
> setfattr: /mnt/pve/glusterfs/images/201/vm-201-disk-0.qcow2: Operation not supported<br>
> root@pm1:~# setfattr -n 'trusted.distribute.migrate-data' -v 'force' /mnt/pve/glusterfs/201/imagesvm201.qcow2<br>
> setfattr: /mnt/pve/glusterfs/images/201/vm-201-disk-0.qcow2: File exists<br>
> <br>
> I also experimented with other "names" than "gv0-client-5" above, but the<br>
> result was always the same.<br>
> <br>
> I saw that instead of the second command I could start a rebalance with<br>
> force, but this also did nothing. It ended after at most one second and moved nothing.<br>
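> <br>
> For reference, this is roughly what I ran; I guess the rebalance log might<br>
> show why nothing was moved (assuming the usual log file name):<br>
> <br>
> gluster volume rebalance gv0 start force<br>
> gluster volume rebalance gv0 status<br>
> # rebalance log of the volume on this node<br>
> less /var/log/glusterfs/gv0-rebalance.log<br>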
> <br>
> Can someone please advise how to do this right?<br>
> <br>
> <br>
> Another idea was to enable NUFA and kind of "re-copy" the files on the<br>
> GlusterFS, but here it seems that the documentation is wrong:<br>
> gluster volume set gv0 cluster.nufa enable on<br>
> <br>
> Is<br>
> <br>
> gluster volume set gv0 cluster.nufa 1<br>
> <br>
> correct?<br>
> <br>
> Thank you very much!<br>
> <br>
> Ingo<br>
> <br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>