[Gluster-users] Double counting of quota
Alessandro De Salvo
Alessandro.DeSalvo at roma1.infn.it
Sat Jun 6 00:29:26 UTC 2015
Hi,
just to answer my own question: the temporary files created by rsync really seem to be the culprit. Their size is added to the accounted usage of the directories I'm synchronizing, or in other words their size is not subtracted from the used size after they are removed. I suppose this is somehow connected to the removexattr errors I'm seeing. The temporary workaround I've found is to tell rsync to write its temp files to /tmp, but it would be very interesting to understand why this is happening.
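For the record, this is the kind of invocation I mean (source and destination paths here are just examples):

# rsync -av --temp-dir=/tmp /some/source/ /storage/atlas/home/user1/

With --temp-dir (-T) the temporary files never touch the gluster mount, so the quota marker never has to account for them.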
Cheers,
Alessandro
> On 06 Jun 2015, at 01:19, Alessandro De Salvo <Alessandro.DeSalvo at roma1.infn.it> wrote:
>
> Hi,
> I currently have two bricks in a replica 2 volume on the same machine, pointing to different disks of a connected SAN.
> The volume itself is fine:
>
> # gluster volume info atlas-home-01
>
> Volume Name: atlas-home-01
> Type: Replicate
> Volume ID: 660db960-31b8-4341-b917-e8b43070148b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: host1:/bricks/atlas/home02/data
> Brick2: host2:/bricks/atlas/home01/data
> Options Reconfigured:
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 32
> performance.readdir-ahead: on
> server.allow-insecure: on
> nfs.disable: true
> features.quota: on
> features.inode-quota: on
>
>
> However, when I set a quota on a directory of the volume, the size shown is twice the physical size of the actual directory:
>
> # gluster volume quota atlas-home-01 list /user1
>                   Path                   Hard-limit Soft-limit   Used   Available  Soft-limit exceeded? Hard-limit exceeded?
> ---------------------------------------------------------------------------------------------------------------------------
> /user1                                     4.0GB       80%       3.2GB    853.4MB            No                   No
>
> # du -sh /storage/atlas/home/user1
> 1.6G /storage/atlas/home/user1
>
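> It may also be worth comparing the sizes directly on the backend; something like the following, run on the brick host(s), with the brick paths taken from the volume info above:
>
> # du -sh /bricks/atlas/home02/data/user1 /bricks/atlas/home01/data/user1
>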
> If I remove one of the bricks, the quota shows the correct value.
> Is there any double counting when both bricks are on the same machine?
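> A possible way to inspect the quota accounting directly would be to dump the quota xattrs of the directory on each brick (assuming the usual trusted.glusterfs.quota.* attribute names):
>
> # getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/atlas/home02/data/user1
> # getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/atlas/home01/data/user1
>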
> Also, I see a lot of errors in the logs like the following:
>
> [2015-06-05 21:59:27.450407] E [posix-handle.c:157:posix_make_ancestryfromgfid] 0-atlas-home-01-posix: could not read the link from the gfid handle /bricks/atlas/home01/data/.glusterfs/be/e5/bee5e2b8-c639-4539-a483-96c19cd889eb (No such file or directory)
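> The gfid handle mentioned there can be checked directly on the brick, e.g.:
>
> # ls -l /bricks/atlas/home01/data/.glusterfs/be/e5/bee5e2b8-c639-4539-a483-96c19cd889eb
>
> to see whether the handle is really missing, as the ENOENT in the log suggests.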
>
> and also
>
> [2015-06-05 22:52:01.112070] E [marker-quota.c:2363:mq_mark_dirty] 0-atlas-home-01-marker: failed to get inode ctx for /user1/file1
>
> When running rsync I also see the following errors:
>
> [2015-06-05 23:06:22.203968] E [marker-quota.c:2601:mq_remove_contri] 0-atlas-home-01-marker: removexattr trusted.glusterfs.quota.fddf31ba-7f1d-4ba8-a5ad-2ebd6e4030f3.contri failed for /user1/..bashrc.O4kekp: No data available
>
> Those files are rsync's temporary files; I'm not sure why they trigger errors in glusterfs.
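> As a crude attempt to mimic rsync's write-then-rename-over behavior (file names here are just examples, not necessarily what rsync does internally), one could try:
>
> # dd if=/dev/zero of=/storage/atlas/home/user1/..bashrc.test bs=1M count=1
> # mv /storage/atlas/home/user1/..bashrc.test /storage/atlas/home/user1/.bashrc
>
> and then check whether the marker logs the same removexattr error and whether the quota Used value drifts.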
> Any help?
> Thanks,
>
> Alessandro
>
>