[Gluster-devel] Volume usage mismatch problem

Manikandan Selvaganesh mselvaga at redhat.com
Tue Feb 2 09:57:39 UTC 2016


Hi,

The size was not updated immediately because the accounting done by the marker
translator happens asynchronously, so it takes some time for the size to be
updated. You also mentioned that the size got updated after an ls command.
Probably the problem is that an fsync (flush) failed at that time, and the size
was therefore not updated properly.
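
As a quick check (a rough sketch, reusing the paths from your mail; the xattr
name is taken from your getfattr output), you can flush the file explicitly
and then re-read the contri xattr on the brick:

root at CLIENT: # dd if=/dev/urandom of=/mnt/500m.2 bs=1048576 count=500 conv=fsync
root at SERVER: # getfattr -e hex \
    -n trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1 \
    /tpool/tvol1/500m.2

If the value still lags behind 500MB even after the explicit fsync, that would
point to the accounting itself rather than a missed flush.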

--
Thanks & Regards,
Manikandan Selvaganesh.

----- Original Message -----
From: "박성식" <mulgo79 at gmail.com>
To: "Manikandan Selvaganesh" <mselvaga at redhat.com>
Sent: Tuesday, February 2, 2016 1:22:54 PM
Subject: Re: [Gluster-devel] Volume usage mismatch problem

Thank you for the answer.

After writing a 500MB file, the volume usage is not updated.

root at CLIENT: # dd if=/dev/urandom of=/mnt/500m.1 bs=1048576 count=500
root at CLIENT: # df -k /mnt
38.38.38.101:/tvol1      104857600  491776 104365824   1% /mnt

root at SERVER: # getfattr -d -m . -e hex /tpool/tvol1/500m.1
getfattr: Removing leading '/' from absolute path names
# file: tpool/tvol1/500m.1
trusted.bit-rot.version=0x020000000000000056af8f500004b5e6
trusted.gfid=0xfb1159d18ff4476bac1b9e2f49ffe348
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000001b510e000000000000000001
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001

0x000000001b510e000000000000000001 <- NOT 500MB (the information is only
updated after an 'ls' command is executed.)
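
(As a rough sanity check, assuming the leading 8 bytes of the contri xattr
encode the contributed size in bytes:

root at CLIENT: # printf '%d\n' 0x1b510e00
458296832
root at CLIENT: # printf '%d\n' 0x1f400000
524288000

i.e. roughly 437MB before the 'ls' and exactly 500MB afterwards.)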

root at CLIENT: # ls -al /mnt
total 512006
drwxr-xr-x  4 root root         9 Feb  2 07:43 .
drwxr-xr-x 22 root root      4096 Jan 28 07:48 ..
-rw-r--r--  1 root root 524288000 Feb  2 07:44 500m.1
drwxr-xr-x  3 root root         6 Feb  2 07:42 .trashcan

root at CLIENT: # df -k /mnt
38.38.38.101:/tvol1      104857600  512000 104345600   1% /mnt

root at SERVER: # getfattr -d -m . -e hex /tpool/tvol1/500m.1
getfattr: Removing leading '/' from absolute path names
# file: tpool/tvol1/500m.1
trusted.bit-rot.version=0x020000000000000056afdf7a000220b0
trusted.gfid=0x1f0868d8f78541f8bca26eb83138c991
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000001f4000000000000000000001
<- 500MB OK
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001

Could this be a compatibility issue with ZFS? (With XFS everything is OK.)

Thanks.

-- 

Sungsik, Park/corazy [박성식, 朴成植]

Software Development Engineer

Email: mulgo79 at gmail.com



On Tue, Feb 2, 2016 at 3:24 PM, Manikandan Selvaganesh <mselvaga at redhat.com>
wrote:

> Hi,
>
> Please find my comments inline.
>
> > Hi all
> >
> > With Gluster 3.7.6, a 'Quota' problem occurs in the following test case.
> >
> > (it does not occur if volume quota is not enabled)
> >
> > Volume usage mismatch occurs when using glusterfs-3.7.6 on ZFS.
> >
> > Can you help with the following problems?
> >
> >
> > 1. zfs disk pool information
> >
> > root at server-1:~# zpool status
> > pool: pool
> > state: ONLINE
> > scan: none requested
> > config:
> >
> > NAME STATE READ WRITE CKSUM
> > pool ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:1:0 ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:2:0 ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:3:0 ONLINE 0 0 0
> >
> > errors: No known data errors
> >
> > root at server-2:~# zpool status
> > pool: pool
> > state: ONLINE
> > scan: none requested
> > config:
> >
> > NAME STATE READ WRITE CKSUM
> > pool ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:1:0 ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:2:0 ONLINE 0 0 0
> > pci-0000:00:10.0-scsi-0:0:3:0 ONLINE 0 0 0
> >
> > errors: No known data errors
> >
> > 2. zfs volume list information
> >
> > root at server-1:~# zfs list
> > NAME USED AVAIL REFER MOUNTPOINT
> > pool 179K 11.3T 19K /pool
> > pool/tvol1 110K 11.3T 110K /pool/tvol1
> >
> > root at server-2:~# zfs list
> > NAME USED AVAIL REFER MOUNTPOINT
> > pool 179K 11.3T 19K /pool
> > pool/tvol1 110K 11.3T 110K /pool/tvol1
> >
> > 3. gluster volume information
> >
> > root at server-1:~# gluster volume info
> > Volume Name: tvol1
> > Type: Distribute
> > Volume ID: 02d4c9de-e05f-4177-9e86-3b9b2195d7ab
> > Status: Started
> > Number of Bricks: 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 38.38.38.101:/pool/tvol1
> > Brick2: 38.38.38.102:/pool/tvol1
> > Options Reconfigured:
> > features.quota-deem-statfs: on
> > features.inode-quota: on
> > features.quota: on
> > performance.readdir-ahead: on
>
> In the 'gluster volume info' output you can see the option quota-deem-statfs,
> which is turned 'on' by default. When it is on, df reports usage with the
> quota limits taken into consideration rather than the actual disk space.
> To put it simply: even if the disk space on /pool is 20GB, if you have set a
> quota limit of 10GB on /pool, then a quota list or a df on /pool will show
> the available space as 10GB, because quota-deem-statfs is turned on. If you
> want df to show the actual disk space without taking quota limits into
> consideration, turn deem-statfs off using
> 'gluster volume set VOLNAME quota-deem-statfs off'
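>
> For example (just a rough sketch against your volume tvol1; the numbers are
> only illustrative), you can toggle the option and compare df on the client:
>
> root at server-1:~# gluster volume set tvol1 quota-deem-statfs off
> root at client:~# df -k /mnt
>
> With deem-statfs off, df on the client should report the size of the backing
> bricks' filesystems (your ~11.3T pools) rather than the 100GB quota limit.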
>
> > 4. gluster volume quota list
> >
> > root at server-1:~# gluster volume quota tvol1 list
> > Path  Hard-limit  Soft-limit   Used      Available  Soft-limit exceeded?  Hard-limit exceeded?
> > -----------------------------------------------------------------------------------------------
> > /     100.0GB     80%(80.0GB)  0Bytes    100.0GB    No                    No
> >
> > 5. brick disk usage
> >
> > root at server-1:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > pool 12092178176 0 12092178176 0% /pool
> > pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
> > localhost:tvol1 104857600 0 104857600 0% /run/gluster/tvol1
>
> pool/tvol1 is your brick and it shows 128K as used space. That is because
> the backend contains housekeeping directories such as .glusterfs, which
> account for those 128K. No files have been created through the mountpoint
> localhost:tvol1 yet, hence it shows 0. Also, ZFS and GlusterFS quota may
> account for space slightly differently.
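>
> If you want to see where those 128K come from, a quick check directly on the
> brick (a rough sketch) is:
>
> root at server-1:~# du -sk /pool/tvol1/.glusterfs
>
> which should account for most of that backend-only usage.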
>
> >
> > root at server-2:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > pool 12092178176 0 12092178176 0% /pool
> > pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
> >
> > 6. client mount / disk usage
> >
> > root at client:~# mount -t glusterfs 38.38.38.101:/tvol1 /mnt
> > root at client:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > 38.38.38.101:/tvol1 104857600 0 104857600 0% /mnt
>
> As you can see, no files have been created through the mountpoint yet, and
> quota accounts only for files created through the mountpoint, hence the
> used space is 0.
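>
> (A quick way to confirm this is to run, right after mounting and before
> writing anything:
>
> root at server-1:~# gluster volume quota tvol1 list /
>
> which should still show 0Bytes used, even though the brick itself already
> shows 128K of housekeeping data.)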
>
> > 7. Write using the dd command file
> >
> > root at client:~# dd if=/dev/urandom of=/mnt/10m bs=1048576 count=10
> > 10+0 records in
> > 10+0 records out
> > 10485760 bytes (10 MB) copied, 0.751261 s, 14.0 MB/s
> >
> > 8. client disk usage
> >
> > root at client:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > 38.38.38.101:/tvol1 104857600 0 104857600 0% /mnt
>
> You have written a file of size 10MB. The accounting done by marker happens
> asynchronously, so it does not take effect immediately; you can expect some
> delay before the usage is updated.
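>
> If you want to watch the accounting catch up, something like the following
> works (just a sketch; the 2-second interval is arbitrary):
>
> root at server-1:~# watch -n 2 'gluster volume quota tvol1 list /'
>
> The Used column should move from 0Bytes to 10.0MB once marker has finished
> updating the contri xattrs up the directory tree.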
>
> > 9. brick disk usage
> >
> > root at server-1:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > pool 12092167936 0 12092167936 0% /pool
> > pool/tvol1 12092178304 10368 12092167936 1% /pool/tvol1
> > localhost:tvol1 104857600 0 104857600 0% /run/gluster/tvol1
> >
> > root at server-2:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > pool 12092178176 0 12092178176 0% /pool
> > pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
> >
> > 10. zfs volume list information
> >
> > root at server-1:~# zfs list
> > NAME USED AVAIL REFER MOUNTPOINT
> > pool 10.2M 11.3T 19K /pool
> > pool/tvol1 10.1M 11.3T 10.1M /pool/tvol1
> >
> > root at server-2:~# zfs list
> > NAME USED AVAIL REFER MOUNTPOINT
> > pool 188K 11.3T 19K /pool
> > pool/tvol1 110K 11.3T 110K /pool/tvol1
> >
> > 11. gluster volume quota list
> >
> > root at server-1:~# gluster volume quota tvol1 list
> > Path  Hard-limit  Soft-limit   Used      Available  Soft-limit exceeded?  Hard-limit exceeded?
> > -----------------------------------------------------------------------------------------------
> > /     100.0GB     80%(80.0GB)  512Bytes  100.0GB    No                    No
> >
> > 12. Views from the client file
> >
> > root at client:~# ls -al /mnt
> > total 10246
> > drwxr-xr-x 4 root root 9 1월 30 02:23 .
> > drwxr-xr-x 22 root root 4096 1월 28 07:48 ..
> > -rw-r--r-- 1 root root 10485760 1월 30 02:23 10m
> > drwxr-xr-x 3 root root 6 1월 30 02:14 .trashcan
> >
> > root at client:~# df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > 38.38.38.101:/tvol1 104857600 10240 104847360 1% /mnt
> > root at server-1:~# gluster volume quota tvol1 list
> > Path  Hard-limit  Soft-limit   Used      Available  Soft-limit exceeded?  Hard-limit exceeded?
> > -----------------------------------------------------------------------------------------------
> > /     100.0GB     80%(80.0GB)  10.0MB    100.0GB    No                    No
>
> Now, as you can see, the accounting has been done and the used space shows
> as 10.0MB, as expected.
>
> >
> > root at server-1:~# gluster volume quota tvol1 list
> > Path  Hard-limit  Soft-limit   Used      Available  Soft-limit exceeded?  Hard-limit exceeded?
> > -----------------------------------------------------------------------------------------------
> > /     100.0GB     80%(80.0GB)  10.0MB    100.0GB    No                    No
> >
>
> Please reply back if you have any more queries or if anything is not clear.
>
> >
> > --
> >
> > Sungsik, Park/corazy [박성식, 朴成植]
> >
> > Software Development Engineer
> >
> > Email: mulgo79 at gmail.com
> >
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
>
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>

