[Gluster-users] Question about glusterfs quotas on debian wheezy?

Hristo Hristov id at yaht.net
Thu Apr 19 14:21:33 UTC 2012


Hello list,

I'm experimenting with a little GlusterFS cluster on Debian Wheezy:

=== snip ===
muzzy:~# cat /etc/debian_version
wheezy/sid

muzzy:~# dpkg -l | grep gluster
ii glusterfs-client 3.2.6-1 clustered file-system (client package)
ii glusterfs-common 3.2.6-1 GlusterFS common libraries and translator modules
ii glusterfs-server 3.2.6-1 clustered file-system (server package)
=== snip ===

My volume has 3 bricks and is a distributed one:

=== snip ===
muzzy:~# gluster volume info

Volume Name: m3d
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: g0:/g0
Brick2: g1:/g1
Brick3: g2:/g2
Options Reconfigured:
features.quota-timeout: 0
features.limit-usage: /1Gg:1GB,/10Gg:10GB,/5Gg:5GB,/512Mb:512MB,/dav/256MB:256MB,/dav/1GB:1GB,/dav/5GB:5GB,/dav/10GB:10GB,/cifs/256Mb:256MB,/dav/128mb:128MB,/dav/192mb:192MB,/cifs/192mb:192MB,/cifs/64mb:64MB
features.quota: on
auth.allow: 194.12.*,130.204.198.50
=== snip ===
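
For completeness, I set these limits with the quota CLI, roughly along these lines (only a couple of the directories are shown here; the rest were done the same way):

=== snip ===
# enable the quota feature and set per-directory limits
# (paths are relative to the volume root)
muzzy:~# gluster volume quota m3d enable
muzzy:~# gluster volume quota m3d limit-usage /dav/256MB 256MB
muzzy:~# gluster volume quota m3d limit-usage /cifs/64mb 64MB
# quota-timeout 0 so the client does not cache quota sizes
muzzy:~# gluster volume set m3d features.quota-timeout 0
# verify the configured limits
muzzy:~# gluster volume quota m3d list
=== snip ===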

One of the nodes (g0) also mounts the m3d volume (mount -t glusterfs) and 
"exports" it via Samba and WebDAV. I'm using a separate directory under 
the volume root for each of them, i.e. dav and cifs. /This is not the 
best way to export the volume, but I do not have more nodes at the 
moment :(/
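
For reference, the exports on g0 are nothing special; a minimal sketch of what they look like (share and alias names here are only illustrative, not my exact configs):

=== snip ===
# /etc/samba/smb.conf on g0 -- export the cifs subdirectory
[m3d]
    path = /mnt/m3d/cifs
    read only = no

# Apache snippet (mod_dav + mod_dav_fs) -- export the dav subdirectory
# (authentication and access-control directives omitted for brevity)
Alias /dav /mnt/m3d/dav
<Directory /mnt/m3d/dav>
    Dav On
</Directory>
=== snip ===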

I'm mounting the volume with the following parameters:
=== snip ===
mount -t glusterfs g0:/m3d /mnt/m3d -o acl,log-level=WARNING,log-file=/var/log/gluster.log
=== snip ===
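
The equivalent /etc/fstab entry would be something like this (the _netdev option is just an extra I would add so the mount waits for the network at boot):

=== snip ===
# /etc/fstab
g0:/m3d  /mnt/m3d  glusterfs  acl,log-level=WARNING,log-file=/var/log/gluster.log,_netdev  0  0
=== snip ===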

According to the GlusterFS documentation 
(Gluster_File_System-3.2.5-Administration_Guide-en-US.pdf, p. 67), we can 
set a directory limit even on a non-existing directory:
=== quote ===
Note
You can set the disk limit on the directory even if it is not created.
The disk limit is enforced immediately after creating that directory.
For more information on setting disk limit, see Section 10.3, “Setting or Replacing Disk Limit”.
=== quote ===
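
So, as I read it, something like the following should just work (the directory name is only an example):

=== snip ===
# set the limit first, while /dav/newdir does not exist yet ...
muzzy:~# gluster volume quota m3d limit-usage /dav/newdir 256MB
# ... then create it through the mounted volume; according to the guide
# the limit should take effect immediately
muzzy:~# mkdir /mnt/m3d/dav/newdir
=== snip ===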

Here comes my problem. When I set a directory quota on a non-existing (or 
even an existing) directory, the quota limit is not enforced. I have to 
unmount the volume (i.e. stop the Apache and Samba daemons beforehand) and 
mount it again (then start the Apache and Samba daemons back up). 
Only then is the quota limit enforced correctly on existing directories. 
If I create a new directory and set a quota limit on it, I have to unmount 
and mount again in order to have a working quota for that directory.
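
In condensed form, this is what happens (the directory name is again just an example):

=== snip ===
muzzy:~# gluster volume quota m3d limit-usage /cifs/test 64MB
muzzy:~# mkdir -p /mnt/m3d/cifs/test
# writing well past the 64MB limit still succeeds -- the quota is ignored
muzzy:~# dd if=/dev/zero of=/mnt/m3d/cifs/test/big bs=1M count=200
# stop apache/samba, then remount the volume
muzzy:~# umount /mnt/m3d
muzzy:~# mount -t glusterfs g0:/m3d /mnt/m3d -o acl,log-level=WARNING,log-file=/var/log/gluster.log
# only after the remount are further writes rejected once the limit is reached
=== snip ===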

Do you have any idea what may cause this problem? Would it be 
appropriate to blame Debian Wheezy :) /because I'm using its testing 
branch/?

Here is more info about my testing glusterfs installation:
=== snip ===
muzzy:~# cat /etc/debian_version
wheezy/sid

muzzy:~# uname -a
Linux muzzy 3.2.0-1-amd64 #1 SMP Fri Feb 17 05:17:36 UTC 2012 x86_64 
GNU/Linux

muzzy:~# cat /var/log/gluster.log
[2012-04-19 16:36:17.663509] W [write-behind.c:3023:init] 
0-m3d-write-behind: disabling write-behind for first 0 bytes
Given volfile:
+------------------------------------------------------------------------------+
1: volume m3d-client-0
2: type protocol/client
3: option remote-host g0
4: option remote-subvolume /g0
5: option transport-type tcp
6: end-volume
7:
8: volume m3d-client-1
9: type protocol/client
10: option remote-host g1
11: option remote-subvolume /g1
12: option transport-type tcp
13: end-volume
14:
15: volume m3d-client-2
16: type protocol/client
17: option remote-host g2
18: option remote-subvolume /g2
19: option transport-type tcp
20: end-volume
21:
22: volume m3d-dht
23: type cluster/distribute
24: subvolumes m3d-client-0 m3d-client-1 m3d-client-2
25: end-volume
26:
27: volume m3d-quota
28: type features/quota
29: option limit-set /1Gg:1GB,/10Gg:10GB,/5Gg:5GB,/512Mb:512MB,/dav/256MB:256MB,/dav/1GB:1GB,/dav/5GB:5GB,/dav/10GB:10GB,/cifs/256Mb:256MB,/dav/128mb:128MB,/dav/192mb:192MB,/cifs/192mb:192MB
30: option timeout 0
31: subvolumes m3d-dht
32: end-volume
33:
34: volume m3d-write-behind
35: type performance/write-behind
36: subvolumes m3d-quota
37: end-volume
38:
39: volume m3d-read-ahead
40: type performance/read-ahead
41: subvolumes m3d-write-behind
42: end-volume
43:
44: volume m3d-io-cache
45: type performance/io-cache
46: subvolumes m3d-read-ahead
47: end-volume
48:
49: volume m3d-quick-read
50: type performance/quick-read
51: subvolumes m3d-io-cache
52: end-volume
53:
54: volume m3d-stat-prefetch
55: type performance/stat-prefetch
56: subvolumes m3d-quick-read
57: end-volume
58:
59: volume m3d
60: type debug/io-stats
61: option latency-measurement off
62: option count-fop-hits off
63: subvolumes m3d-stat-prefetch
64: end-volume

+------------------------------------------------------------------------------+
[2012-04-19 16:37:12.729422] W [dict.c:1153:data_to_str] 
(-->/usr/lib/libglusterfs.so.0(+0x18ddc) [0x7ff9c3025ddc] 
(-->/usr/lib/libglusterfs.so.0(+0x18e6a) [0x7ff9c3025e6a] 
(-->/usr/lib/glusterfs/3.2.6/xlator/performance/io-cache.so(reconfigure+0x443) 
[0x7ff9bf0f7913]))) 0-dict: data is NULL
[2012-04-19 16:37:12.729481] W [dict.c:1153:data_to_str] 
(-->/usr/lib/libglusterfs.so.0(+0x18ddc) [0x7ff9c3025ddc] 
(-->/usr/lib/libglusterfs.so.0(+0x18e6a) [0x7ff9c3025e6a] 
(-->/usr/lib/glusterfs/3.2.6/xlator/performance/io-cache.so(reconfigure+0x59d) 
[0x7ff9bf0f7a6d]))) 0-dict: data is NULL
[2012-04-19 16:37:12.729531] W [dict.c:1153:data_to_str] 
(-->/usr/lib/libglusterfs.so.0(+0x18ddc) [0x7ff9c3025ddc] 
(-->/usr/lib/libglusterfs.so.0(+0x18e6a) [0x7ff9c3025e6a] 
(-->/usr/lib/glusterfs/3.2.6/xlator/performance/io-cache.so(reconfigure+0x72f) 
[0x7ff9bf0f7bff]))) 0-dict: data is NULL
[2012-04-19 16:37:12.729580] W [dict.c:1153:data_to_str] 
(-->/usr/lib/libglusterfs.so.0(+0x18ddc) [0x7ff9c3025ddc] 
(-->/usr/lib/libglusterfs.so.0(+0x18e6a) [0x7ff9c3025e6a] 
(-->/usr/lib/glusterfs/3.2.6/xlator/performance/io-cache.so(reconfigure+0x74b) 
[0x7ff9bf0f7c1b]))) 0-dict: data is NULL
=== snip ===

Thank you in advance,
Regards
Hristo Hristov



