[Gluster-users] Quota problems with Gluster3.3b2

Daniel Pereira d.pereira at skillupjapan.co.jp
Tue Jan 24 01:20:57 UTC 2012


  Hi Saurabh,

  It seems I've fixed the problem.
  I realized I was already running the latest UFO version from the git 
repository; rolling back to the "stable" 3.3b2 version resolved it. So 
in the end the problem was being caused by the git version of Gaurav's 
UFO repository.

  All's fine now, thanks a lot for the direction and help during the 
process.

  Daniel

On 1/23/12 7:12 PM, Daniel Pereira wrote:
>  Hello Saurabh,
>
>  Thanks for your reply.
>  I can successfully create directories and files in the mount point, 
> manually, with both quota enabled and disabled. The output from the 
> gluster mount log files is stuff like:
>
> [2012-01-23 18:58:14.170140] I 
> [afr-common.c:1225:afr_launch_self_heal] 11-r2-replicate-4: 
> background  entry self-heal triggered. path: /test
> [2012-01-23 18:58:14.170597] I 
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk] 
> 11-r2-replicate-4: background  entry self-heal completed on /test
> [2012-01-23 18:58:15.760197] I 
> [afr-common.c:1225:afr_launch_self_heal] 11-r2-replicate-4: 
> background  entry self-heal triggered. path: /test
> [2012-01-23 18:58:15.760651] I 
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk] 
> 11-r2-replicate-4: background  entry self-heal completed on /test
> [2012-01-23 18:58:15.761178] I 
> [afr-common.c:1225:afr_launch_self_heal] 11-r2-replicate-4: 
> background  entry self-heal triggered. path: /tmp
> [2012-01-23 18:58:15.761820] I 
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk] 
> 11-r2-replicate-4: background  entry self-heal completed on /tmp
> [2012-01-23 18:58:16.720382] I 
> [afr-common.c:1225:afr_launch_self_heal] 11-r2-replicate-4: 
> background  entry self-heal triggered. path: /tmp
>
>  Meanwhile, nothing related to gluster or UFO was written to any file 
> under /var/log.
>  I will try installing the latest stable GlusterFS with UFO. It 
> baffles me what could be causing the problem ... I'll let you know how 
> it goes.
>
>  Thanks for your help,
>  Daniel
>
> On 1/23/12 6:37 PM, Saurabh Jain wrote:
>> Hello Daniel,
>>
>>
>>   I tried again with 3.3beta2 and gluster-object (UFO) to upload a 
>> file while quota was enabled, and I was able to do it successfully.
>>
>>
>> Requesting you again to do the following:
>> 1. On the mount point, try to create some files/directories manually, 
>> both with quota enabled and with it disabled.
>> 2. Share the information from /var/log/messages and the glusterfs 
>> logs from the glusterfs mount.
>> 3. It would be advisable to try the latest glusterfs code and run 
>> UFO on top of that.
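The first step above can be sketched as a small shell loop. As written, MNT defaults to a scratch directory so the snippet is safe to dry-run; for the real test, point it at the glusterfs FUSE mount (the /mnt/r2 path is only an example, not from this thread):

```shell
# Manual create test: make a few directories and files on the mount point.
# Run once with quota enabled and once with it disabled, then compare.
# MNT is a placeholder; set it to your actual glusterfs mount, e.g.:
#   MNT=/mnt/r2 sh create-test.sh
MNT=${MNT:-$(mktemp -d)}
for i in 1 2 3; do
    mkdir -p "$MNT/qtest/dir$i"
    echo "hello $i" > "$MNT/qtest/dir$i/file$i"
done
# List what was created so failures are easy to spot
ls -lR "$MNT/qtest"
```

If any mkdir or write fails with EACCES only while quota is enabled, that narrows the problem to the quota translator rather than to UFO/swift.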
>>
>> Thanks,
>> Saurabh
>> ________________________________________
>> From: Daniel Pereira [d.pereira at skillupjapan.co.jp]
>> Sent: Friday, January 20, 2012 12:02 PM
>> To: Saurabh Jain
>> Cc: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] Quota problems with Gluster3.3b2
>>
>>    Hello Saurabh,
>>
>>    Sorry for the long delay getting back to you, and thank you for
>> replying to me!
>>
>>    To reproduce this, I'm running a single st command like the one
>> below; I'm not running anything in parallel:
>> st -A http://IP:80/auth/v1.0 -U r2:user -K pass upload test manual.txt
>>
>>    If I do
>> /usr/local/sbin/gluster volume quota r2 disable
>>    the command succeeds. But if I do:
>> /usr/local/sbin/gluster volume quota r2 enable
>>    the command hangs with the permission error that I described earlier.
>>
>>    My volume info:
>> # gluster volume info r2
>>
>> Volume Name: r2
>> Type: Distributed-Replicate
>> Status: Started
>> Number of Bricks: 6 x 2 = 12
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.4.103:/gluster/disk1
>> Brick2: 192.168.4.103:/gluster/disk2
>> Brick3: 192.168.4.103:/gluster/disk3
>> Brick4: 192.168.4.103:/gluster/disk4
>> Brick5: 192.168.4.103:/gluster/disk5
>> Brick6: 192.168.4.103:/gluster/disk6
>> Brick7: 192.168.4.103:/gluster/disk7
>> Brick8: 192.168.4.103:/gluster/disk8
>> Brick9: 192.168.4.103:/gluster/disk9
>> Brick10: 192.168.4.103:/gluster/disk10
>> Brick11: 192.168.4.103:/gluster/disk11
>> Brick12: 192.168.4.103:/gluster/disk12
>> Options Reconfigured:
>> performance.cache-size: 6GB
>> cluster.stripe-block-size: 1MB
>> features.quota: on
>>
>>    Thanks in advance,
>> Daniel
>>
>> On 1/16/12 9:29 PM, Saurabh Jain wrote:
>>> Hello Daniel,
>>>
>>>      I am trying to reproduce the problem; meanwhile, please update 
>>> me with the "volume info" output and the sequence of steps you are 
>>> trying, as it didn't fail for me when quota is enabled. Also, please 
>>> mention whether you are running the operations in parallel.
>>>
>>>
>>> Thanks,
>>> Saurabh
>>>
>>>     Hi everyone,
>>>
>>>     I'm playing with Gluster3.3b2, and everything is working fine when
>>> uploading stuff through swift. However, when I enable quotas on 
>>> Gluster,
>>> I randomly get permission errors. Sometimes I can upload files, most
>>> times I can't.
>>>
>>>     I'm mounting the partitions with the acl flag. I've tried wiping 
>>> everything out and starting from scratch, with the same result. As 
>>> soon as I disable quotas everything works great. I don't even need 
>>> to add any limit-usage for the errors to crop up.
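For reference, ACL support has to be enabled on both the brick filesystems and the glusterfs client mount. A sketch of both mounts, assuming ext4 bricks (the device, server address, and client mount point here are placeholders, not taken from this thread):

```shell
# Brick side: mount the backing filesystem with POSIX ACLs enabled
mount -o acl /dev/sdb1 /gluster/disk1

# Client side: glusterfs FUSE mount with acl support
mount -t glusterfs -o acl 192.168.4.103:/r2 /mnt/r2
```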
>>>
>>>     Any idea?
>>>
>>> Daniel
>>>
>>>
>>>
>>>     Relevant info:
>>>
>>> =========================
>>>     To enable quotas I use the following commands:
>>>
>>> # /usr/local/sbin/gluster volume quota r2 enable
>>> Enabling quota has been successful
>>>
>>> # /usr/local/sbin/gluster volume quota r2 list
>>> Limit not set on any directory
>>>
>>> # /usr/local/sbin/gluster volume quota r2 limit-usage /test 10GB
>>> limit set on /test
>>>
>>> # /usr/local/sbin/gluster volume quota r2 list
>>>        path          limit_set          size
>>> ------------------------------------------------
>>> /test                     10GB         88.0KB
>>>
>>> # /usr/local/sbin/gluster volume quota r2 disable
>>> Disabling quota will delete all the quota configuration. Do you want to
>>> continue? (y/n) y
>>> Disabling quota has been successful
>>>
>>> =========================
>>>     Directory listing:
>>> ls -la *
>>> test:
>>> total 184
>>> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
>>> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
>>> -rw------- 1 user user 82735 Jan 13 12:07 manual.txt
>>>
>>> tmp:
>>> total 96
>>> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
>>> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
>>>
>>> ==========================
>>> Gluster logs:
>>> Unsuccessful write:
>>>
>>> [2012-01-13 12:06:27.97140] I [afr-common.c:1225:afr_launch_self_heal]
>>> 0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
>>> [2012-01-13 12:06:27.97704] I
>>> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
>>> 0-r2-replicate-4: background  entry self-heal completed on /tmp
>>> [2012-01-13 12:06:27.102813] I [afr-common.c:1225:afr_launch_self_heal]
>>> 0-r2-replicate-4: background  entry self-heal triggered. path: /test
>>> [2012-01-13 12:06:27.103199] I
>>> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
>>> 0-r2-replicate-4: background  entry self-heal completed on /test
>>> [2012-01-13 12:06:27.106876] E
>>> [stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
>>> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/mount/fuse.so(fuse_setxattr_resume+0x148) [0x2acd7b862118]
>>> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/debug/io-stats.so(io_stats_setxattr+0x15f) [0x2aaaae8cf71f]
>>> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/performance/stat-prefetch.so(sp_setxattr+0x6c) [0x2aaaae6bc3fc]))) 0-r2-stat-prefetch: invalid argument: inode
>>> [2012-01-13 12:06:27.164168] I
>>> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
>>> operation failed: Permission denied
>>> [2012-01-13 12:06:27.164211] I
>>> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
>>> operation failed: Permission denied
>>> [2012-01-13 12:06:27.164227] W [dht-rename.c:480:dht_rename_cbk]
>>> 0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
>>> denied)
>>> [2012-01-13 12:06:27.164855] W [fuse-bridge.c:1351:fuse_rename_cbk]
>>> 0-glusterfs-fuse: 706: /tmp/tmpyhBbAD ->   /test/manual.txt =>   -1
>>> (Permission denied)
>>> [2012-01-13 12:06:27.166115] I
>>> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
>>> operation failed: Permission denied
>>> [2012-01-13 12:06:27.166142] I
>>> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
>>> operation failed: Permission denied
>>> [2012-01-13 12:06:27.166156] W [dht-rename.c:480:dht_rename_cbk]
>>> 0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
>>> denied)
>>> [2012-01-13 12:06:27.166763] W [fuse-bridge.c:1351:fuse_rename_cbk]
>>> 0-glusterfs-fuse: 707: /tmp/tmpyhBbAD ->   /test/manual.txt =>   -1
>>> (Permission denied)
>>>
>>> Successful write:
>>> [2012-01-13 12:07:02.49562] I [afr-common.c:1225:afr_launch_self_heal]
>>> 0-r2-replicate-4: background  entry self-heal triggered. path: /test
>>> [2012-01-13 12:07:02.50013] I
>>> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
>>> 0-r2-replicate-4: background  entry self-heal completed on /test
>>> [2012-01-13 12:07:02.52255] I [afr-common.c:1225:afr_launch_self_heal]
>>> 0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
>>> [2012-01-13 12:07:02.52832] I
>>> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
>>> 0-r2-replicate-4: background  entry self-heal completed on /tmp
>>>
>>
>>
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>



