[Gluster-users] Errors in quota-crawl.log

Ryan Clough ryan.clough at dsic.com
Mon Jun 8 17:51:25 UTC 2015


I have submitted a BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1229422

___________________________________________
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
<http://www.decisionsciencescorp.com/>

On Wed, Apr 8, 2015 at 1:49 AM, Sachin Pandit <spandit at redhat.com> wrote:

>
> Please find the comments inline.
>
> ----- Original Message -----
> > From: "Ryan Clough" <ryan.clough at dsic.com>
> > To: "gluster-users" <gluster-users at gluster.org>
> > Sent: Wednesday, April 8, 2015 9:59:55 AM
> > Subject: Re: [Gluster-users] Errors in quota-crawl.log
> >
> > No takers? Quota seems to be working, but when I see permission-denied
> > warnings it makes me wonder whether the quota calculations will be
> > accurate. Any help would be much appreciated.
> >
> > Ryan Clough
> > Information Systems
> > Decision Sciences International Corporation
> >
> > On Thu, Apr 2, 2015 at 12:43 PM, Ryan Clough <ryan.clough at dsic.com> wrote:
> >
> >
> >
> > We are running the following operating system:
> > Scientific Linux release 6.6 (Carbon)
> >
> > With the following kernel:
> > 2.6.32-504.3.3.el6.x86_64
> >
> > We are using the following version of Glusterfs:
> > glusterfs-libs-3.6.2-1.el6.x86_64
> > glusterfs-3.6.2-1.el6.x86_64
> > glusterfs-cli-3.6.2-1.el6.x86_64
> > glusterfs-api-3.6.2-1.el6.x86_64
> > glusterfs-fuse-3.6.2-1.el6.x86_64
> > glusterfs-server-3.6.2-1.el6.x86_64
> >
> > Here is the current configuration of our two-node, distribute-only cluster:
> > Volume Name: export_volume
> > Type: Distribute
> > Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
> > Status: Started
> > Number of Bricks: 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: hgluster01:/gluster_data
> > Brick2: hgluster02:/gluster_data
> > Options Reconfigured:
> > performance.cache-size: 1GB
> > diagnostics.brick-log-level: ERROR
> > performance.stat-prefetch: on
> > performance.write-behind: on
> > performance.flush-behind: on
> > features.quota-deem-statfs: on
> > performance.quick-read: off
> > performance.client-io-threads: on
> > performance.read-ahead: on
> > performance.io-thread-count: 24
> > features.quota: on
> > cluster.eager-lock: on
> > nfs.disable: on
> > auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
> > server.allow-insecure: on
> > performance.write-behind-window-size: 1MB
> > network.ping-timeout: 60
> > features.quota-timeout: 0
> > performance.io-cache: off
> > server.root-squash: on
> > performance.readdir-ahead: on
> >
> > Here is the status of the nodes:
> > Status of volume: export_volume
> > Gluster process Port Online Pid
> >
> > ------------------------------------------------------------------------------
> > Brick hgluster01:/gluster_data 49152 Y 7370
> > Brick hgluster02:/gluster_data 49152 Y 17868
> > Quota Daemon on localhost N/A Y 2051
> > Quota Daemon on hgluster02.red.dsic.com N/A Y 6691
> >
> > Task Status of Volume export_volume
> >
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
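As a sanity check on the crawl itself, the limits and the usage the quota
daemon has accounted so far can be listed from the CLI. A minimal sketch,
assuming the volume name above (these need a live trusted pool, so they are
shown without output):

```shell
# List configured quota limits and the usage accounted so far
# (run on any node of the trusted pool).
gluster volume quota export_volume list

# Confirm the bricks and quota daemons are still online after the crawl.
gluster volume status export_volume
```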
> >
> > I have just turned quota on and, while watching quota-crawl.log, I see a
> > bunch of messages like this:
> >
> > [2015-04-02 19:23:01.540692] W [fuse-bridge.c:483:fuse_entry_cbk]
> > 0-glusterfs-fuse: 2338683: LOOKUP() /\ => -1 (Permission denied)
> >
> > [2015-04-02 19:23:01.543565] W [client-rpc-fops.c:2766:client3_3_lookup_cbk]
> > 0-export_volume-client-1: remote operation failed: Permission denied.
> > Path: /\ (00000000-0000-0000-0000-000000000000)
> >
> > [2015-04-02 17:58:14.090556] W [client-rpc-fops.c:2766:client3_3_lookup_cbk]
> > 0-export_volume-client-0: remote operation failed: Permission denied.
> > Path: /\ (00000000-0000-0000-0000-000000000000)
> >
> > Should I be worried about this, and how do I go about fixing the
> > permissions? Is this a bug that should be reported?
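One way to gauge how widespread the denials are is to count them and pull out
the offending paths. A minimal sketch against sample lines modelled on the
excerpts above (the real log location, e.g. /var/log/glusterfs/quota-crawl.log,
is an assumption and may differ on your install):

```shell
# Hypothetical sample modelled on the quota-crawl.log excerpts above.
cat > sample-crawl.log <<'EOF'
[2015-04-02 19:23:01.540692] W [fuse-bridge.c:483:fuse_entry_cbk] 0-glusterfs-fuse: 2338683: LOOKUP() /\ => -1 (Permission denied)
[2015-04-02 19:23:01.543565] W [client-rpc-fops.c:2766:client3_3_lookup_cbk] 0-export_volume-client-1: remote operation failed: Permission denied. Path: /\ (00000000-0000-0000-0000-000000000000)
EOF

# How many denials in total?
grep -c 'Permission denied' sample-crawl.log

# Which paths are being denied, and how often?
grep -o 'Path: [^ ]*' sample-crawl.log | sort | uniq -c
```

If the same path shows up for every denial, the problem is localized rather
than a general accounting failure.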
>
> Hi Ryan,
>
> Apologies for the late reply. From the description, I don't think this
> will cause any real harm, but it is better to track it with a bug. If you
> have already raised one, please share the bug ID; otherwise we will raise
> a new one.
>
> One question: looking at the failed path /\ , do you actually have a
> directory with that name? Access to it is what is failing.
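Checking for such an entry means looking on each brick for a name that is
literally a backslash. A minimal sketch, simulated here with a scratch
directory standing in for the brick path /gluster_data from the volume info
above:

```shell
# Simulate a brick top level containing a directory literally named '\'.
mkdir -p brick_demo/'\'

# In find's -name pattern, backslash escapes, so '\\' matches an entry
# literally named '\'. Run this against the real brick path on each node.
find brick_demo -maxdepth 1 -name '\\'
```

Against the real bricks this would be `find /gluster_data -maxdepth 1 -name '\\'`,
run on both hgluster01 and hgluster02.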
>
> Thanks,
> Sachin.
>
>
>
> >
> > Thanks, in advance, for your time to help me.
> > Ryan Clough
> > Information Systems
> > Decision Sciences International Corporation
> >
> >
> > This email and its contents are confidential. If you are not the intended
> > recipient, please do not disclose or use the information within this email
> > or its attachments. If you have received this email in error, please report
> > the error to the sender by return email and delete this communication from
> > your records.
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>


