[Gluster-users] Quota trouble
Atin Mukherjee
amukherj at redhat.com
Tue Apr 21 09:27:33 UTC 2015
On 04/21/2015 02:47 PM, Avra Sengupta wrote:
> In the logs I see glusterd_lock() being used. This API is called only
> in older versions of gluster, or if the cluster's operating version is
> less than 30600. So along with the version of glusterfs used, could you
> also let us know the cluster version? You can check it as
> "operating-version" in the /var/lib/glusterd/glusterd.info file.
Additionally, please check whether concurrent volume operations were
triggered by looking at .cmd_log_history across all the nodes; if so,
that could result in stale locks.
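For example (the exact file name varies by release; on most 3.x installs
it is .cmd_log_history or cmd_history.log under /var/log/glusterfs/):

# grep 'volume ' /var/log/glusterfs/.cmd_log_history

Running this on every node around the failure timestamps should show
whether two volume operations overlapped.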
~Atin
>
> Regards,
> Avra
>
> On 04/21/2015 02:34 PM, Avra Sengupta wrote:
>> Hi Kondo,
>>
>> Can you also mention the version of gluster you are using?
>>
>> +Adding gluster-users
>>
>> Regards,
>> Avra
>> On 04/21/2015 02:27 PM, Avra Sengupta wrote:
>>> Hi Kondo,
>>>
>>> I went through the gluster13 logs you sent. It seems like something
>>> on that machine is holding the lock and not releasing it. There are a
>>> few ways in which the system might end up in this scenario. I will
>>> try to explain it with an example.
>>>
>>> Let's say I have gluster11, gluster12, and gluster13 in my cluster.
>>> I initiate a command from gluster11. The first thing that command
>>> does is take a lock on all the nodes in the cluster on behalf of
>>> gluster11. Once the command has done what was intended, its last act
>>> before ending is to unlock all the nodes in the cluster. Only the
>>> node that issued the lock can issue the unlock.
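>>> Schematically (a simplified view; the real flow is glusterd's op
>>> state machine):
>>>
>>>   gluster11: lock(gluster11), lock(gluster12), lock(gluster13)
>>>   gluster11: stage and commit the operation on all nodes
>>>   gluster11: unlock(gluster11), unlock(gluster12), unlock(gluster13)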
>>>
>>> In your case, some command successfully acquired the lock on
>>> gluster13, but the node that initiated the command (or glusterd on
>>> that node) went down before the command could complete, so it never
>>> got to send the unlock to gluster13.
>>>
>>> There's a workaround for this: restart glusterd on gluster13 and it
>>> should work fine.
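>>> For example, on gluster13 (use whichever service manager your
>>> distribution provides):
>>>
>>>   # service glusterd restart        (SysV init)
>>>   # systemctl restart glusterd      (systemd)
>>>
>>> Restarting glusterd should only clear the stale management lock; the
>>> brick processes serving data are separate daemons.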
>>>
>>> Regards,
>>> Avra
>>>
>>> On 04/20/2015 06:55 PM, kenji kondo wrote:
>>>> Hello Vijay,
>>>> Maybe this is a very rare case, but is there any idea?
>>>>
>>>> Thanks,
>>>> Kondo
>>>>
>>>> 2015-04-15 9:47 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>
>>>> Adding Avra...
>>>>
>>>> Thanks,
>>>> Vijay
>>>>
>>>>
>>>> -------- Forwarded Message --------
>>>> Subject: Re: [Gluster-users] Quota trouble
>>>> Date: Wed, 15 Apr 2015 00:27:26 +0900
>>>> From: kenji kondo <kkay.jp at gmail.com>
>>>> To: Vijaikumar M <vmallika at redhat.com>
>>>>
>>>>
>>>>
>>>> Hi Vijay,
>>>>
>>>> Thanks for your comments.
>>>>
>>>>
>>>> The lock error occurs on one server, called "gluster13".
>>>>
>>>> On gluster13, I tried to create a new volume and enable quota, but
>>>> it failed as below.
>>>>
>>>>
>>>> On both hosts gluster10 and gluster13, I ran:
>>>>
>>>> $ sudo mkdir /export11/testbrick1
>>>>
>>>> $ sudo mkdir /export11/testbrick2
>>>>
>>>> On gluster13, I ran:
>>>>
>>>> $ sudo /usr/sbin/gluster volume create testvol2
>>>> gluster13:/export11/testbrick1 gluster13:/export11/testbrick2
>>>>
>>>> volume create: testvol2: failed: Locking failed on gluster13.
>>>> Please check log file for details.
>>>>
>>>> $ sudo /usr/sbin/gluster volume create testvol2
>>>> gluster10:/export11/testbrick1 gluster10:/export11/testbrick2
>>>>
>>>> volume create: testvol2: failed: Locking failed on gluster13.
>>>> Please check log file for details.
>>>>
>>>> But I received the error messages above.
>>>>
>>>> On the other hand, when run from gluster10 it succeeded.
>>>>
>>>> Again, on gluster13, I tried to enable quota, but it failed as below.
>>>>
>>>> $ sudo /usr/sbin/gluster volume quota testvol2 enable
>>>>
>>>> quota command failed : Locking failed on gluster13. Please check
>>>> log file for details.
>>>>
>>>>
>>>> Please find the log attached.
>>>>
>>>> You can find the error messages in the gluster13 log.
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Kondo
>>>>
>>>>
>>>>
>>>> 2015-04-13 19:38 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>
>>>> Hi Kondo,
>>>>
>>>> The lock error you mentioned occurs because another operation is
>>>> still running on the volume, so the lock cannot be acquired.
>>>> The error message shown is not the proper one; this is a bug we are
>>>> working on fixing.
>>>>
>>>> I was not able to find any clue on why quotad is not running.
>>>>
>>>> I wanted to check if we can manually start quotad, something like
>>>> below:
>>>>
>>>> # /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/quotad \
>>>>     -p /var/lib/glusterd/quotad/run/quotad.pid \
>>>>     -l /var/log/glusterfs/quotad.log \
>>>>     -S /var/run/gluster/myquotad.socket \
>>>>     --xlator-option *replicate*.data-self-heal=off \
>>>>     --xlator-option *replicate*.metadata-self-heal=off \
>>>>     --xlator-option *replicate*.entry-self-heal=off
>>>>
>>>> or
>>>>
>>>> create a new temporary volume and enable quota on it (quotad is
>>>> shared by all volumes that have quota enabled).
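>>>> A minimal sketch of that second option (the hostname and brick path
>>>> below are only placeholders, not taken from your setup):
>>>>
>>>> # gluster volume create tmpquota gluster10:/export11/tmpbrick
>>>> # gluster volume start tmpquota
>>>> # gluster volume quota tmpquota enable
>>>> # ps -ef | grep quotad
>>>>
>>>> If quotad comes up here, the same daemon serves every quota-enabled
>>>> volume.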
>>>>
>>>>
>>>> Thanks,
>>>> Vijay
>>>>
>>>>
>>>> On Sunday 12 April 2015 07:05 PM, kenji kondo wrote:
>>>>> Hi Vijay,
>>>>>
>>>>> Thank you for your suggestion. Unfortunately, it is difficult to
>>>>> provide access from outside because my glusterfs system is on a
>>>>> closed network.
>>>>> I will give up if there is no clue in the attached log.
>>>>>
>>>>> Best regards,
>>>>> Kondo
>>>>>
>>>>>
>>>>> 2015-04-09 15:40 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>
>>>>>
>>>>>
>>>>> On Thursday 09 April 2015 11:58 AM, Vijaikumar M wrote:
>>>>>>
>>>>>>
>>>>>> On Wednesday 08 April 2015 09:57 PM, kenji kondo wrote:
>>>>>>> Hi Vijay,
>>>>>>>
>>>>>>> I checked all of the settings.
>>>>>>> They all show 'features.quota=on' when I enable quota, and they
>>>>>>> all show 'features.quota=off' when I disable quota.
>>>>>>>
>>>>>>> But I found a new issue.
>>>>>>> When I checked the volume status on all servers, one of the
>>>>>>> servers returned the error message below.
>>>>>>>
>>>>>>> $ sudo /usr/sbin/gluster volume status testvol
>>>>>>> Locking failed on gluster13. Please check log file for details.
>>>>>>>
>>>>>>> In etc-glusterfs-glusterd.vol.log on the problem server, I found
>>>>>>> the error messages below.
>>>>>>> [2015-04-08 08:40:04.782644] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=588 max=0 total=0
>>>>>>> [2015-04-08 08:40:04.782685] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=124 max=0 total=0
>>>>>>> [2015-04-08 08:40:04.782848] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/14b05cd492843e6e288e290c2d63093c.socket failed (Invalid arguments)
>>>>>>> [2015-04-08 08:40:04.805407] I [MSGID: 106006] [glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management: nfs has disconnected from glusterd.
>>>>>>> [2015-04-08 08:43:02.439001] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume testvol
>>>>>>> [2015-04-08 08:43:02.460581] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by: 03a32bce-ec63-4dc3-a287-4901a55dd8c9
>>>>>>> [2015-04-08 08:43:02.460632] E [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management: handler returned: -1
>>>>>>> [2015-04-08 08:43:02.460654] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on gluster13. Please check log file for details.
>>>>>>> [2015-04-08 08:43:02.461409] E [glusterd-syncop.c:1602:gd_sync_task_begin] 0-management: Locking Peers Failed.
>>>>>>> [2015-04-08 08:43:43.698168] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume testvol
>>>>>>> [2015-04-08 08:43:43.698813] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by: 03a32bce-ec63-4dc3-a287-4901a55dd8c9
>>>>>>> [2015-04-08 08:43:43.698898] E [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management: handler returned: -1
>>>>>>> [2015-04-08 08:43:43.698994] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on gluster13. Please check log file for details.
>>>>>>> [2015-04-08 08:43:43.702126] E [glusterd-syncop.c:1602:gd_sync_task_begin] 0-management: Locking Peers Failed.
>>>>>>> [2015-04-08 08:44:01.277139] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume testvol
>>>>>>> [2015-04-08 08:44:01.277560] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by: 03a32bce-ec63-4dc3-a287-4901a55dd8c9
>>>>>>> [2015-04-08 08:44:01.277639] E [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management: handler returned: -1
>>>>>>> [2015-04-08 08:44:01.277676] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on gluster13. Please check log file for details.
>>>>>>> [2015-04-08 08:44:01.281514] E [glusterd-syncop.c:1602:gd_sync_task_begin] 0-management: Locking Peers Failed.
>>>>>>> [2015-04-08 08:45:42.599796] I [glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management: Received status volume req for volume testvol
>>>>>>> [2015-04-08 08:45:42.600343] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by: 03a32bce-ec63-4dc3-a287-4901a55dd8c9
>>>>>>> [2015-04-08 08:45:42.600417] E [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management: handler returned: -1
>>>>>>> [2015-04-08 08:45:42.600482] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on gluster13. Please check log file for details.
>>>>>>> [2015-04-08 08:45:42.601039] E [glusterd-syncop.c:1602:gd_sync_task_begin] 0-management: Locking Peers Failed.
>>>>>>>
>>>>>>> Does this situation relate to my quota problems?
>>>>>>>
>>>>>>
>>>>>> This is a different glusterd issue. Can we get the glusterd logs
>>>>>> from gluster13?
>>>>>> Can we get access to these machines, so that we can debug live?
>>>>>>
>>>>>> Thanks,
>>>>>> Vijay
>>>>>>
>>>>> Regarding the quota issue, the quota feature was enabled
>>>>> successfully; I am wondering why quotad was not started.
>>>>> If we get access to the machine, it will be easier to debug the
>>>>> issue.
>>>>>
>>>>> Thanks,
>>>>> Vijay
>>>>>
>>>>>
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Kondo
>>>>>>>
>>>>>>>
>>>>>>> 2015-04-08 15:14 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>
>>>>>>> Hi Kondo,
>>>>>>>
>>>>>>> I suspect that on one of the nodes the quota feature is not set
>>>>>>> for some reason, and hence quotad is not starting.
>>>>>>>
>>>>>>> On all the nodes, can you check whether the option below is set
>>>>>>> to 'on':
>>>>>>>
>>>>>>> # grep quota /var/lib/glusterd/vols/<volname>/info
>>>>>>> features.quota=on
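>>>>>>> If it helps, one quick way to run that check from a single node
>>>>>>> (hypothetical host list, assuming passwordless ssh between peers):
>>>>>>>
>>>>>>> # for h in gluster10 gluster11 gluster12 gluster13; do echo "== $h"; ssh $h "grep quota /var/lib/glusterd/vols/<volname>/info"; done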
>>>>>>>
>>>>>>>
>>>>>>> Also can I get brick logs from all the nodes?
>>>>>>>
>>>>>>> Also, can you create a temporary volume, enable quota on it, and
>>>>>>> see whether quota works fine with that volume?
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Vijay
>>>>>>>
>>>>>>> On Tuesday 07 April 2015 08:34 PM, kenji kondo
>>>>>>> wrote:
>>>>>>>> Hi Vijay,
>>>>>>>>
>>>>>>>> Please find the logs attached; I collected logs from both the
>>>>>>>> server and the client.
>>>>>>>> As before, I could not create a file after setting the quota
>>>>>>>> usage limit.
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Kondo
>>>>>>>>
>>>>>>>>
>>>>>>>> 2015-04-07 18:34 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>
>>>>>>>> Hi Kondo,
>>>>>>>>
>>>>>>>> Can we get all the log files?
>>>>>>>>
>>>>>>>> # gluster volume quota <volname> disable
>>>>>>>> # gluster volume quota <volname> enable
>>>>>>>>
>>>>>>>>
>>>>>>>> Now copy all the log files.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Vijay
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tuesday 07 April 2015 12:39 PM, K.Kondo
>>>>>>>> wrote:
>>>>>>>>> Thank you very much, Vijay!
>>>>>>>>> I want to use quota because each volume has become too big.
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>> Kondo
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2015/04/07 15:18, Vijaikumar M <vmallika at redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Kondo,
>>>>>>>>>>
>>>>>>>>>> I couldn't find a clue in the logs. I will discuss this issue
>>>>>>>>>> with my colleagues today.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Vijay
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Monday 06 April 2015 10:56 PM, kenji
>>>>>>>>>> kondo wrote:
>>>>>>>>>>> Hello Vijay,
>>>>>>>>>>> Is there any idea about this?
>>>>>>>>>>> Best regards,
>>>>>>>>>>> Kondo
>>>>>>>>>>>
>>>>>>>>>>> 2015-03-31 22:46 GMT+09:00 kenji kondo <kkay.jp at gmail.com>:
>>>>>>>>>>>
>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>
>>>>>>>>>>> I'm sorry for the late reply.
>>>>>>>>>>> I was able to get the debug-mode log, attached.
>>>>>>>>>>> In this test, unfortunately quota did not work, the same as
>>>>>>>>>>> before.
>>>>>>>>>>>
>>>>>>>>>>> Could you find the cause of my problem?
>>>>>>>>>>>
>>>>>>>>>>> Best regards,
>>>>>>>>>>> Kondo
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> 2015-03-25 17:20 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>>>>
>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>
>>>>>>>>>>> For some reason enabling quota was not successful. We may
>>>>>>>>>>> have to re-try enabling quota.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Vijay
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tuesday 24 March 2015 07:08
>>>>>>>>>>> PM, kenji kondo wrote:
>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>> Thanks for checking.
>>>>>>>>>>>> Unfortunately, I can't stop the service right now because
>>>>>>>>>>>> many users are using it.
>>>>>>>>>>>> But I want to know the cause of this trouble, so I will plan
>>>>>>>>>>>> a stop. Please wait for the log.
>>>>>>>>>>>>
>>>>>>>>>>>> Best regards,
>>>>>>>>>>>> Kondo
>>>>>>>>>>>>
>>>>>>>>>>>> 2015-03-24 17:01 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>>
>>>>>>>>>>>> I couldn't find much clue in
>>>>>>>>>>>> the glusterd logs, other
>>>>>>>>>>>> than the error message you
>>>>>>>>>>>> mentioned below.
>>>>>>>>>>>> Can you try disabling and enabling quota again and see if
>>>>>>>>>>>> this starts quotad?
>>>>>>>>>>>>
>>>>>>>>>>>> Try below command:
>>>>>>>>>>>> # gluster volume quota
>>>>>>>>>>>> <volname> disable
>>>>>>>>>>>>
>>>>>>>>>>>> wait for all quota processes to terminate
>>>>>>>>>>>> # ps -ef | grep quota
>>>>>>>>>>>>
>>>>>>>>>>>> # service glusterd stop
>>>>>>>>>>>> # glusterd -LDEBUG
>>>>>>>>>>>> # gluster volume quota
>>>>>>>>>>>> <volname> enable
>>>>>>>>>>>>
>>>>>>>>>>>> Now verify if quotad is running
>>>>>>>>>>>>
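>>>>>>>>>>>> For example:
>>>>>>>>>>>>
>>>>>>>>>>>> # ps -ef | grep quotad
>>>>>>>>>>>> # gluster volume status <volname>
>>>>>>>>>>>>
>>>>>>>>>>>> The status output should list a "Quota Daemon" line with
>>>>>>>>>>>> Online set to Y.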
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Vijay
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Monday 23 March 2015
>>>>>>>>>>>> 06:24 PM, kenji kondo wrote:
>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>> As you pointed out, quotad is not running on any of the
>>>>>>>>>>>>> servers.
>>>>>>>>>>>>> I checked the volume status and got the following output.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Quota Daemon on gluster25        N/A     N       N/A
>>>>>>>>>>>>>
>>>>>>>>>>>>> So I attached the requested log,
>>>>>>>>>>>>> 'etc-glusterfs-glusterd.vol.log'.
>>>>>>>>>>>>> The error messages can be found in the log.
>>>>>>>>>>>>>
>>>>>>>>>>>>> [2015-03-19 11:51:07.457697] E [glusterd-quota.c:1467:glusterd_op_stage_quota] 0-management: Quota is disabled, please enable quota
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you need more information to solve this problem, please
>>>>>>>>>>>>> ask me.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>> Kondo
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2015-03-23 16:04 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you please verify
>>>>>>>>>>>>> if quotad is running?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> root@rh1:~ # gluster volume status
>>>>>>>>>>>>> Status of volume: vol1
>>>>>>>>>>>>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>> Brick rh1:/var/opt/gluster/bricks/b1/dir   49152     0          Y       1858
>>>>>>>>>>>>> NFS Server on localhost                    2049      0          Y       1879
>>>>>>>>>>>>> Quota Daemon on localhost                  N/A       N/A        Y       1914
>>>>>>>>>>>>> Task Status of Volume vol1
>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>> There are no active volume tasks
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> root@rh1:~ # ps -ef | grep quotad
>>>>>>>>>>>>> root      1914     1  0 12:29 ?        00:00:00 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/bb6ab82f70f555fd5c0e188fa4e09584.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
>>>>>>>>>>>>> root      1970  1511  0 12:31 pts/1    00:00:00 grep quotad
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> root@rh1:~ # gluster volume info
>>>>>>>>>>>>> Volume Name: vol1
>>>>>>>>>>>>> Type: Distribute
>>>>>>>>>>>>> Volume ID: a55519ec-65d1-4741-9ad3-f94020fc9b21
>>>>>>>>>>>>> Status: Started
>>>>>>>>>>>>> Number of Bricks: 1
>>>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>>>> Bricks:
>>>>>>>>>>>>> Brick1: rh1:/var/opt/gluster/bricks/b1/dir
>>>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>>>> features.quota: on
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> If quotad is not running, can you please provide the
>>>>>>>>>>>>> glusterd log 'usr-local-etc-glusterfs-glusterd.vol.log'?
>>>>>>>>>>>>> I will check whether there are any issues starting quotad.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Vijay
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Monday 23 March 2015
>>>>>>>>>>>>> 11:54 AM, K.Kondo wrote:
>>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>>> I could not find "quotad.log" in the /var/log/glusterfs
>>>>>>>>>>>>>> directory on either the servers or the client, but another
>>>>>>>>>>>>>> test server has the log.
>>>>>>>>>>>>>> Do you know why the file is missing?
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Kondo
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 2015/03/23 13:41, Vijaikumar M <vmallika at redhat.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The log file 'quotad.log' is missing from the attachment.
>>>>>>>>>>>>>>> Can you provide this log file as well?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Vijay
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Monday 23 March
>>>>>>>>>>>>>>> 2015 09:50 AM, kenji
>>>>>>>>>>>>>>> kondo wrote:
>>>>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>>>>> Please find the logs attached.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>>> Kondo
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2015-03-23 12:53 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Can you please provide the glusterfs logs mentioned
>>>>>>>>>>>>>>>> below?
>>>>>>>>>>>>>>>> client logs (the log name is prefixed with the
>>>>>>>>>>>>>>>> mount-point dirname)
>>>>>>>>>>>>>>>> brick logs
>>>>>>>>>>>>>>>> quotad logs
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>> Vijay
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Friday 20
>>>>>>>>>>>>>>>> March 2015 06:31
>>>>>>>>>>>>>>>> PM, kenji kondo
>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>> Hi Vijay and Peter,
>>>>>>>>>>>>>>>>> Thanks for your reply.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I created a new volume "testvol" with two bricks and
>>>>>>>>>>>>>>>>> set quota on it to simplify this problem. I got the
>>>>>>>>>>>>>>>>> following glusterfs log after trying to create a
>>>>>>>>>>>>>>>>> directory and a file. BTW, my glusterd was upgraded
>>>>>>>>>>>>>>>>> from an older version, although I don't know whether
>>>>>>>>>>>>>>>>> that is related.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>>>> Kondo
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.931016] I
>>>>>>>>>>>>>>>>> [MSGID: 100030]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [glusterfsd.c:1998:main]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-/usr/sbin/glusterfs:
>>>>>>>>>>>>>>>>> Started running
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/sbin/glusterfs
>>>>>>>>>>>>>>>>> version
>>>>>>>>>>>>>>>>> 3.6.0.29 (args:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/sbin/glusterfs
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --volfile-server=gluster10
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --volfile-id=testvol
>>>>>>>>>>>>>>>>> testvol)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.944850] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [dht-shared.c:337:dht_init_regex]
>>>>>>>>>>>>>>>>> 0-testvol-dht:
>>>>>>>>>>>>>>>>> using regex
>>>>>>>>>>>>>>>>> rsync-hash-regex =
>>>>>>>>>>>>>>>>> ^\.(.+)\.[^.]+$
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.946256] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2280:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> parent
>>>>>>>>>>>>>>>>> translators are
>>>>>>>>>>>>>>>>> ready,
>>>>>>>>>>>>>>>>> attempting
>>>>>>>>>>>>>>>>> connect on
>>>>>>>>>>>>>>>>> transport
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.950674] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2280:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> parent
>>>>>>>>>>>>>>>>> translators are
>>>>>>>>>>>>>>>>> ready,
>>>>>>>>>>>>>>>>> attempting
>>>>>>>>>>>>>>>>> connect on
>>>>>>>>>>>>>>>>> transport
>>>>>>>>>>>>>>>>> Final graph:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1: volume
>>>>>>>>>>>>>>>>> testvol-client-0
>>>>>>>>>>>>>>>>> 2: type
>>>>>>>>>>>>>>>>> protocol/client
>>>>>>>>>>>>>>>>> 3: option
>>>>>>>>>>>>>>>>> ping-timeout 42
>>>>>>>>>>>>>>>>> 4: option
>>>>>>>>>>>>>>>>> remote-host
>>>>>>>>>>>>>>>>> gluster24
>>>>>>>>>>>>>>>>> 5: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> remote-subvolume /export25/brick
>>>>>>>>>>>>>>>>> 6: option
>>>>>>>>>>>>>>>>> transport-type
>>>>>>>>>>>>>>>>> socket
>>>>>>>>>>>>>>>>> 7: option
>>>>>>>>>>>>>>>>> send-gids true
>>>>>>>>>>>>>>>>> 8: end-volume
>>>>>>>>>>>>>>>>> 9:
>>>>>>>>>>>>>>>>> 10: volume
>>>>>>>>>>>>>>>>> testvol-client-1
>>>>>>>>>>>>>>>>> 11: type
>>>>>>>>>>>>>>>>> protocol/client
>>>>>>>>>>>>>>>>> 12: option
>>>>>>>>>>>>>>>>> ping-timeout 42
>>>>>>>>>>>>>>>>> 13: option
>>>>>>>>>>>>>>>>> remote-host
>>>>>>>>>>>>>>>>> gluster25
>>>>>>>>>>>>>>>>> 14: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> remote-subvolume /export25/brick
>>>>>>>>>>>>>>>>> 15: option
>>>>>>>>>>>>>>>>> transport-type
>>>>>>>>>>>>>>>>> socket
>>>>>>>>>>>>>>>>> 16: option
>>>>>>>>>>>>>>>>> send-gids true
>>>>>>>>>>>>>>>>> 17: end-volume
>>>>>>>>>>>>>>>>> 18:
>>>>>>>>>>>>>>>>> 19: volume
>>>>>>>>>>>>>>>>> testvol-dht
>>>>>>>>>>>>>>>>> 20: type
>>>>>>>>>>>>>>>>> cluster/distribute
>>>>>>>>>>>>>>>>> 21: subvolumes
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-client-0 testvol-client-1
>>>>>>>>>>>>>>>>> 22: end-volume
>>>>>>>>>>>>>>>>> 23:
>>>>>>>>>>>>>>>>> 24: volume
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-write-behind
>>>>>>>>>>>>>>>>> 25: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/write-behind
>>>>>>>>>>>>>>>>> 26: subvolumes
>>>>>>>>>>>>>>>>> testvol-dht
>>>>>>>>>>>>>>>>> 27: end-volume
>>>>>>>>>>>>>>>>> 28:
>>>>>>>>>>>>>>>>> 29: volume
>>>>>>>>>>>>>>>>> testvol-read-ahead
>>>>>>>>>>>>>>>>> 30: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/read-ahead
>>>>>>>>>>>>>>>>> 31: subvolumes
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-write-behind
>>>>>>>>>>>>>>>>> 32: end-volume
>>>>>>>>>>>>>>>>> 33:
>>>>>>>>>>>>>>>>> 34: volume
>>>>>>>>>>>>>>>>> testvol-io-cache
>>>>>>>>>>>>>>>>> 35: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/io-cache
>>>>>>>>>>>>>>>>> 36: subvolumes
>>>>>>>>>>>>>>>>> testvol-read-ahead
>>>>>>>>>>>>>>>>> 37: end-volume
>>>>>>>>>>>>>>>>> 38:
>>>>>>>>>>>>>>>>> 39: volume
>>>>>>>>>>>>>>>>> testvol-quick-read
>>>>>>>>>>>>>>>>> 40: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/quick-read
>>>>>>>>>>>>>>>>> 41: subvolumes
>>>>>>>>>>>>>>>>> testvol-io-cache
>>>>>>>>>>>>>>>>> 42: end-volume
>>>>>>>>>>>>>>>>> 43:
>>>>>>>>>>>>>>>>> 44: volume
>>>>>>>>>>>>>>>>> testvol-md-cache
>>>>>>>>>>>>>>>>> 45: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/md-cache
>>>>>>>>>>>>>>>>> 46: subvolumes
>>>>>>>>>>>>>>>>> testvol-quick-read
>>>>>>>>>>>>>>>>> 47: end-volume
>>>>>>>>>>>>>>>>> 48:
>>>>>>>>>>>>>>>>> 49: volume
>>>>>>>>>>>>>>>>> testvol
>>>>>>>>>>>>>>>>> 50: type
>>>>>>>>>>>>>>>>> debug/io-stats
>>>>>>>>>>>>>>>>> 51: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> latency-measurement
>>>>>>>>>>>>>>>>> off
>>>>>>>>>>>>>>>>> 52: option
>>>>>>>>>>>>>>>>> count-fop-hits off
>>>>>>>>>>>>>>>>> 53: subvolumes
>>>>>>>>>>>>>>>>> testvol-md-cache
>>>>>>>>>>>>>>>>> 54: end-volume
>>>>>>>>>>>>>>>>> 55:
>>>>>>>>>>>>>>>>> 56: volume
>>>>>>>>>>>>>>>>> meta-autoload
>>>>>>>>>>>>>>>>> 57: type meta
>>>>>>>>>>>>>>>>> 58: subvolumes
>>>>>>>>>>>>>>>>> testvol
>>>>>>>>>>>>>>>>> 59: end-volume
>>>>>>>>>>>>>>>>> 60:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.955337] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [rpc-clnt.c:1759:rpc_clnt_reconfig]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> changing port
>>>>>>>>>>>>>>>>> to 49155 (from 0)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.957549] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [rpc-clnt.c:1759:rpc_clnt_reconfig]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> changing port
>>>>>>>>>>>>>>>>> to 49155 (from 0)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.959889] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1415:select_server_supported_programs]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> Using Program
>>>>>>>>>>>>>>>>> GlusterFS 3.3,
>>>>>>>>>>>>>>>>> Num (1298437),
>>>>>>>>>>>>>>>>> Version (330)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.960090] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1415:select_server_supported_programs]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> Using Program
>>>>>>>>>>>>>>>>> GlusterFS 3.3,
>>>>>>>>>>>>>>>>> Num (1298437),
>>>>>>>>>>>>>>>>> Version (330)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.960376] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1200:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> Connected to
>>>>>>>>>>>>>>>>> testvol-client-0,
>>>>>>>>>>>>>>>>> attached to
>>>>>>>>>>>>>>>>> remote volume
>>>>>>>>>>>>>>>>> '/export25/brick'.
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.960405] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1212:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> Server and
>>>>>>>>>>>>>>>>> Client
>>>>>>>>>>>>>>>>> lk-version
>>>>>>>>>>>>>>>>> numbers are not
>>>>>>>>>>>>>>>>> same, reopening
>>>>>>>>>>>>>>>>> the fds
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.960471] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1200:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> Connected to
>>>>>>>>>>>>>>>>> testvol-client-1,
>>>>>>>>>>>>>>>>> attached to
>>>>>>>>>>>>>>>>> remote volume
>>>>>>>>>>>>>>>>> '/export25/brick'.
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.960478] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1212:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> Server and
>>>>>>>>>>>>>>>>> Client
>>>>>>>>>>>>>>>>> lk-version
>>>>>>>>>>>>>>>>> numbers are not
>>>>>>>>>>>>>>>>> same, reopening
>>>>>>>>>>>>>>>>> the fds
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.962288] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [fuse-bridge.c:5042:fuse_graph_setup]
>>>>>>>>>>>>>>>>> 0-fuse:
>>>>>>>>>>>>>>>>> switched to
>>>>>>>>>>>>>>>>> graph 0
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.962351] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:188:client_set_lk_version_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> Server lk
>>>>>>>>>>>>>>>>> version = 1
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.962362] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:188:client_set_lk_version_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> Server lk
>>>>>>>>>>>>>>>>> version = 1
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:42:52.962424] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [fuse-bridge.c:3971:fuse_init]
>>>>>>>>>>>>>>>>> 0-glusterfs-fuse:
>>>>>>>>>>>>>>>>> FUSE inited
>>>>>>>>>>>>>>>>> with protocol
>>>>>>>>>>>>>>>>> versions:
>>>>>>>>>>>>>>>>> glusterfs 7.22
>>>>>>>>>>>>>>>>> kernel 7.14
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:13.352234] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [glusterfsd-mgmt.c:56:mgmt_cbk_spec]
>>>>>>>>>>>>>>>>> 0-mgmt: Volume
>>>>>>>>>>>>>>>>> file changed
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.518667] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [dht-shared.c:337:dht_init_regex]
>>>>>>>>>>>>>>>>> 2-testvol-dht:
>>>>>>>>>>>>>>>>> using regex
>>>>>>>>>>>>>>>>> rsync-hash-regex =
>>>>>>>>>>>>>>>>> ^\.(.+)\.[^.]+$
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.520034] W
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [graph.c:344:_log_if_unknown_option]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-quota: option
>>>>>>>>>>>>>>>>> 'timeout' is
>>>>>>>>>>>>>>>>> not recognized
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.520091] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2280:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> parent
>>>>>>>>>>>>>>>>> translators are
>>>>>>>>>>>>>>>>> ready,
>>>>>>>>>>>>>>>>> attempting
>>>>>>>>>>>>>>>>> connect on
>>>>>>>>>>>>>>>>> transport
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.524546] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2280:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> parent
>>>>>>>>>>>>>>>>> translators are
>>>>>>>>>>>>>>>>> ready,
>>>>>>>>>>>>>>>>> attempting
>>>>>>>>>>>>>>>>> connect on
>>>>>>>>>>>>>>>>> transport
>>>>>>>>>>>>>>>>> Final graph:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1: volume
>>>>>>>>>>>>>>>>> testvol-client-0
>>>>>>>>>>>>>>>>> 2: type
>>>>>>>>>>>>>>>>> protocol/client
>>>>>>>>>>>>>>>>> 3: option
>>>>>>>>>>>>>>>>> ping-timeout 42
>>>>>>>>>>>>>>>>> 4: option
>>>>>>>>>>>>>>>>> remote-host
>>>>>>>>>>>>>>>>> gluster24
>>>>>>>>>>>>>>>>> 5: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> remote-subvolume /export25/brick
>>>>>>>>>>>>>>>>> 6: option
>>>>>>>>>>>>>>>>> transport-type
>>>>>>>>>>>>>>>>> socket
>>>>>>>>>>>>>>>>> 7: option
>>>>>>>>>>>>>>>>> send-gids true
>>>>>>>>>>>>>>>>> 8: end-volume
>>>>>>>>>>>>>>>>> 9:
>>>>>>>>>>>>>>>>> 10: volume
>>>>>>>>>>>>>>>>> testvol-client-1
>>>>>>>>>>>>>>>>> 11: type
>>>>>>>>>>>>>>>>> protocol/client
>>>>>>>>>>>>>>>>> 12: option
>>>>>>>>>>>>>>>>> ping-timeout 42
>>>>>>>>>>>>>>>>> 13: option
>>>>>>>>>>>>>>>>> remote-host
>>>>>>>>>>>>>>>>> gluster25
>>>>>>>>>>>>>>>>> 14: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> remote-subvolume /export25/brick
>>>>>>>>>>>>>>>>> 15: option
>>>>>>>>>>>>>>>>> transport-type
>>>>>>>>>>>>>>>>> socket
>>>>>>>>>>>>>>>>> 16: option
>>>>>>>>>>>>>>>>> send-gids true
>>>>>>>>>>>>>>>>> 17: end-volume
>>>>>>>>>>>>>>>>> 18:
>>>>>>>>>>>>>>>>> 19: volume
>>>>>>>>>>>>>>>>> testvol-dht
>>>>>>>>>>>>>>>>> 20: type
>>>>>>>>>>>>>>>>> cluster/distribute
>>>>>>>>>>>>>>>>> 21: subvolumes
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-client-0 testvol-client-1
>>>>>>>>>>>>>>>>> 22: end-volume
>>>>>>>>>>>>>>>>> 23:
>>>>>>>>>>>>>>>>> 24: volume
>>>>>>>>>>>>>>>>> testvol-quota
>>>>>>>>>>>>>>>>> 25: type
>>>>>>>>>>>>>>>>> features/quota
>>>>>>>>>>>>>>>>> 26: option
>>>>>>>>>>>>>>>>> timeout 0
>>>>>>>>>>>>>>>>> 27: option
>>>>>>>>>>>>>>>>> deem-statfs off
>>>>>>>>>>>>>>>>> 28: subvolumes
>>>>>>>>>>>>>>>>> testvol-dht
>>>>>>>>>>>>>>>>> 29: end-volume
>>>>>>>>>>>>>>>>> 30:
>>>>>>>>>>>>>>>>> 31: volume
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-write-behind
>>>>>>>>>>>>>>>>> 32: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/write-behind
>>>>>>>>>>>>>>>>> 33: subvolumes
>>>>>>>>>>>>>>>>> testvol-quota
>>>>>>>>>>>>>>>>> 34: end-volume
>>>>>>>>>>>>>>>>> 35:
>>>>>>>>>>>>>>>>> 36: volume
>>>>>>>>>>>>>>>>> testvol-read-ahead
>>>>>>>>>>>>>>>>> 37: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/read-ahead
>>>>>>>>>>>>>>>>> 38: subvolumes
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> testvol-write-behind
>>>>>>>>>>>>>>>>> 39: end-volume
>>>>>>>>>>>>>>>>> 40:
>>>>>>>>>>>>>>>>> 41: volume
>>>>>>>>>>>>>>>>> testvol-io-cache
>>>>>>>>>>>>>>>>> 42: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/io-cache
>>>>>>>>>>>>>>>>> 43: subvolumes
>>>>>>>>>>>>>>>>> testvol-read-ahead
>>>>>>>>>>>>>>>>> 44: end-volume
>>>>>>>>>>>>>>>>> 45:
>>>>>>>>>>>>>>>>> 46: volume
>>>>>>>>>>>>>>>>> testvol-quick-read
>>>>>>>>>>>>>>>>> 47: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/quick-read
>>>>>>>>>>>>>>>>> 48: subvolumes
>>>>>>>>>>>>>>>>> testvol-io-cache
>>>>>>>>>>>>>>>>> 49: end-volume
>>>>>>>>>>>>>>>>> 50:
>>>>>>>>>>>>>>>>> 51: volume
>>>>>>>>>>>>>>>>> testvol-md-cache
>>>>>>>>>>>>>>>>> 52: type
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> performance/md-cache
>>>>>>>>>>>>>>>>> 53: subvolumes
>>>>>>>>>>>>>>>>> testvol-quick-read
>>>>>>>>>>>>>>>>> 54: end-volume
>>>>>>>>>>>>>>>>> 55:
>>>>>>>>>>>>>>>>> 56: volume
>>>>>>>>>>>>>>>>> testvol
>>>>>>>>>>>>>>>>> 57: type
>>>>>>>>>>>>>>>>> debug/io-stats
>>>>>>>>>>>>>>>>> 58: option
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> latency-measurement
>>>>>>>>>>>>>>>>> off
>>>>>>>>>>>>>>>>> 59: option
>>>>>>>>>>>>>>>>> count-fop-hits off
>>>>>>>>>>>>>>>>> 60: subvolumes
>>>>>>>>>>>>>>>>> testvol-md-cache
>>>>>>>>>>>>>>>>> 61: end-volume
>>>>>>>>>>>>>>>>> 62:
>>>>>>>>>>>>>>>>> 63: volume
>>>>>>>>>>>>>>>>> meta-autoload
>>>>>>>>>>>>>>>>> 64: type meta
>>>>>>>>>>>>>>>>> 65: subvolumes
>>>>>>>>>>>>>>>>> testvol
>>>>>>>>>>>>>>>>> 66: end-volume
>>>>>>>>>>>>>>>>> 67:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.530005] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [rpc-clnt.c:1759:rpc_clnt_reconfig]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> changing port
>>>>>>>>>>>>>>>>> to 49155 (from 0)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.530047] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [rpc-clnt.c:1759:rpc_clnt_reconfig]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> changing port
>>>>>>>>>>>>>>>>> to 49155 (from 0)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539062] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1415:select_server_supported_programs]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> Using Program
>>>>>>>>>>>>>>>>> GlusterFS 3.3,
>>>>>>>>>>>>>>>>> Num (1298437),
>>>>>>>>>>>>>>>>> Version (330)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539299] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1415:select_server_supported_programs]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> Using Program
>>>>>>>>>>>>>>>>> GlusterFS 3.3,
>>>>>>>>>>>>>>>>> Num (1298437),
>>>>>>>>>>>>>>>>> Version (330)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539462] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1200:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> Connected to
>>>>>>>>>>>>>>>>> testvol-client-1,
>>>>>>>>>>>>>>>>> attached to
>>>>>>>>>>>>>>>>> remote volume
>>>>>>>>>>>>>>>>> '/export25/brick'.
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539485] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1212:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> Server and
>>>>>>>>>>>>>>>>> Client
>>>>>>>>>>>>>>>>> lk-version
>>>>>>>>>>>>>>>>> numbers are not
>>>>>>>>>>>>>>>>> same, reopening
>>>>>>>>>>>>>>>>> the fds
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539729] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1200:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> Connected to
>>>>>>>>>>>>>>>>> testvol-client-0,
>>>>>>>>>>>>>>>>> attached to
>>>>>>>>>>>>>>>>> remote volume
>>>>>>>>>>>>>>>>> '/export25/brick'.
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.539751] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:1212:client_setvolume_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> Server and
>>>>>>>>>>>>>>>>> Client
>>>>>>>>>>>>>>>>> lk-version
>>>>>>>>>>>>>>>>> numbers are not
>>>>>>>>>>>>>>>>> same, reopening
>>>>>>>>>>>>>>>>> the fds
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.542878] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [fuse-bridge.c:5042:fuse_graph_setup]
>>>>>>>>>>>>>>>>> 0-fuse:
>>>>>>>>>>>>>>>>> switched to
>>>>>>>>>>>>>>>>> graph 2
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.542959] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:188:client_set_lk_version_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> Server lk
>>>>>>>>>>>>>>>>> version = 1
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:47:15.542987] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-handshake.c:188:client_set_lk_version_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> Server lk
>>>>>>>>>>>>>>>>> version = 1
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.586291] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2289:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> current graph
>>>>>>>>>>>>>>>>> is no longer
>>>>>>>>>>>>>>>>> active,
>>>>>>>>>>>>>>>>> destroying
>>>>>>>>>>>>>>>>> rpc_client
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.586360] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2289:notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> current graph
>>>>>>>>>>>>>>>>> is no longer
>>>>>>>>>>>>>>>>> active,
>>>>>>>>>>>>>>>>> destroying
>>>>>>>>>>>>>>>>> rpc_client
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.586378] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2215:client_rpc_notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-0:
>>>>>>>>>>>>>>>>> disconnected
>>>>>>>>>>>>>>>>> from
>>>>>>>>>>>>>>>>> testvol-client-0.
>>>>>>>>>>>>>>>>> Client process
>>>>>>>>>>>>>>>>> will keep
>>>>>>>>>>>>>>>>> trying to
>>>>>>>>>>>>>>>>> connect to
>>>>>>>>>>>>>>>>> glusterd until
>>>>>>>>>>>>>>>>> brick's port is
>>>>>>>>>>>>>>>>> available
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.586430] I
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client.c:2215:client_rpc_notify]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 0-testvol-client-1:
>>>>>>>>>>>>>>>>> disconnected
>>>>>>>>>>>>>>>>> from
>>>>>>>>>>>>>>>>> testvol-client-1.
>>>>>>>>>>>>>>>>> Client process
>>>>>>>>>>>>>>>>> will keep
>>>>>>>>>>>>>>>>> trying to
>>>>>>>>>>>>>>>>> connect to
>>>>>>>>>>>>>>>>> glusterd until
>>>>>>>>>>>>>>>>> brick's port is
>>>>>>>>>>>>>>>>> available
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.589552] W
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-rpc-fops.c:306:client3_3_mkdir_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-0:
>>>>>>>>>>>>>>>>> remote
>>>>>>>>>>>>>>>>> operation
>>>>>>>>>>>>>>>>> failed:
>>>>>>>>>>>>>>>>> Transport
>>>>>>>>>>>>>>>>> endpoint is not
>>>>>>>>>>>>>>>>> connected.
>>>>>>>>>>>>>>>>> Path: /test/a
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:04.589608] W
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [fuse-bridge.c:481:fuse_entry_cbk]
>>>>>>>>>>>>>>>>> 0-glusterfs-fuse:
>>>>>>>>>>>>>>>>> 78: MKDIR()
>>>>>>>>>>>>>>>>> /test/a => -1
>>>>>>>>>>>>>>>>> (Transport
>>>>>>>>>>>>>>>>> endpoint is not
>>>>>>>>>>>>>>>>> connected)
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:11.073349] W
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [client-rpc-fops.c:2212:client3_3_create_cbk]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2-testvol-client-1:
>>>>>>>>>>>>>>>>> remote
>>>>>>>>>>>>>>>>> operation
>>>>>>>>>>>>>>>>> failed:
>>>>>>>>>>>>>>>>> Transport
>>>>>>>>>>>>>>>>> endpoint is not
>>>>>>>>>>>>>>>>> connected.
>>>>>>>>>>>>>>>>> Path: /test/f
>>>>>>>>>>>>>>>>> [2015-03-20
>>>>>>>>>>>>>>>>> 03:48:11.073419] W
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [fuse-bridge.c:1937:fuse_create_cbk]
>>>>>>>>>>>>>>>>> 0-glusterfs-fuse:
>>>>>>>>>>>>>>>>> 82: /test/f =>
>>>>>>>>>>>>>>>>> -1 (Transport
>>>>>>>>>>>>>>>>> endpoint is not
>>>>>>>>>>>>>>>>> connected)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2015-03-20 11:27 GMT+09:00 Vijaikumar M <vmallika at redhat.com>:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi Kondo,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Can you
>>>>>>>>>>>>>>>>> please
>>>>>>>>>>>>>>>>> provide all
>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>> glusterfs
>>>>>>>>>>>>>>>>> log files?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>> Vijay
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Friday
>>>>>>>>>>>>>>>>> 20 March
>>>>>>>>>>>>>>>>> 2015 07:33
>>>>>>>>>>>>>>>>> AM, K.Kondo
>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>> Hello, experts,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I had a problem with quota. I set a quota on one
>>>>>>>>>>>>>>>>>> distributed volume, "vol12", as below.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 enable
>>>>>>>>>>>>>>>>>> volume quota : success
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 limit-usage /test 10GB
>>>>>>>>>>>>>>>>>> volume quota : success
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> But I couldn't create a file or a directory; I got
>>>>>>>>>>>>>>>>>> the error message below.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On a client host,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> $ cd test    (mounted using fuse)
>>>>>>>>>>>>>>>>>> $ mkdir a
>>>>>>>>>>>>>>>>>> mkdir: cannot create directory `a': Transport endpoint is not connected
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Additionally, I couldn't check the quota status using
>>>>>>>>>>>>>>>>>> the gluster command.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 list
>>>>>>>>>>>>>>>>>>                   Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
>>>>>>>>>>>>>>>>>> ---------------------------------------------------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Here the command hangs, so I have to press Ctrl-C.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The Gluster version is 3.6.1 on the servers and
>>>>>>>>>>>>>>>>>> 3.6.0.29 on the clients.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any idea
>>>>>>>>>>>>>>>>>> for this?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> K. Kondo
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
--
~Atin