[Gluster-devel] Input/output error when files in .shard folder are deleted

Krutika Dhananjay kdhananj at redhat.com
Thu Oct 27 08:26:52 UTC 2016


Found the RC. The problem is that the sharding translator attempts to
create non-existent shards in the read/write codepaths, attaching a
newly generated gfid to the create request when the shard is absent.
The replicate translator, which sits below sharding in the stack,
takes this request and plays it on all of its replicas. On two of them
it fails with EEXIST, while on the one where the shards were removed
from the backend, the shard path is created, but with the newly
generated gfid; the other two replicas continue to hold the original
gfid (the one from before the rm -rf). Although this can be fixed, the
fix would require one additional lookup per shard for every read/write
operation, increasing the latency of each read/write response to the
application by one network round-trip.
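
The gfid split described above can be sketched with a tiny model
(plain Python, not GlusterFS internals; the class, paths and return
values here are illustrative assumptions):

```python
# Hypothetical model of the gfid split: three replicas, one wiped.
import uuid

class Brick:
    """One replica's backend: maps shard path -> gfid."""
    def __init__(self):
        self.shards = {}

    def mknod(self, path, gfid):
        # A backend create fails with EEXIST if the path already exists.
        if path in self.shards:
            return "EEXIST"
        self.shards[path] = gfid
        return "OK"

# All three replicas initially hold the shard with the original gfid.
original_gfid = uuid.uuid4()
bricks = [Brick() for _ in range(3)]
for b in bricks:
    b.mknod("/.shard/base.1", original_gfid)

# Simulate `rm -rf` of the shard on one brick's backend.
del bricks[2].shards["/.shard/base.1"]

# On the next read/write, the shard xlator re-creates the "missing"
# shard with a freshly generated gfid; AFR plays it on every replica.
new_gfid = uuid.uuid4()
results = [b.mknod("/.shard/base.1", new_gfid) for b in bricks]

# Two replicas reject it with EEXIST (they keep the original gfid);
# the wiped replica accepts the new one, so the replicas' gfids now
# disagree, which is the mismatch AFR reports as Input/output error.
gfids = {b.shards["/.shard/base.1"] for b in bricks}
```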

The test you're doing partially (but not fully) manipulates and
removes data directly from the backend, which is not recommended.

My question to you is this: what specific failure are you trying to
simulate by removing the contents of .shard? Normally, `rm -rf on the
backend` tests are performed to simulate a disk failure and its
replacement with a brand-new disk, in which case running the
replace-brick/reset-brick commands should be sufficient to recover
all contents from the remaining two replicas.
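
For reference, a disk-replacement recovery along those lines usually
looks something like the following (the volume name, host and brick
paths are placeholders, not taken from this thread; syntax as in the
3.7-era CLI). This needs a live cluster, so it is a sketch rather than
something to paste verbatim:

```shell
# Swap the brick on the failed disk for one on the new disk.
gluster volume replace-brick testvol \
    server1:/bricks/old_disk/brick server1:/bricks/new_disk/brick \
    commit force

# Then let self-heal rebuild its contents from the surviving replicas.
gluster volume heal testvol full
```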

-Krutika

On Thu, Oct 27, 2016 at 12:49 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:

> Now it's reproducible, thanks. :)
>
> I think I know the RC. Let me confirm it through tests and report back.
>
> -Krutika
>
> On Thu, Oct 27, 2016 at 10:42 AM, qingwei wei <tchengwee at gmail.com> wrote:
>
>> Hi,
>>
>> I did a few more test runs and it seems that it happens during this
>> sequence:
>>
>> 1. Populate data using dd.
>> 2. Delete ALL the shard files in one brick's .shard folder.
>> 3. Try to access using dd; no error reported.
>> 4. umount and mount.
>> 5. Try to access using dd; no error reported.
>> 6. umount and mount.
>> 7. Try to access using dd; Input/output error reported.
>>
>> During steps 3 and 4, no file is created under the .shard directory.
>> For step 7, a shard file is created with the same file name but a
>> different gfid compared to the other good replicas.
>>
>> Below is the client log and brick log with more details in the attached
>> log.
>>
>> Client log
>>
>> [2016-10-27 04:34:46.493281] D [MSGID: 0]
>> [shard.c:3138:shard_common_mknod_cbk] 0-testHeal4-shard: mknod of
>> shard 1 failed: File exists
>> [2016-10-27 04:34:46.493351] D [MSGID: 0]
>> [dht-common.c:2633:dht_lookup] 0-testHeal4-dht: Calling fresh lookup
>> for /.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 on
>> testHeal4-replicate-0
>> [2016-10-27 04:34:46.494646] W [MSGID: 114031]
>> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-0:
>> remote operation failed. Path: (null)
>> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> [2016-10-27 04:34:46.494673] D [MSGID: 0]
>> [client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
>> stack-address: 0x7f9083edc1c8, testHeal4-client-0 returned -1 error:
>> Invalid argument [Invalid argument]
>> [2016-10-27 04:34:46.494705] W [MSGID: 114031]
>> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-1:
>> remote operation failed. Path: (null)
>> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> [2016-10-27 04:34:46.494710] W [MSGID: 114031]
>> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-2:
>> remote operation failed. Path: (null)
>> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> [2016-10-27 04:34:46.494730] D [MSGID: 0]
>> [client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
>> stack-address: 0x7f9083edc1c8, testHeal4-client-1 returned -1 error:
>> Invalid argument [Invalid argument]
>> [2016-10-27 04:34:46.494751] D [MSGID: 0]
>> [client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
>> stack-address: 0x7f9083edc1c8, testHeal4-client-2 returned -1 error:
>> Invalid argument [Invalid argument]
>> [2016-10-27 04:34:46.495339] D [MSGID: 0]
>> [afr-common.c:1986:afr_lookup_done] 0-stack-trace: stack-address:
>> 0x7f9083edbb1c, testHeal4-replicate-0 returned -1 error: Input/output
>> error [Input/output error]
>> [2016-10-27 04:34:46.495364] D [MSGID: 0]
>> [dht-common.c:2220:dht_lookup_cbk] 0-testHeal4-dht: fresh_lookup
>> returned for /.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 with
>> op_ret -1 [Input/output error]
>> [2016-10-27 04:34:46.495374] D [MSGID: 0]
>> [dht-common.c:2300:dht_lookup_cbk] 0-testHeal4-dht: Lookup of
>> /.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 for subvolume
>> testHeal4-replicate-0 failed [Input/output error]
>> [2016-10-27 04:34:46.495384] D [MSGID: 0]
>> [dht-common.c:2363:dht_lookup_cbk] 0-stack-trace: stack-address:
>> 0x7f9083edbb1c, testHeal4-dht returned -1 error: Input/output error
>> [Input/output error]
>> [2016-10-27 04:34:46.495395] E [MSGID: 133010]
>> [shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal4-shard:
>> Lookup on shard 1 failed. Base file gfid =
>> 76bc4b0f-bb18-4736-8327-99098cd0d7ce [Input/output error]
>> [2016-10-27 04:34:46.495406] D [MSGID: 0]
>> [shard.c:3086:shard_post_lookup_shards_readv_handler] 0-stack-trace:
>> stack-address: 0x7f9083edbb1c, testHeal4-shard returned -1 error:
>> Input/output error [Input/output error]
>> [2016-10-27 04:34:46.495417] D [MSGID: 0]
>> [defaults.c:1010:default_readv_cbk] 0-stack-trace: stack-address:
>> 0x7f9083edbb1c, testHeal4-write-behind returned -1 error: Input/output
>> error [Input/output error]
>> [2016-10-27 04:34:46.495428] D [MSGID: 0]
>> [read-ahead.c:462:ra_readv_disabled_cbk] 0-stack-trace: stack-address:
>> 0x7f9083edbb1c, testHeal4-read-ahead returned -1 error: Input/output
>> error [Input/output error]
>>
>> brick log
>>
>> [2016-10-27 04:34:46.492055] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: STATFS
>> scheduled as fast fop
>> [2016-10-27 04:34:46.492157] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 3
>> [2016-10-27 04:34:46.492180] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.492239] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_statfs_cbk+0x112)
>> [0x7efebb36f8e2]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.492271] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.492535] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_mknod+0x80)
>> [0x7efebb37b690]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.492565] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: MKNOD
>> scheduled as normal fop
>> [2016-10-27 04:34:46.492843] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_mknod_cbk+0x5ad)
>> [0x7efebb383c9d]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.492981] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_xattrop+0x86)
>> [0x7efebb3789d6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.493056] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: XATTROP
>> scheduled as slow fop
>> [2016-10-27 04:34:46.493128] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 3
>> [2016-10-27 04:34:46.493148] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.493214] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_xattrop_cbk+0xd9)
>> [0x7efebb370579]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.493239] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.493490] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.493514] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
>> scheduled as fast fop
>> [2016-10-27 04:34:46.493666] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.493782] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.493986] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.494596] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.494616] D [logging.c:1954:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2016-10-27 04:34:46.493818] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
>> scheduled as fast fop
>> [2016-10-27 04:34:46.494616] W [MSGID: 115009]
>> [server-resolve.c:574:server_resolve] 0-testHeal4-server: no
>> resolution type for (null) (LOOKUP)
>> [2016-10-27 04:34:46.494650] E [MSGID: 115050]
>> [server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 29:
>> LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-
>> 8327-99098cd0d7ce.1)
>> ==> (Invalid argument) [Invalid argument]
>> [2016-10-27 04:34:46.494720] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.494936] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.494967] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
>> scheduled as fast fop
>> [2016-10-27 04:34:46.495108] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.595813] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.595915] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 3
>> [2016-10-27 04:34:46.596054] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.596162] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.596427] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.596453] D [logging.c:1954:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [io-threads.c:351:iot_schedule]
>> 0-testHeal4-io-threads: LOOKUP scheduled as fast fop" repeated 2 times
>> between [2016-10-27 04:34:46.494967] and [2016-10-27 04:34:46.595944]
>> [2016-10-27 04:34:46.596453] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.596551] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 3
>> [2016-10-27 04:34:46.596603] D [logging.c:1954:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2016-10-27 04:34:46.596611] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.596570] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.596602] D [MSGID: 0]
>> [entrylk.c:701:pl_common_entrylk] 0-stack-trace: stack-address:
>> 0x7efecd3db738, testHeal4-locks returned -1 error: Resource
>> temporarily unavailable [Resource temporarily unavailable]
>> [2016-10-27 04:34:46.596650] D [MSGID: 0]
>> [defaults.c:1196:default_entrylk_cbk] 0-stack-trace: stack-address:
>> 0x7efecd3db738, testHeal4-io-threads returned -1 error: Resource
>> temporarily unavailable [Resource temporarily unavailable]
>> [2016-10-27 04:34:46.596664] D [MSGID: 0]
>> [io-stats.c:1811:io_stats_entrylk_cbk] 0-stack-trace: stack-address:
>> 0x7efecd3db738, /mnt/sdc_mssd/testHeal4 returned -1 error: Resource
>> temporarily unavailable [Resource temporarily unavailable]
>> [2016-10-27 04:34:46.596676] D [MSGID: 115054]
>> [server-rpc-fops.c:350:server_entrylk_cbk] 0-testHeal4-server: 34:
>> ENTRYLK /.shard (be318638-e8a0-4c6d-977d-7a937aa84806) ==> (Resource
>> temporarily unavailable) [Resource temporarily unavailable]
>> [2016-10-27 04:34:46.596764] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.596791] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_mknod+0x80)
>> [0x7efebb37b690]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.596813] D [MSGID: 0]
>> [server-resolve.c:330:resolve_entry_simple] 0-testHeal4-server: inode
>> (pointer: 0x7efea1aebaa0 gfid:041e3b34-14c2-4bb1-82e2-db352232c3cf
>> found for path ((null)) while type is RESOLVE_NOT
>> [2016-10-27 04:34:46.596828] D [MSGID: 115057]
>> [server-rpc-fops.c:563:server_mknod_cbk] 0-testHeal4-server: 35: MKNOD
>> (null) (be318638-e8a0-4c6d-977d-7a937aa84806/76bc4b0f-bb18-4736-
>> 8327-99098cd0d7ce.1)
>> ==> (File exists) [File exists]
>> [2016-10-27 04:34:46.596896] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_mknod_cbk+0x5ad)
>> [0x7efebb383c9d]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.597174] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.597199] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.597289] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.597396] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.597571] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.597604] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.597632] D [logging.c:1954:_gf_msg_internal]
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>> About to flush least recently used log message to disk
>> [2016-10-27 04:34:46.597415] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.597632] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
>> scheduled as fast fop
>> [2016-10-27 04:34:46.597864] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.598116] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_mknod+0x80)
>> [0x7efebb37b690]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.598134] D [MSGID: 0]
>> [server-resolve.c:330:resolve_entry_simple] 0-testHeal4-server: inode
>> (pointer: 0x7efea1aebaa0 gfid:041e3b34-14c2-4bb1-82e2-db352232c3cf
>> found for path ((null)) while type is RESOLVE_NOT
>> [2016-10-27 04:34:46.598147] D [MSGID: 115057]
>> [server-rpc-fops.c:563:server_mknod_cbk] 0-testHeal4-server: 39: MKNOD
>> (null) (be318638-e8a0-4c6d-977d-7a937aa84806/76bc4b0f-bb18-4736-
>> 8327-99098cd0d7ce.1)
>> ==> (File exists) [File exists]
>> [2016-10-27 04:34:46.598205] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_mknod_cbk+0x5ad)
>> [0x7efebb383c9d]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.598258] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_lookup+0x8b)
>> [0x7efebb386beb]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.598301] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
>> scheduled as fast fop
>> [2016-10-27 04:34:46.598580] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_lookup_cbk+0x548)
>> [0x7efebb3864c8]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.598599] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver3_3_entrylk+0x93)
>> [0x7efebb37d633]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get
>> _frame_from_request+0x257)
>> [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68)
>> [0x7efecfadf608] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 2
>> [2016-10-27 04:34:46.598619] D [MSGID: 0]
>> [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
>> scheduled as normal fop
>> [2016-10-27 04:34:46.598754] D [client_t.c:417:gf_client_unref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(se
>> rver_entrylk_cbk+0xa6)
>> [0x7efebb3713a6]
>> -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(ser
>> ver_submit_reply+0x123)
>> [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77)
>> [0x7efecfadf787] ) 0-client_t:
>> fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-
>> testHeal4-client-1-0-0:
>> ref-count 1
>> [2016-10-27 04:34:46.598921] D [client_t.c:333:gf_client_ref]
>> (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.598938] W [MSGID: 115009] [server-resolve.c:574:server_resolve] 0-testHeal4-server: no resolution type for (null) (LOOKUP)
>> [2016-10-27 04:34:46.598951] E [MSGID: 115050] [server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 42: LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1) ==> (Invalid argument) [Invalid argument]
>> [2016-10-27 04:34:46.599007] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 1
>> [2016-10-27 04:34:46.599059] D [client_t.c:333:gf_client_ref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.599081] D [MSGID: 0] [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP scheduled as fast fop
>> [2016-10-27 04:34:46.599215] D [client_t.c:333:gf_client_ref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 3
>> [2016-10-27 04:34:46.599379] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.599412] D [client_t.c:333:gf_client_ref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 3
>> [2016-10-27 04:34:46.599505] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.599584] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 1
>> [2016-10-27 04:34:46.599783] D [client_t.c:333:gf_client_ref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.599807] D [logging.c:1954:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
>> The message "D [MSGID: 0] [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP scheduled as fast fop" repeated 2 times between [2016-10-27 04:34:46.599081] and [2016-10-27 04:34:46.599439]
>> [2016-10-27 04:34:46.599806] W [MSGID: 115009] [server-resolve.c:574:server_resolve] 0-testHeal4-server: no resolution type for (null) (LOOKUP)
>> [2016-10-27 04:34:46.599833] E [MSGID: 115050] [server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 46: LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1) ==> (Invalid argument) [Invalid argument]
>> [2016-10-27 04:34:46.599893] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 1
>> [2016-10-27 04:34:46.600183] D [client_t.c:333:gf_client_ref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server3_3_lookup+0x8b) [0x7efebb386beb] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(get_frame_from_request+0x257) [0x7efebb36cfd7] -->/lib64/libglusterfs.so.0(gf_client_ref+0x68) [0x7efecfadf608] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 2
>> [2016-10-27 04:34:46.600206] D [MSGID: 0] [io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP scheduled as fast fop
>> [2016-10-27 04:34:46.600336] D [client_t.c:417:gf_client_unref] (-->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_lookup_cbk+0x548) [0x7efebb3864c8] -->/usr/lib64/glusterfs/3.7.16/xlator/protocol/server.so(server_submit_reply+0x123) [0x7efebb368f13] -->/lib64/libglusterfs.so.0(gf_client_unref+0x77) [0x7efecfadf787] ) 0-client_t: fujitsu05.dctopenstack.org-6064-2016/10/27-04:34:44:217958-testHeal4-client-1-0-0: ref-count 1
>> [2016-10-27 04:34:59.343124] D [logging.c:1830:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present
>> [2016-10-27 04:34:59.343202] D [logging.c:1792:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
>>
>>
>> Thanks.
>>
>> Regards,
>>
>> Cwtan
>>
>> On Wed, Oct 26, 2016 at 8:09 PM, Krutika Dhananjay <kdhananj at redhat.com>
>> wrote:
>> > Do you also have the brick logs? Looks like the bricks are returning EINVAL
>> > on lookup
>> > which AFR is subsequently converting into an EIO. And sharding is merely
>> > delivering the same error code upwards.
>> >
>> > -Krutika
>> >
>> > On Wed, Oct 26, 2016 at 6:38 AM, qingwei wei <tchengwee at gmail.com>
>> wrote:
>> >>
>> >> Hi,
>> >>
>> >> Pls see the client log below.
>> >>
>> >> [2016-10-24 10:29:51.111603] I [fuse-bridge.c:5171:fuse_graph_setup]
>> >> 0-fuse: switched to graph 0
>> >> [2016-10-24 10:29:51.111662] I [MSGID: 114035]
>> >> [client-handshake.c:193:client_set_lk_version_cbk]
>> >> 0-testHeal-client-2: Server lk version = 1
>> >> [2016-10-24 10:29:51.112371] I [fuse-bridge.c:4083:fuse_init]
>> >> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
>> >> kernel 7.22
>> >> [2016-10-24 10:29:51.113563] I [MSGID: 108031]
>> >> [afr-common.c:2071:afr_local_discovery_cbk] 0-testHeal-replicate-0:
>> >> selecting local read_child testHeal-client-2
>> >> [2016-10-24 10:29:51.113604] I [MSGID: 108031]
>> >> [afr-common.c:2071:afr_local_discovery_cbk] 0-testHeal-replicate-0:
>> >> selecting local read_child testHeal-client-0
>> >> [2016-10-24 10:29:51.113630] I [MSGID: 108031]
>> >> [afr-common.c:2071:afr_local_discovery_cbk] 0-testHeal-replicate-0:
>> >> selecting local read_child testHeal-client-1
>> >> [2016-10-24 10:29:54.016802] W [MSGID: 108001]
>> >> [afr-transaction.c:789:afr_handle_quorum] 0-testHeal-replicate-0:
>> >> /.shard/9061198a-eb7e-45a2-93fb-eb396d1b2727.1: Failing MKNOD as quorum is not met
>> >> [2016-10-24 10:29:54.019330] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-0:
>> >> remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.019343] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-2:
>> >> remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.019373] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-1:
>> >> remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.019854] E [MSGID: 133010]
>> >> [shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal-shard: Lookup
>> >> on shard 1 failed. Base file gfid = 9061198a-eb7e-45a2-93fb-eb396d1b2727 [Input/output error]
>> >> [2016-10-24 10:29:54.020886] W [fuse-bridge.c:2227:fuse_readv_cbk]
>> >> 0-glusterfs-fuse: 135: READ => -1
>> >> gfid=9061198a-eb7e-45a2-93fb-eb396d1b2727 fd=0x7f70c80d12dc (Input/output error)
>> >> [2016-10-24 10:29:54.118264] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-0:
>> >> remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.118308] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-2:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.118329] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-1:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.118751] E [MSGID: 133010]
>> >> [shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal-shard: Lookup
>> >> on shard 1 failed. Base file gfid =
>> >> 9061198a-eb7e-45a2-93fb-eb396d1b2727 [Input/output error]
>> >> [2016-10-24 10:29:54.118787] W [fuse-bridge.c:2227:fuse_readv_cbk]
>> >> 0-glusterfs-fuse: 137: READ => -1
>> >> gfid=9061198a-eb7e-45a2-93fb-eb396d1b2727 fd=0x7f70c80d12dc
>> >> (Input/output error)
>> >> [2016-10-24 10:29:54.119330] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-1:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.119338] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-0:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.119368] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-2:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:29:54.119674] E [MSGID: 133010]
>> >> [shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal-shard: Lookup
>> >> on shard 1 failed. Base file gfid =
>> >> 9061198a-eb7e-45a2-93fb-eb396d1b2727 [Input/output error]
>> >> [2016-10-24 10:29:54.119715] W [fuse-bridge.c:2227:fuse_readv_cbk]
>> >> 0-glusterfs-fuse: 138: READ => -1
>> >> gfid=9061198a-eb7e-45a2-93fb-eb396d1b2727 fd=0x7f70c80d12dc
>> >> (Input/output error)
>> >> [2016-10-24 10:36:13.140414] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-0:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:36:13.140451] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-2:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:36:13.140461] W [MSGID: 114031]
>> >> [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal-client-1:
>> >> remote operation failed. Path: (null)
>> >> (00000000-0000-0000-0000-000000000000) [Invalid argument]
>> >> [2016-10-24 10:36:13.140956] E [MSGID: 133010]
>> >> [shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal-shard: Lookup
>> >> on shard 1 failed. Base file gfid =
>> >> 9061198a-eb7e-45a2-93fb-eb396d1b2727 [Input/output error]
>> >> [2016-10-24 10:36:13.140995] W [fuse-bridge.c:2227:fuse_readv_cbk]
>> >> 0-glusterfs-fuse: 145: READ => -1
>> >> gfid=9061198a-eb7e-45a2-93fb-eb396d1b2727 fd=0x7f70c80d12dc
>> >> (Input/output error)
>> >> [2016-10-25 03:22:01.220025] I [MSGID: 100011]
>> >> [glusterfsd.c:1323:reincarnate] 0-glusterfsd: Fetching the volume file
>> >> from server...
>> >> [2016-10-25 03:22:01.220938] I
>> >> [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in
>> >> volfile, continuing
>> >>
>> >> I also attached the log in this email.
>> >>
>> >> Thanks.
>> >>
>> >> Cwtan
>> >>
>> >>
>> >> On Wed, Oct 26, 2016 at 12:30 AM, Krutika Dhananjay <kdhananj at redhat.com> wrote:
>> >> > Tried it locally on my setup. Worked fine.
>> >> >
>> >> > Could you please attach the mount logs?
>> >> >
>> >> > -Krutika
>> >> >
>> >> > On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri
>> >> > <pkarampu at redhat.com> wrote:
>> >> >>
>> >> >> +Krutika
>> >> >>
>> >> >> On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei <tchengwee at gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi,
>> >> >>>
>> >> >>> I am currently running a simple Gluster setup using one server node
>> >> >>> with multiple disks. I realize that if I delete all the .shard
>> >> >>> files in one replica on the backend, my application (dd) will report
>> >> >>> an Input/output error even though I have 3 replicas.
>> >> >>>
>> >> >>> My gluster version is 3.7.16
>> >> >>>
>> >> >>> gluster volume file
>> >> >>>
>> >> >>> Volume Name: testHeal
>> >> >>> Type: Replicate
>> >> >>> Volume ID: 26d16d7f-bc4f-44a6-a18b-eab780d80851
>> >> >>> Status: Started
>> >> >>> Number of Bricks: 1 x 3 = 3
>> >> >>> Transport-type: tcp
>> >> >>> Bricks:
>> >> >>> Brick1: 192.168.123.4:/mnt/sdb_mssd/testHeal2
>> >> >>> Brick2: 192.168.123.4:/mnt/sde_mssd/testHeal2
>> >> >>> Brick3: 192.168.123.4:/mnt/sdd_mssd/testHeal2
>> >> >>> Options Reconfigured:
>> >> >>> cluster.self-heal-daemon: on
>> >> >>> features.shard-block-size: 16MB
>> >> >>> features.shard: on
>> >> >>> performance.readdir-ahead: on
>> >> >>>
>> >> >>> dd error
>> >> >>>
>> >> >>> [root at fujitsu05 .shard]# dd of=/home/test if=/mnt/fuseMount/ddTest
>> >> >>> bs=16M count=20 oflag=direct
>> >> >>> dd: error reading ‘/mnt/fuseMount/ddTest’: Input/output error
>> >> >>> 1+0 records in
>> >> >>> 1+0 records out
>> >> >>> 16777216 bytes (17 MB) copied, 0.111038 s, 151 MB/s
>> >> >>>
>> >> >>> In the .shard folder where I deleted all the shard files, I can see
>> >> >>> that one shard file is recreated:
>> >> >>>
>> >> >>> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >> >>> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >> >>> trusted.afr.testHeal-client-0=0x000000010000000100000000
>> >> >>> trusted.afr.testHeal-client-2=0x000000010000000100000000
>> >> >>> trusted.gfid=0x41b653f7daa14627b1f91f9e8554ddde
>> >> >>>
>> >> >>> However, the gfid is not the same compared to the other replicas:
>> >> >>>
>> >> >>> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >> >>> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >> >>> trusted.afr.dirty=0x000000000000000000000000
>> >> >>> trusted.afr.testHeal-client-1=0x000000000000000000000000
>> >> >>> trusted.bit-rot.version=0x0300000000000000580dde99000e5e5d
>> >> >>> trusted.gfid=0x9ee5c5eed7964a6cb9ac1a1419de5a40
>> >> >>>
>> >> >>> Is this considered a bug?
>> >> >>>
>> >> >>> Regards,
>> >> >>>
>> >> >>> Cwtan
>> >> >>> _______________________________________________
>> >> >>> Gluster-devel mailing list
>> >> >>> Gluster-devel at gluster.org
>> >> >>> http://www.gluster.org/mailman/listinfo/gluster-devel
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Pranith
>> >> >
>> >> >
>> >
>> >
>>
>
>
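[Editor's note] The gfid mismatch discussed in the quoted thread is easiest to see by converting the hex value printed by `getfattr -e hex` into the canonical UUID form used in the shard file names and client logs. A minimal sketch (the helper name `gfid_to_uuid` is ours, not a Gluster tool; the sample value is taken from the getfattr output above):

```shell
#!/bin/sh
# Convert a trusted.gfid xattr value, as printed by `getfattr -d -e hex -m. <file>`,
# into the canonical 8-4-4-4-12 UUID form for comparison across bricks.
gfid_to_uuid() {
    echo "$1" | sed -E 's/^0x//; s/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/'
}

# Value from the surviving replicas in the thread above:
gfid_to_uuid 0x9ee5c5eed7964a6cb9ac1a1419de5a40
# -> 9ee5c5ee-d796-4a6c-b9ac-1a1419de5a40
```

Running this over `trusted.gfid` on each brick's copy of the same shard path makes the mismatch reported in the thread (41b653f7-... on the wiped brick vs. 9ee5c5ee-... on the other two) immediately visible.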
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.gluster.org/pipermail/gluster-devel/attachments/20161027/be53059e/attachment-0001.html>

