<div>Hello Strahil,</div><div> </div><div>I tried restarting glusterd.service on storage2, but it had no effect. What do you mean exactly by "set the node in maintenance"? Only the "ovirthostX" machines show up as compute hosts in oVirt. Or is that some other option in oVirt that I don't know about? The gluster volume itself is configured as a storage domain in oVirt with these options:</div><div>Storage Type: GlusterFS</div><div>Path: storage1:/hdd</div><div>VFS Type: glusterfs</div><div> </div><div>I am planning to upgrade the Gluster version soon, but I would like to fix this issue first. Thanks for your support in any case. I have listed the commands I intend to try next after the log excerpt below.</div><div> </div><div>I have attached the brick log of brick3 on storage2 below. Today it's only showing this:</div><div><div>[2022-03-27 06:14:31.791596] E [rpc-clnt.c:183:call_bail] 0-glusterfs: bailing out frame type(GlusterFS Handshake), op(GETSPEC(2)), xid = 0x1e, unique = 0, sent = 2022-03-27 05:44:25.879160, timeout = 1800 for 172.22.102.142:24007</div><div> </div><div>In the last couple of days it has thrown these errors:</div><div> </div><div><div>[2022-03-24 04:09:15.933837] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9 [No data available]</div><div>[2022-03-24 04:09:15.934007] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233775258: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.9), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:09:42.885005] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.127 [No data available]</div><div>[2022-03-24 04:09:42.885066] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233783993: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.127 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.127), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:09:49.757098] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.160 [No data available]</div><div>[2022-03-24 04:09:49.757150] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233789725: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.160 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.160), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:09:50.914836] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.172 [No data available]</div><div>[2022-03-24 04:09:50.914885] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233790786: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.172 
(be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.172), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:13.015609] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.239 [No data available]</div><div>[2022-03-24 04:10:13.015737] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233795641: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.239 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.239), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:13.067565] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.240 [No data available]</div><div>[2022-03-24 04:10:13.067670] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233796273: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.240 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.240), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:21.584760] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.267 [No data available]</div><div>[2022-03-24 04:10:21.584857] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233798461: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.267 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.267), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:26.072542] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.308 [No data available]</div><div>[2022-03-24 04:10:26.072652] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233802486: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.308 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.308), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:29.658880] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.339 [No data available]</div><div>[2022-03-24 04:10:29.659005] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233806374: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.339 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.339), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix 
[No data available]</div><div>[2022-03-24 04:10:32.483766] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.359 [No data available]</div><div>[2022-03-24 04:10:32.483837] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233808908: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.359 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.359), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:43.969402] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.462 [No data available]</div><div>[2022-03-24 04:10:43.969527] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233820554: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.462 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.462), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:10:58.786275] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.593 [No data available]</div><div>[2022-03-24 04:10:58.786346] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233832958: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.593 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.593), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:04.057738] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.638 [No data available]</div><div>[2022-03-24 04:11:04.057799] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233837390: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.638 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.638), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:04.057739] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.638 [No data available]</div><div>[2022-03-24 04:11:04.057787] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233837391: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.638 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.638), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:15.417612] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for 
/data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.731 [No data available]</div><div>[2022-03-24 04:11:15.417679] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233846649: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.731 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.731), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:16.212412] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.740 [No data available]</div><div>[2022-03-24 04:11:16.212498] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233849313: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.740 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.740), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:16.212446] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.740 [No data available]</div><div>[2022-03-24 04:11:16.212492] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233849312: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.740 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.740), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:20.845696] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.776 [No data available]</div><div>[2022-03-24 04:11:20.845823] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233852651: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.776 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.776), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:22.448954] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.788 [No data available]</div><div>[2022-03-24 04:11:22.449030] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233853935: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.788 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.788), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:28.950953] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.833 [No data available]</div><div>[2022-03-24 04:11:28.951026] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233858636: 
LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.833 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.833), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:29.223575] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.838 [No data available]</div><div>[2022-03-24 04:11:29.223649] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233858860: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.838 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.838), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:30.365150] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.844 [No data available]</div><div>[2022-03-24 04:11:30.365311] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233859855: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.844 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.844), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:31.333815] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.850 [No data available]</div><div>[2022-03-24 04:11:31.333896] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233860102: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.850 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.850), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:33.971233] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.872 [No data available]</div><div>[2022-03-24 04:11:33.971307] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233861861: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.872 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.872), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:39.857175] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.916 [No data available]</div><div>[2022-03-24 04:11:39.857249] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233867872: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.916 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.916), client: 
CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:46.661176] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.966 [No data available]</div><div>[2022-03-24 04:11:46.661265] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233871897: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.966 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.966), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:49.513500] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.993 [No data available]</div><div>[2022-03-24 04:11:49.513601] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233875442: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.993 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.993), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:56.070785] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1035 [No data available]</div><div>[2022-03-24 04:11:56.070969] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233879371: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1035 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1035), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:11:57.803408] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1045 [No data available]</div><div>[2022-03-24 04:11:57.803580] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233880200: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1045 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1045), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:12:05.622199] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1094 [No data available]</div><div>[2022-03-24 04:12:05.622280] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233886998: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1094 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1094), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:12:11.962162] E [MSGID: 113002] 
[posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1143 [No data available]</div><div>[2022-03-24 04:12:11.962259] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233889371: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1143 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1143), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:12:23.686856] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1229 [No data available]</div><div>[2022-03-24 04:12:23.686921] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233899515: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1229 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1229), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:12:29.735429] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1279 [No data available]</div><div>[2022-03-24 04:12:29.735497] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233903791: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1279 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1279), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:33:15.687116] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:15.690034] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:15.724190] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:15.726588] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.059271] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] 
-->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.061848] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.064412] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.360570] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.363270] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.863351] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:55.865955] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:56.133097] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:56.135863] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:56.270570] I [dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:33:56.273787] I 
[dict.c:560:dict_get] (-->/usr/lib64/glusterfs/6.7/xlator/features/worm.so(+0x7281) [0x7f9ad24f4281] -->/usr/lib64/glusterfs/6.7/xlator/features/locks.so(+0x1c259) [0x7f9ad271a259] -->/lib64/libglusterfs.so.0(dict_get+0x94) [0x7f9ae633b254] ) 0-dict: !this || key=trusted.glusterfs.enforce-mandatory-lock [Invalid argument]</div><div>[2022-03-24 04:36:20.701273] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/c59b2064-eb85-4342-8ef6-de68c90b370c.146 [No data available]</div><div>[2022-03-24 04:36:20.701352] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 234159795: LOOKUP /.shard/c59b2064-eb85-4342-8ef6-de68c90b370c.146 (be318638-e8a0-4c6d-977d-7a937aa84806/c59b2064-eb85-4342-8ef6-de68c90b370c.146), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 04:40:39.446559] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/fae7018d-3b65-4898-9853-8573051b2732.611 [No data available]</div><div>[2022-03-24 04:40:39.446641] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 234513040: LOOKUP /.shard/fae7018d-3b65-4898-9853-8573051b2732.611 (be318638-e8a0-4c6d-977d-7a937aa84806/fae7018d-3b65-4898-9853-8573051b2732.611), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 05:14:26.215469] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6405 [No data available]</div><div>[2022-03-24 05:14:26.215565] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 236262627: LOOKUP /.shard/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6405 (be318638-e8a0-4c6d-977d-7a937aa84806/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6405), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 05:15:52.811288] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6672 [No data available]</div><div>[2022-03-24 05:15:52.811381] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 236324934: LOOKUP /.shard/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6672 (be318638-e8a0-4c6d-977d-7a937aa84806/b55b2bd6-0bc5-4a85-9746-8c027c9f1692.6672), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]</div><div>[2022-03-24 06:36:22.842490] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-hdd-server: disconnecting connection from CTX_ID:6df76489-de5a-47aa-80f1-902ad0460523-GRAPH_ID:0-PID:36229-HOST:ovirthost4-PC_NAME:hdd-client-7-RECON_NO:-0</div><div>[2022-03-24 06:36:22.842600] W [inodelk.c:609:pl_inodelk_log_cleanup] 0-hdd-server: releasing lock on 30c139e0-4189-465e-b708-28808cf6bc0b held by {client=0x7f9abc0147d0, pid=267807 lk-owner=980fb960f47f0000}</div><div>[2022-03-24 06:36:22.842636] W [inodelk.c:609:pl_inodelk_log_cleanup] 0-hdd-server: 
releasing lock on af10ab75-2ec7-4b62-b572-3af5e287eecd held by {client=0x7f9abc0147d0, pid=272065 lk-owner=d89d0320f47f0000}</div><div>[2022-03-24 06:36:22.842671] W [inodelk.c:609:pl_inodelk_log_cleanup] 0-hdd-server: releasing lock on b147fc29-8a58-4149-a7b6-e2b0821b8f81 held by {client=0x7f9abc0147d0, pid=272067 lk-owner=b894002cf47f0000}</div><div>[2022-03-24 06:36:22.842823] I [MSGID: 115013] [server-helpers.c:320:do_fd_cleanup] 0-hdd-server: fd cleanup on /538befbf-ffa7-4a8c-8827-cee679d589f4/images/89eabcd8-75fd-4360-bbd5-cb7e18cec4ec/bd30d980-1047-436f-8b87-333f7fcb2a5d</div><div>[2022-03-24 06:36:22.842872] I [MSGID: 115013] [server-helpers.c:320:do_fd_cleanup] 0-hdd-server: fd cleanup on /538befbf-ffa7-4a8c-8827-cee679d589f4/images/615fa020-9737-4b83-a3c1-a61e32400d59/f4917758-deae-4a62-bf4d-5b9a95a7db5b</div><div>[2022-03-24 06:36:22.842924] I [MSGID: 115013] [server-helpers.c:320:do_fd_cleanup] 0-hdd-server: fd cleanup on /538befbf-ffa7-4a8c-8827-cee679d589f4/images/dfb5175f-b550-4706-926f-716dbe5e7c48/9d98e0ee-d9a8-4482-9c93-be35eb94fb11</div><div>[2022-03-24 06:42:15.807872] I [addr.c:54:compare_addr_and_update] 0-/data/glusterfs/hdd/brick3/brick: allowed = "*", received addr = "172.22.102.104"</div><div>[2022-03-24 06:42:15.807940] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-hdd-server: accepted client from CTX_ID:f7fa3cc8-bd32-4ac6-ad25-0e01555dee09-GRAPH_ID:0-PID:7192-HOST:ovirthost4-PC_NAME:hdd-client-7-RECON_NO:-0 (version: 6.10) with subvol /data/glusterfs/hdd/brick3/brick</div><div>[2022-03-24 07:12:26.914177] I [addr.c:54:compare_addr_and_update] 0-/data/glusterfs/hdd/brick3/brick: allowed = "*", received addr = "172.22.102.104"</div><div>[2022-03-24 07:12:26.914258] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-hdd-server: accepted client from CTX_ID:f7fa3cc8-bd32-4ac6-ad25-0e01555dee09-GRAPH_ID:0-PID:7192-HOST:ovirthost4-PC_NAME:hdd-client-7-RECON_NO:-1 (version: 6.10) with subvol /data/glusterfs/hdd/brick3/brick</div><div>[2022-03-27 05:44:25.879092] I [MSGID: 100011] [glusterfsd.c:1641:reincarnate] 0-glusterfsd: Fetching the volume file from server...</div></div></div><div><div> </div><div>[2022-03-27 06:14:31.791596] E [rpc-clnt.c:183:call_bail] 0-glusterfs: bailing out frame type(GlusterFS Handshake), op(GETSPEC(2)), xid = 0x1e, unique = 0, sent = 2022-03-27 05:44:25.879160, timeout = 1800 for 172.22.102.142:24007</div></div>
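<div> </div><div>Unless you advise otherwise, this is what I am planning to try next on storage2 for that one brick. It is only a sketch based on my understanding that "gluster volume start hdd force" respawns brick processes that are down without touching the running ones, and that the self-heal queue should be checked afterwards; please correct me if that assumption is wrong:</div><div> </div><div># check whether the brick process for brick3 is still running and note its PID</div><div>gluster volume status hdd</div><div># if the process is hung, kill it (PID taken from the status output above) and let gluster respawn it</div><div>kill <PID-of-brick3-process></div><div>gluster volume start hdd force</div><div># watch the heal queue drain afterwards</div><div>gluster volume heal hdd info summary</div>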
<div> </div><div>Best regards</div><div>Peter</div><div> </div><div>24.03.2022, 20:02, "Strahil Nikolov" <hunter86_bg@yahoo.com>:</div><blockquote><div>In order to troubleshoot such issues, you should start with the brick logs. Do you see any issues there?</div><div> </div>As a workaround, try to restart glusterd.service on storage2, or even better -> set the node in maintenance (with the tick to stop glusterd) and then reactivate the node.<div> </div><div>Gluster v8 and below are currently not supported, and the chance of someone root-causing this is very, very low. Upgrade oVirt to 4.4.<div> </div><div> </div><div>Best Regards,</div><div>Strahil Nikolov</div><div> <blockquote style="margin:0 0 20px 0"> <div style="color:#6d00f6;font-family:'roboto' , sans-serif"> <div>On Thu, Mar 24, 2022 at 16:54, Peter Schmidt</div><div><peterschmidt18351@yandex.com> wrote:</div> </div> <div style="border-left-color:#6d00f6;border-left-style:solid;border-left-width:1px;margin:10px 0 0 0;padding:10px 0 0 20px"> <div id="29cca5fe15e36c6f51c665f0f67f6f7dyiv2778686984"><div><div><div><div><div>Hello everyone,</div><div> </div><div>I'm running an oVirt cluster on top of a distributed-replicate gluster volume and one of the bricks cannot be mounted anymore from my oVirt hosts. This morning I also noticed a stack trace and a spike in TCP connections on one of the three gluster nodes (storage2), which I have attached at the end of this mail. Only this particular brick on storage2 seems to be causing trouble:</div><div><em>Brick storage2:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Status: Transport endpoint is not connected</em></div><div> </div><div>I don't know what's causing this or how to resolve this issue. I would appreciate it if someone could take a look at my logs and point me in the right direction. If any additional logs are required, please let me know. Thank you in advance!</div><div> </div><div>Operating system on all hosts: CentOS 7.9.2009</div><div>oVirt version: 4.3.10.4-1</div><div>Gluster versions:</div><div>- storage1: 6.10-1</div><div>- storage2: 6.7-1</div><div>- storage3: 6.7-1</div><div> </div><div>####################################</div><div># brick is not connected/mounted on the oVirt hosts</div><div> </div><div><em>[xlator.protocol.client.hdd-client-7.priv]</em></div><div><em>fd.0.remote_fd = -1</em></div><div><em>------ = ------</em></div><div><em>granted-posix-lock[0] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type = F_RDLCK, fl_start = 100, fl_end = 100, user_flock: l_type = F_RDLCK, l_start = 100, l_len = 1</em></div><div><em>granted-posix-lock[1] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type = F_RDLCK, fl_start = 101, fl_end = 101, user_flock: l_type = F_RDLCK, l_start = 101, l_len = 1</em></div><div><em>------ = ------</em></div><div><em>connected = 0</em></div><div><em>total_bytes_read = 11383136800</em></div><div><em>ping_timeout = 10</em></div><div><em>total_bytes_written = 16699851552</em></div><div><em>ping_msgs_sent = 1</em></div><div><em>msgs_sent = 2</em></div><div> </div><div>####################################</div><div># mount log from one of the oVirt hosts</div><div># the IP 172.22.102.142 corresponds to my gluster node "storage2"</div><div># the port 49154 corresponds to the brick storage2:/data/glusterfs/hdd/brick3/brick      </div><div> </div><div><em>[2022-03-24 10:59:28.138178] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-hdd-client-7: socket disconnected</em></div><div><em>[2022-03-24 10:59:38.142698] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-hdd-client-7: changing port to 49154 (from 0)</em></div><div><em>The message "I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from hdd-client-7. 
Client process will keep trying to connect to glusterd until brick's port is available" repeated 4 times between [2022-03-24 10:58:04.114741] and [2022-03-24 10:59:28.137380]</em></div><div><em>The message "W [MSGID: 114032] [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received RPC status error [Transport endpoint is not connected]" repeated 4 times between [2022-03-24 10:58:04.115169] and [2022-03-24 10:59:28.138052]</em></div><div><em>[2022-03-24 10:59:49.143217] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-hdd-client-7: server 172.22.102.142:49154 has not responded in the last 10 seconds, disconnecting.</em></div><div><em>[2022-03-24 10:59:49.143838] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep trying to connect to glusterd until brick's port is available</em></div><div><em>[2022-03-24 10:59:49.144540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ))))) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2022-03-24 10:59:38.145208 (xid=0x861)</em></div><div><em>[2022-03-24 10:59:49.144557] W [MSGID: 114032] [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received RPC status error [Transport endpoint is not connected]</em></div><div><em>[2022-03-24 10:59:49.144653] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ))))) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2022-03-24 10:59:38.145218 (xid=0x862)</em></div><div><em>[2022-03-24 10:59:49.144665] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-hdd-client-7: socket disconnected</em></div><div> </div><div>####################################</div><div># netcat/telnet to the brick's port of storage2 are working</div><div> </div><div><em>[root@storage1 ~]#  netcat -z -v 172.22.102.142 49154</em></div><div><em>Connection to 172.22.102.142 49154 port [tcp/*] succeeded!</em></div><div> </div><div><em>[root@storage3 ~]# netcat -z -v 172.22.102.142 49154</em></div>
<div><em>Connection to 172.22.102.142 49154 port [tcp/*] succeeded!</em></div><div> </div><div><em>[root@ovirthost1 /var/log/glusterfs]#  netcat -z -v 172.22.102.142 49154</em></div><div><em>Connection to 172.22.102.142 49154 port [tcp/*] succeeded!</em></div><div> </div><div>####################################</div><div># gluster peer status - all gluster peers are connected</div><div><em>[root@storage3 ~]#  gluster peer status</em></div><div><em>Number of Peers: 2</em></div><div> </div><div><em>Hostname: storage1</em></div><div><em>Uuid: 055e79c2-b1ff-4a82-9296-205d6877904e</em></div><div><em>State: Peer in Cluster (Connected)</em></div><div> </div><div><em>Hostname: storage2</em></div><div><em>Uuid: d7adcb92-2e71-41a9-80d4-13180ee673cf</em></div><div><em>State: Peer in Cluster (Connected)</em></div><div> </div><div>####################################</div><div># Configuration of the volume</div><div><em>Volume Name: hdd</em></div><div><em>Type: Distributed-Replicate</em></div><div><em>Volume ID: 1b47c2f8-5024-4b85-aa7f-a3f767bb076c</em></div><div><em>Status: Started</em></div><div><em>Snapshot Count: 0</em></div><div><em>Number of Bricks: 4 x 3 = 12</em></div><div><em>Transport-type: tcp</em></div><div><em>Bricks:</em></div><div><em>Brick1: storage1:/data/glusterfs/hdd/brick1/brick</em></div><div><em>Brick2: storage2:/data/glusterfs/hdd/brick1/brick</em></div><div><em>Brick3: storage3:/data/glusterfs/hdd/brick1/brick</em></div><div><em>Brick4: storage1:/data/glusterfs/hdd/brick2/brick</em></div><div><em>Brick5: storage2:/data/glusterfs/hdd/brick2/brick</em></div><div><em>Brick6: storage3:/data/glusterfs/hdd/brick2/brick</em></div><div><em>Brick7: storage1:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Brick8: storage2:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Brick9: storage3:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Brick10: storage1:/data/glusterfs/hdd/brick4/brick</em></div><div><em>Brick11: storage2:/data/glusterfs/hdd/brick4/brick</em></div><div><em>Brick12: storage3:/data/glusterfs/hdd/brick4/brick</em></div><div><em>Options Reconfigured:</em></div><div><em>storage.owner-gid: 36</em></div><div><em>storage.owner-uid: 36</em></div><div><em>server.event-threads: 4</em></div><div><em>client.event-threads: 4</em></div><div><em>cluster.choose-local: off</em></div><div><em>user.cifs: off</em></div><div><em>features.shard: on</em></div><div><em>cluster.shd-wait-qlength: 10000</em></div><div><em>cluster.shd-max-threads: 8</em></div><div><em>cluster.locking-scheme: granular</em></div><div><em>cluster.data-self-heal-algorithm: full</em></div><div><em>cluster.server-quorum-type: server</em></div><div><em>cluster.eager-lock: enable</em></div><div><em>network.remote-dio: enable</em></div><div><em>performance.low-prio-threads: 32</em></div><div><em>performance.io-cache: off</em></div><div><em>performance.read-ahead: off</em></div><div><em>performance.quick-read: off</em></div><div><em>auth.allow: *</em></div><div><em>network.ping-timeout: 10</em></div><div><em>cluster.quorum-type: 
auto</em></div><div><em>transport.address-family: inet</em></div><div><em>nfs.disable: on</em></div><div><em>performance.client-io-threads: on</em></div><div> </div><div>####################################</div><div># gluster volume status. The brick running on port 49154 is supposedly online</div><div> </div><div><em>Status of volume: hdd</em></div><div><em>Gluster process                             TCP Port  RDMA Port  Online  Pid</em></div><div><em>------------------------------------------------------------------------------</em></div><div><em>Brick storage1:/data/gluste</em></div><div><em>rfs/hdd/brick1/brick                        49158     0          Y       9142</em></div><div><em>Brick storage2:/data/gluste</em></div><div><em>rfs/hdd/brick1/brick                        49152     0          Y       115896</em></div><div><em>Brick storage3:/data/gluste</em></div><div><em>rfs/hdd/brick1/brick                        49158     0          Y       131775</em></div><div><em>Brick storage1:/data/gluste</em></div><div><em>rfs/hdd/brick2/brick                        49159     0          Y       9151</em></div><div><em>Brick storage2:/data/gluste</em></div><div><em>rfs/hdd/brick2/brick                        49153     0          Y       115904</em></div><div><em>Brick storage3:/data/gluste</em></div><div><em>rfs/hdd/brick2/brick                        49159     0          Y       131783</em></div><div><em>Brick storage1:/data/gluste</em></div><div><em>rfs/hdd/brick3/brick                        49160     0          Y       9163</em></div><div><em>Brick storage2:/data/gluste</em></div><div><em>rfs/hdd/brick3/brick                        49154     0          Y       115913</em></div><div><em>Brick storage3:/data/gluste</em></div><div><em>rfs/hdd/brick3/brick                        49160     0          Y       131792</em></div><div><em>Brick storage1:/data/gluste</em></div><div><em>rfs/hdd/brick4/brick                        49161     0          Y       9170</em></div><div><em>Brick storage2:/data/gluste</em></div><div><em>rfs/hdd/brick4/brick                        49155     0          Y       115923</em></div><div><em>Brick storage3:/data/gluste</em></div><div><em>rfs/hdd/brick4/brick                        49161     0          Y       131800</em></div><div><em>Self-heal Daemon on localhost               N/A       N/A        Y       170468</em></div><div><em>Self-heal Daemon on storage3               N/A       N/A        Y       132263</em></div><div><em>Self-heal Daemon on storage1               N/A       N/A        Y       9512</em></div><div> </div><div><em>Task Status of Volume hdd</em></div><div><em>------------------------------------------------------------------------------</em></div><div><em>There are no active volume tasks</em></div><div> </div><div>####################################</div><div># gluster volume heal hdd info split-brain. All bricks are connected and showing no entries (0), except for brick3 on storage2</div><div><em>Brick storage2:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Status: Transport endpoint is not connected</em></div><div><em>Number of entries in split-brain: -</em></div><div> </div><div>####################################</div><div># gluster volume heal hdd info. Only brick3 seems to be affected and it has lots of entries. 
brick3 on storage2 is not connected</div><div> </div><div><em>Brick storage1:/data/glusterfs/hdd/brick3/brick</em></div><div><em>/538befbf-ffa7-4a8c-8827-cee679d589f4/images/615fa020-9737-4b83-a3c1-a61e32400d59/f4917758-deae-4a62-bf4d-5b9a95a7db5b</em></div><div><em><gfid:f3d0b19a-2544-48c5-90b7-addd561113bc></em></div><div><em>/.shard/753a8a81-bd06-4c8c-9515-d54123f6fe4d.1</em></div><div><em>/.shard/c7f5f88f-dc85-4645-9178-c7df8e46a99d.83</em></div><div><em>/538befbf-ffa7-4a8c-8827-cee679d589f4/images/bc4362e6-cd43-4ab8-b8fa-0ea72405b7da/ea9c0e7c-d2c7-43c8-b19f-7a3076cc6743</em></div><div><em>/.shard/dc46e963-2b68-4802-9537-42f25ea97ae2.10872</em></div><div><em>/.shard/dc46e963-2b68-4802-9537-42f25ea97ae2.1901</em></div><div><em>/538befbf-ffa7-4a8c-8827-cee679d589f4/images/e48e80fb-d42f-47a4-9a56-07fd7ad868b3/31fd839f-85bf-4c42-ac0e-7055d903df40</em></div><div><em>/.shard/82700f9b-c7e0-4568-a565-64c9a770449f.223</em></div><div><em>/.shard/82700f9b-c7e0-4568-a565-64c9a770449f.243</em></div><div><em>/.shard/dc46e963-2b68-4802-9537-42f25ea97ae2.10696</em></div><div><em>/.shard/dc46e963-2b68-4802-9537-42f25ea97ae2.10902</em></div><div><em>..</em></div><div><em>Status: Connected</em></div><div><em>Number of entries: 664</em></div><div> </div><div><em>Brick storage2:/data/glusterfs/hdd/brick3/brick</em></div><div><em>Status: Transport endpoint is not connected</em></div><div><em>Number of entries: -</em></div><div> </div><div><em>Brick storage3:/data/glusterfs/hdd/brick3/brick</em></div><div><em>/538befbf-ffa7-4a8c-8827-cee679d589f4/images/615fa020-9737-4b83-a3c1-a61e32400d59/f4917758-deae-4a62-bf4d-5b9a95a7db5b</em></div><div><em><gfid:f3d0b19a-2544-48c5-90b7-addd561113bc></em></div><div><em>/.shard/753a8a81-bd06-4c8c-9515-d54123f6fe4d.1</em></div><div><em>..</em></div><div><em>Status: Connected</em></div><div><em>Number of entries: 664</em></div><div> </div><div>####################################</div><div># /data/glusterfs/hdd/brick3 on storage2 is running inside of a software RAID</div><div> </div><div><em>md6 : active raid6 sdac1[6] sdz1[3] sdx1[1] sdad1[7] sdaa1[4] sdy1[2] sdw1[0] sdab1[5] sdae1[8]</em></div><div><em>      68364119040 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9] [UUUUUUUUU]</em></div><div><em>      [============>........]  
check = 64.4% (6290736128/9766302720) finish=3220.5min speed=17985K/sec</em></div><div><em>      bitmap: 10/73 pages [40KB], 65536KB chunk</em></div><div> </div><div>####################################</div><div># glfsheal-hdd.log on storage2</div><div> </div><div><em>[2022-03-24 10:15:33.238884] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-hdd-client-10: Connected to hdd-client-10, attached to remote volume '/data/glusterfs/hdd/brick4/brick'.</em></div><div><em>[2022-03-24 10:15:33.238931] I [MSGID: 108002] [afr-common.c:5607:afr_notify] 0-hdd-replicate-3: Client-quorum is met</em></div><div><em>[2022-03-24 10:15:33.241616] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-hdd-client-11: Connected to hdd-client-11, attached to remote volume '/data/glusterfs/hdd/brick4/brick'.</em></div><div><em>[2022-03-24 10:15:44.078651] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-hdd-client-7: server 172.22.102.142:49154 has not responded in the last 10 seconds, disconnecting.</em></div><div><em>[2022-03-24 10:15:44.078891] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep trying to connect to glusterd until brick's port is available</em></div><div><em>[2022-03-24 10:15:44.079954] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fc6c0cadadb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7fc6c019f7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7fc6c019f8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7fc6c01a0987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7fc6c01a1518] ))))) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2022-03-24 10:15:33.209640 (xid=0x5)</em></div><div><em>[2022-03-24 10:15:44.080008] W [MSGID: 114032] [client-handshake.c:1547:client_dump_version_cbk] 0-hdd-client-7: received RPC status error [Transport endpoint is not connected]</em></div><div><em>[2022-03-24 10:15:44.080526] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fc6c0cadadb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7fc6c019f7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7fc6c019f8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7fc6c01a0987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7fc6c01a1518] ))))) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2022-03-24 10:15:33.209655 (xid=0x6)</em></div><div><em>[2022-03-24 10:15:44.080574] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-hdd-client-7: socket disconnected</em></div><div> </div><div>####################################</div><div># stack trace on storage2 that happened this morning</div><div> </div><div><em>Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr000:115974 blocked for more than 120 seconds.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this 
message.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: glfs_iotwr000   D ffff9b91b8951070     0 115974      1 0x00000080</em></div><div><em>Mar 24 06:24:06 storage2 kernel: Call Trace:</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffc05056e1>] _xfs_log_force_lsn+0x2d1/0x310 [xfs]</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a</em></div><div><em>Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr001:121353 blocked for more than 120 seconds.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: glfs_iotwr001   D ffff9b9b7d4dac80     0 121353      1 0x00000080</em></div><div><em>Mar 24 06:24:06 storage2 kernel: Call Trace:</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffc05056e1>] _xfs_log_force_lsn+0x2d1/0x310 [xfs]</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a</em></div><div><em>Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr002:121354 blocked for more than 120 seconds.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.</em></div><div><em>Mar 24 06:24:06 storage2 kernel: glfs_iotwr002   D ffff9b9b7d75ac80     0 121354      1 0x00000080</em></div><div><em>Mar 24 06:24:06 storage2 kernel: Call Trace:</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffc05056e1>] _xfs_log_force_lsn+0x2d1/0x310 [xfs]</em></div><div><em>Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? 
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr003:121355 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr003   D ffff9b9b7d51ac80     0 121355      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7e531>] schedule_timeout+0x221/0x2d0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d77a9>] ? ttwu_do_wakeup+0x19/0xe0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d78df>] ? ttwu_do_activate+0x6f/0x80
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db210>] ? try_to_wake_up+0x190/0x390
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80ddd>] wait_for_completion+0xfd/0x140
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14be9aa>] flush_work+0x10a/0x1b0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14bb6c0>] ? move_linked_works+0x90/0x90
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05070ba>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffc0505484>] _xfs_log_force_lsn+0x74/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa15bcb1f>] ? filemap_fdatawait_range+0x1f/0x30
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7fd22>] ? down_read+0x12/0x40
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr004:121356 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr004   D ffff9b9b7d75ac80     0 121356      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7e531>] schedule_timeout+0x221/0x2d0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d77a9>] ? ttwu_do_wakeup+0x19/0xe0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d78df>] ? ttwu_do_activate+0x6f/0x80
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db210>] ? try_to_wake_up+0x190/0x390
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80ddd>] wait_for_completion+0xfd/0x140
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14be9aa>] flush_work+0x10a/0x1b0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14bb6c0>] ? move_linked_works+0x90/0x90
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05070ba>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffc0505484>] _xfs_log_force_lsn+0x74/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa15bcb1f>] ? filemap_fdatawait_range+0x1f/0x30
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7fd22>] ? down_read+0x12/0x40
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr005:153774 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr005   D ffff9b9b7d61ac80     0 153774      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7e531>] schedule_timeout+0x221/0x2d0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d77a9>] ? ttwu_do_wakeup+0x19/0xe0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d78df>] ? ttwu_do_activate+0x6f/0x80
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db210>] ? try_to_wake_up+0x190/0x390
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80ddd>] wait_for_completion+0xfd/0x140
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14be9aa>] flush_work+0x10a/0x1b0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14bb6c0>] ? move_linked_works+0x90/0x90
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05070ba>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167335b>] ? getxattr+0x11b/0x180
Mar 24 06:24:06 storage2 kernel: [<ffffffffc0505484>] _xfs_log_force_lsn+0x74/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7fd22>] ? down_read+0x12/0x40
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr006:153775 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr006   D ffff9b9b7d49ac80     0 153775      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7e531>] schedule_timeout+0x221/0x2d0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d77a9>] ? ttwu_do_wakeup+0x19/0xe0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d78df>] ? ttwu_do_activate+0x6f/0x80
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db210>] ? try_to_wake_up+0x190/0x390
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80ddd>] wait_for_completion+0xfd/0x140
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14be9aa>] flush_work+0x10a/0x1b0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14bb6c0>] ? move_linked_works+0x90/0x90
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05070ba>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167335b>] ? getxattr+0x11b/0x180
Mar 24 06:24:06 storage2 kernel: [<ffffffffc0505484>] _xfs_log_force_lsn+0x74/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7fd22>] ? down_read+0x12/0x40
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr007:153776 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr007   D ffff9b9958c962a0     0 153776      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7e531>] schedule_timeout+0x221/0x2d0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d7782>] ? check_preempt_curr+0x92/0xa0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14d77a9>] ? ttwu_do_wakeup+0x19/0xe0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db210>] ? try_to_wake_up+0x190/0x390
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80ddd>] wait_for_completion+0xfd/0x140
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14be9aa>] flush_work+0x10a/0x1b0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14bb6c0>] ? move_linked_works+0x90/0x90
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05070ba>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167335b>] ? getxattr+0x11b/0x180
Mar 24 06:24:06 storage2 kernel: [<ffffffffc0505484>] _xfs_log_force_lsn+0x74/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b7fd22>] ? down_read+0x12/0x40
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr008:153777 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr008   D ffff9b9b7d61ac80     0 153777      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05056e1>] _xfs_log_force_lsn+0x2d1/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a
Mar 24 06:24:06 storage2 kernel: INFO: task glfs_iotwr009:153778 blocked for more than 120 seconds.
Mar 24 06:24:06 storage2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 24 06:24:06 storage2 kernel: glfs_iotwr009   D ffff9b9958c920e0     0 153778      1 0x00000080
Mar 24 06:24:06 storage2 kernel: Call Trace:
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b80a29>] schedule+0x29/0x70
Mar 24 06:24:06 storage2 kernel: [<ffffffffc05056e1>] _xfs_log_force_lsn+0x2d1/0x310 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa14db4d0>] ? wake_up_state+0x20/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffc04e5a3d>] xfs_file_fsync+0xfd/0x1c0 [xfs]
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167fbf7>] do_fsync+0x67/0xb0
Mar 24 06:24:06 storage2 kernel: [<ffffffffa167ff03>] SyS_fdatasync+0x13/0x20
Mar 24 06:24:06 storage2 kernel: [<ffffffffa1b8dede>] system_call_fastpath+0x25/0x2a