<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Thu, Nov 1, 2018 at 10:08 AM Computerisms Corporation <<a href="mailto:bob@computerisms.ca">bob@computerisms.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">My troubleshooting took me to confirming that all my package versions <br>
were lined up, and I came to realize that I had gotten version 5.0 from <br>
the debian repos instead of the repo at <a href="http://download.gluster.org" rel="noreferrer" target="_blank">download.gluster.org</a>. I <br>
downgraded everything to 4.1.5-1 from <a href="http://gluster.org" rel="noreferrer" target="_blank">gluster.org</a>, rebooted, and messed <br>
around a bit, and my gluster is back online.<br></blockquote><div><br></div><div>Are you running CentOS or another distro apart from Debian? If so, why not retry going to 5.0 with the correct package base?</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
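For anyone who hits the same mix-up: pinning the upstream packages keeps a plain apt upgrade from silently pulling the Debian-archive builds back in. A minimal sketch, assuming a standard apt setup with a sources entry for download.gluster.org (the pin filename and priority are just examples; the origin must match your actual sources.list entry):<br>

```shell
# Pin gluster packages to the upstream repo so the Debian-archive builds
# cannot silently win again (sketch; adjust the origin to match your
# sources.list entry for download.gluster.org).
cat > /etc/apt/preferences.d/gluster <<'EOF'
Package: glusterfs-* libglusterfs*
Pin: origin "download.gluster.org"
Pin-Priority: 1001
EOF

# Then confirm which repository each installed package really came from:
apt-cache policy glusterfs-server glusterfs-client
```

<br>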
<br>
<br>
<br>
On 2018-10-31 10:32 a.m., Computerisms Corporation wrote:<br>
> forgot to add output of glusterd console when starting the volume:<br>
> <br>
> [2018-10-31 17:31:33.887923] D [MSGID: 0] <br>
> [glusterd-volume-ops.c:572:__glusterd_handle_cli_start_volume] <br>
> 0-management: Received start vol req for volume moogle-gluster<br>
> [2018-10-31 17:31:33.887976] D [MSGID: 0] <br>
> [glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to <br>
> acquire lock of vol moogle-gluster for <br>
> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol<br>
> [2018-10-31 17:31:33.888171] D [MSGID: 0] <br>
> [glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for vol <br>
> moogle-gluster successfully held by bb8c61eb-f321-4485-8a8d-ddc369ac2203<br>
> [2018-10-31 17:31:33.888189] D [MSGID: 0] <br>
> [glusterd-locks.c:519:glusterd_multiple_mgmt_v3_lock] 0-management: <br>
> Returning 0<br>
> [2018-10-31 17:31:33.888204] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.888213] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888229] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.888237] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888247] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.888256] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888269] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.888277] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888294] D [MSGID: 0] <br>
> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888318] D [MSGID: 0] <br>
> [glusterd-mgmt.c:223:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. <br>
> Returning 0<br>
> [2018-10-31 17:31:33.888668] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.888682] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.888719] E [MSGID: 101012] <br>
> [common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: <br>
> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid <br>
> <br>
> [2018-10-31 17:31:33.888757] I <br>
> [glusterd-utils.c:6300:glusterd_brick_start] 0-management: starting a <br>
> fresh brick process for brick /var/GlusterBrick/moogle-gluster<br>
> [2018-10-31 17:31:33.898943] D [logging.c:1998:_gf_msg_internal] <br>
> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
> About to flush least recently used log message to disk<br>
> [2018-10-31 17:31:33.888780] E [MSGID: 101012] <br>
> [common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: <br>
> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid <br>
> <br>
> [2018-10-31 17:31:33.898942] E [MSGID: 106005] <br>
> [glusterd-utils.c:6305:glusterd_brick_start] 0-management: Unable to <br>
> start brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster<br>
> [2018-10-31 17:31:33.899068] D [MSGID: 0] <br>
> [glusterd-utils.c:6315:glusterd_brick_start] 0-management: returning -107<br>
> [2018-10-31 17:31:33.899088] E [MSGID: 106122] <br>
> [glusterd-mgmt.c:308:gd_mgmt_v3_commit_fn] 0-management: Volume start <br>
> commit failed.<br>
> [2018-10-31 17:31:33.899100] D [MSGID: 0] <br>
> [glusterd-mgmt.c:392:gd_mgmt_v3_commit_fn] 0-management: OP = 5. <br>
> Returning -107<br>
> [2018-10-31 17:31:33.899114] E [MSGID: 106122] <br>
> [glusterd-mgmt.c:1557:glusterd_mgmt_v3_commit] 0-management: Commit <br>
> failed for operation Start on local node<br>
> [2018-10-31 17:31:33.899128] D [MSGID: 0] <br>
> [glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx <br>
> modification not required<br>
> [2018-10-31 17:31:33.899140] E [MSGID: 106122] <br>
> [glusterd-mgmt.c:2160:glusterd_mgmt_v3_initiate_all_phases] <br>
> 0-management: Commit Op Failed<br>
> [2018-10-31 17:31:33.899168] D [MSGID: 0] <br>
> [glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to <br>
> release lock of vol moogle-gluster for <br>
> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol<br>
> [2018-10-31 17:31:33.899195] D [MSGID: 0] <br>
> [glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for <br>
> vol moogle-gluster successfully released<br>
> [2018-10-31 17:31:33.899211] D [MSGID: 0] <br>
> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
> moogle-gluster found<br>
> [2018-10-31 17:31:33.899221] D [MSGID: 0] <br>
> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
> [2018-10-31 17:31:33.899232] D [MSGID: 0] <br>
> [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: <br>
> Returning 0<br>
> [2018-10-31 17:31:33.899314] D [MSGID: 0] <br>
> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: <br>
> Returning 0<br>
> [2018-10-31 17:31:33.900750] D [socket.c:2927:socket_event_handler] <br>
> 0-transport: EPOLLERR - disconnecting (sock:7) (non-SSL)<br>
> [2018-10-31 17:31:33.900809] E [MSGID: 101191] <br>
> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to <br>
> dispatch handler<br>
> <br>
> <br>
> On 2018-10-31 10:19 a.m., Computerisms Corporation wrote:<br>
>> Hi,<br>
>><br>
>> it occurs to me that maybe the previous email was too many words and not <br>
>> enough data, so I will try to present the issue differently.<br>
>><br>
>> gluster created (single brick volume following advice from <br>
>> <a href="https://lists.gluster.org/pipermail/gluster-users/2016-October/028821.html" rel="noreferrer" target="_blank">https://lists.gluster.org/pipermail/gluster-users/2016-October/028821.html</a>): <br>
>><br>
>><br>
>> root@sand1lian:~# gluster volume create moogle-gluster <br>
>> sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster<br>
>><br>
>> Glusterd was started from the cli with --debug; the console reports the <br>
>> following during creation of the volume:<br>
>><br>
>> [2018-10-31 17:00:51.555918] D [MSGID: 0] <br>
>> [glusterd-volume-ops.c:328:__glusterd_handle_create_volume] <br>
>> 0-management: Received create volume req<br>
>> [2018-10-31 17:00:51.555963] D [MSGID: 0] <br>
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1<br>
>> [2018-10-31 17:00:51.556072] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:209:glusterd_generate_txn_id] 0-management: <br>
>> Transaction_id = 3f5d14c9-ee08-493c-afac-d04d53c12aad<br>
>> [2018-10-31 17:00:51.556090] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:302:glusterd_set_txn_opinfo] 0-management: <br>
>> Successfully set opinfo for transaction ID : <br>
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad<br>
>> [2018-10-31 17:00:51.556099] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:309:glusterd_set_txn_opinfo] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556108] D [MSGID: 0] <br>
>> [glusterd-syncop.c:1809:gd_sync_task_begin] 0-management: Transaction <br>
>> ID : 3f5d14c9-ee08-493c-afac-d04d53c12aad<br>
>> [2018-10-31 17:00:51.556127] D [MSGID: 0] <br>
>> [glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to <br>
>> acquire lock of vol moogle-gluster for <br>
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol<br>
>> [2018-10-31 17:00:51.556293] D [MSGID: 0] <br>
>> [glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for <br>
>> vol moogle-gluster successfully held by <br>
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203<br>
>> [2018-10-31 17:00:51.556333] D [MSGID: 0] <br>
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1<br>
>> [2018-10-31 17:00:51.556368] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> [2018-10-31 17:00:51.556345] D [MSGID: 0] <br>
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1<br>
>> [2018-10-31 17:00:51.556368] D [MSGID: 0] <br>
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556608] D [MSGID: 0] <br>
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556656] D [MSGID: 0] <br>
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556669] D [MSGID: 0] <br>
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.556681] D [MSGID: 0] <br>
>> [glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.556690] D [MSGID: 0] <br>
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.556699] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-: <br>
>> Returning 0" repeated 3 times between [2018-10-31 17:00:51.556690] and <br>
>> [2018-10-31 17:00:51.556698]<br>
>> [2018-10-31 17:00:51.556699] D [MSGID: 0] <br>
>> [glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556728] D [MSGID: 0] <br>
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556738] D [MSGID: 0] <br>
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556752] D [MSGID: 0] <br>
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556764] D [MSGID: 0] <br>
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.556772] D [MSGID: 0] <br>
>> [glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.556781] D [MSGID: 0] <br>
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.556791] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-: <br>
>> Returning 0" repeated 3 times between [2018-10-31 17:00:51.556781] and <br>
>> [2018-10-31 17:00:51.556790]<br>
>> [2018-10-31 17:00:51.556791] D [MSGID: 0] <br>
>> [glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556818] D [MSGID: 0] <br>
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.556955] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.557033] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.557140] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.557154] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.557172] D [MSGID: 0] <br>
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557183] D [MSGID: 0] <br>
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557198] D [MSGID: 0] <br>
>> [glusterd-utils.c:7558:glusterd_new_brick_validate] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.557207] D [MSGID: 0] <br>
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557392] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.557468] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.557542] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.557554] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.557573] D [MSGID: 0] <br>
>> [store.c:473:gf_store_handle_destroy] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.557586] D [MSGID: 0] <br>
>> [glusterd-volume-ops.c:1467:glusterd_op_stage_create_volume] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557595] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:6014:glusterd_op_stage_validate] 0-management: OP = <br>
>> 1. Returning 0<br>
>> [2018-10-31 17:00:51.557610] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:7659:glusterd_op_bricks_select] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.557620] D [MSGID: 0] <br>
>> [glusterd-syncop.c:1751:gd_brick_op_phase] 0-management: Sent op req <br>
>> to 0 bricks<br>
>> [2018-10-31 17:00:51.557663] D [MSGID: 0] <br>
>> [glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557693] D [MSGID: 0] <br>
>> [glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557771] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.557844] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.557917] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.557931] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.557947] D [MSGID: 0] <br>
>> [glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.557957] D [MSGID: 0] <br>
>> [glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.558393] D [MSGID: 0] <br>
>> [xlator.c:218:xlator_volopt_dynload] 0-xlator: Returning 0<br>
>> [2018-10-31 17:00:51.558409] D [MSGID: 0] <br>
>> [glusterd-volgen.c:3140:_get_xlator_opt_key_from_vme] 0-glusterd: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.558495] W [MSGID: 101095] <br>
>> [xlator.c:180:xlator_volopt_dynload] 0-xlator: <br>
>> /usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/nfs/server.so: cannot <br>
>> open shared object file: No such file or directory<br>
>> [2018-10-31 17:00:51.558509] D [MSGID: 0] <br>
>> [xlator.c:218:xlator_volopt_dynload] 0-xlator: Returning -1<br>
>> [2018-10-31 17:00:51.558566] D [MSGID: 0] <br>
>> [glusterd-store.c:1107:glusterd_store_create_volume_dir] 0-management: <br>
>> Returning with 0<br>
>> [2018-10-31 17:00:51.558593] D [MSGID: 0] <br>
>> [glusterd-store.c:1125:glusterd_store_create_volume_run_dir] <br>
>> 0-management: Returning with 0<br>
>> [2018-10-31 17:00:51.899586] D [MSGID: 0] <br>
>> [store.c:432:gf_store_handle_new] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.930562] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> [2018-10-31 17:00:51.930485] D [MSGID: 0] <br>
>> [store.c:432:gf_store_handle_new] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.930561] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.932563] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value] <br>
>> 0-management: returning: 0" repeated 19 times between [2018-10-31 <br>
>> 17:00:51.930561] and [2018-10-31 17:00:51.930794]<br>
>> [2018-10-31 17:00:51.932562] D [MSGID: 0] <br>
>> [store.c:432:gf_store_handle_new] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.932688] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.932709] D [MSGID: 0] <br>
>> [glusterd-store.c:457:glusterd_store_snapd_write] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.935196] D [MSGID: 0] <br>
>> [glusterd-store.c:521:glusterd_store_perform_snapd_store] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.935226] D [MSGID: 0] <br>
>> [glusterd-store.c:585:glusterd_store_snapd_info] 0-management: <br>
>> Returning with 0<br>
>> [2018-10-31 17:00:51.935251] D [MSGID: 0] <br>
>> [glusterd-store.c:788:_storeopts] 0-management: Storing in <br>
>> volinfo:key= transport.address-family, val=inet<br>
>> [2018-10-31 17:00:51.935290] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.935314] D [MSGID: 0] <br>
>> [glusterd-store.c:788:_storeopts] 0-management: Storing in <br>
>> volinfo:key= nfs.disable, val=on<br>
>> [2018-10-31 17:00:51.935344] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.935360] D [MSGID: 0] <br>
>> [glusterd-store.c:1174:glusterd_store_volinfo_write] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.935382] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.936584] D [MSGID: 0] <br>
>> [store.c:432:gf_store_handle_new] 0-: Returning 0<br>
>> [2018-10-31 17:00:51.936685] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.936807] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value] <br>
>> 0-management: returning: 0" repeated 10 times between [2018-10-31 <br>
>> 17:00:51.936685] and [2018-10-31 17:00:51.936806]<br>
>> [2018-10-31 17:00:51.936807] D [MSGID: 0] <br>
>> [glusterd-store.c:430:glusterd_store_brickinfo_write] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.936833] D [MSGID: 0] <br>
>> [glusterd-store.c:481:glusterd_store_perform_brick_store] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.936841] D [MSGID: 0] <br>
>> [glusterd-store.c:550:glusterd_store_brickinfo] 0-management: <br>
>> Returning with 0<br>
>> [2018-10-31 17:00:51.936848] D [MSGID: 0] <br>
>> [glusterd-store.c:1394:glusterd_store_brickinfos] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.936856] D [MSGID: 0] <br>
>> [glusterd-store.c:1620:glusterd_store_perform_volume_store] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.958353] D [MSGID: 0] <br>
>> [store.c:386:gf_store_save_value] 0-management: returning: 0<br>
>> [2018-10-31 17:00:51.958494] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> The message "D [MSGID: 0] [store.c:386:gf_store_save_value] <br>
>> 0-management: returning: 0" repeated 9 times between [2018-10-31 <br>
>> 17:00:51.958353] and [2018-10-31 17:00:51.958493]<br>
>> [2018-10-31 17:00:51.958493] D [MSGID: 0] <br>
>> [glusterd-store.c:1558:glusterd_store_node_state_write] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.960449] D [MSGID: 0] <br>
>> [glusterd-store.c:1592:glusterd_store_perform_node_state_store] <br>
>> 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.960683] D [MSGID: 0] <br>
>> [glusterd-utils.c:2840:glusterd_volume_compute_cksum] 0-management: <br>
>> Returning with 0<br>
>> [2018-10-31 17:00:51.960699] D [MSGID: 0] <br>
>> [glusterd-store.c:1832:glusterd_store_volinfo] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.960797] D [MSGID: 0] <br>
>> [glusterd-utils.c:181:_brick_for_each] 0-management: Found a brick - <br>
>> sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster<br>
>> [2018-10-31 17:00:51.961200] D [MSGID: 0] <br>
>> [glusterd-volgen.c:1309:server_check_marker_off] 0-glusterd: Returning 0<br>
>> [2018-10-31 17:00:51.961529] D [MSGID: 0] <br>
>> [glusterd-volgen.c:5816:generate_brick_volfiles] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.961681] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.961756] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.961832] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.961846] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.961855] D [MSGID: 0] <br>
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management: <br>
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster <br>
>> in volume moogle-gluster<br>
>> [2018-10-31 17:00:51.961864] D [MSGID: 0] <br>
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.963126] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.963203] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.963280] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.963298] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.963308] D [MSGID: 0] <br>
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management: <br>
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster <br>
>> in volume moogle-gluster<br>
>> [2018-10-31 17:00:51.963316] D [MSGID: 0] <br>
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.964038] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] <br>
>> 0-management: Unable to find friend: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:00:51.964112] D [MSGID: 0] <br>
>> [common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52<br>
>> [2018-10-31 17:00:51.964186] D [MSGID: 0] <br>
>> [common-utils.c:3478:gf_interface_search] 0-management: 192.168.25.52 <br>
>> is local address at interface eno1<br>
>> [2018-10-31 17:00:51.964200] D [MSGID: 0] <br>
>> [glusterd-peer-utils.c:165:glusterd_hostname_to_uuid] 0-management: <br>
>> returning 0<br>
>> [2018-10-31 17:00:51.964211] D [MSGID: 0] <br>
>> [glusterd-utils.c:1668:glusterd_volume_brickinfo_get] 0-management: <br>
>> Found brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster <br>
>> in volume moogle-gluster<br>
>> [2018-10-31 17:00:51.964226] D [MSGID: 0] <br>
>> [glusterd-utils.c:1677:glusterd_volume_brickinfo_get] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.965159] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:6150:glusterd_op_commit_perform] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.965177] D [MSGID: 0] <br>
>> [glusterd-utils.c:9664:glusterd_aggr_brick_mount_dirs] 0-management: <br>
>> No brick_count present<br>
>> [2018-10-31 17:00:51.965193] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx <br>
>> modification not required<br>
>> [2018-10-31 17:00:51.965219] D [MSGID: 0] <br>
>> [glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to <br>
>> release lock of vol moogle-gluster for <br>
>> bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol<br>
>> [2018-10-31 17:00:51.966350] D [MSGID: 0] <br>
>> [glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for <br>
>> vol moogle-gluster successfully released<br>
>> [2018-10-31 17:00:51.966462] D [MSGID: 0] <br>
>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume <br>
>> moogle-gluster found<br>
>> [2018-10-31 17:00:51.966479] D [MSGID: 0] <br>
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.966509] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:248:glusterd_get_txn_opinfo] 0-management: <br>
>> Successfully got opinfo for transaction ID : <br>
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad<br>
>> [2018-10-31 17:00:51.966532] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:252:glusterd_get_txn_opinfo] 0-management: Returning 0<br>
>> [2018-10-31 17:00:51.966551] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:352:glusterd_clear_txn_opinfo] 0-management: <br>
>> Successfully cleared opinfo for transaction ID : <br>
>> 3f5d14c9-ee08-493c-afac-d04d53c12aad<br>
>> [2018-10-31 17:00:51.966668] D [logging.c:1998:_gf_msg_internal] <br>
>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. <br>
>> About to flush least recently used log message to disk<br>
>> [2018-10-31 17:00:51.966561] D [MSGID: 0] <br>
>> [glusterd-op-sm.c:356:glusterd_clear_txn_opinfo] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.966667] D [MSGID: 0] <br>
>> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: <br>
>> Returning 0<br>
>> [2018-10-31 17:00:51.968134] D [socket.c:2927:socket_event_handler] <br>
>> 0-transport: EPOLLERR - disconnecting (sock:7) (non-SSL)<br>
>> [2018-10-31 17:00:51.968183] E [MSGID: 101191] <br>
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to <br>
>> dispatch handler<br>
>> grep: /var/lib/glusterd/vols/moogle-gluster/bricks/*: No such file or <br>
>> directory<br>
>> [2018-10-31 17:00:51.975661] I [run.c:242:runner_log] <br>
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/mgmt/glusterd.so(+0xe0dbe) <br>
>> [0x7f3f248dbdbe] <br>
>> -->/usr/lib/x86_64-linux-gnu/glusterfs/5.0/xlator/mgmt/glusterd.so(+0xe07fe) <br>
>> [0x7f3f248db7fe] <br>
>> -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(runner_log+0x105) <br>
>> [0x7f3f28ac35a5] ) 0-management: Ran script: <br>
>> /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh <br>
>> --volname=moogle-gluster<br>
>> [2018-10-31 17:01:12.466614] D <br>
>> [logging.c:1871:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer <br>
>> timed out. About to flush outstanding messages if present<br>
>> [2018-10-31 17:01:12.466667] D <br>
>> [logging.c:1833:__gf_log_inject_timer_event] 0-logging-infra: Starting <br>
>> timer now. Timeout = 120, current buf size = 5<br>
>> [2018-10-31 17:03:12.492414] D <br>
>> [logging.c:1871:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer <br>
>> timed out. About to flush outstanding messages if present<br>
>> [2018-10-31 17:03:12.492447] D <br>
>> [logging.c:1833:__gf_log_inject_timer_event] 0-logging-infra: Starting <br>
>> timer now. Timeout = 120, current buf size = 5<br>
>><br>
>> Not sure about the "Unable to find friend" message:<br>
>><br>
>> root@sand1lian:~# dig +short <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> 192.168.25.52<br>
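(Editor's note on the check above: dig only queries DNS directly, while glusterd resolves names through the normal libc/NSS path, which also consults /etc/hosts. A small sketch to confirm that forward and reverse resolution agree as NSS sees them; pass the real hostname as the first argument, localhost is only the default:)<br>

```shell
# Check that forward and reverse name resolution agree for the brick host.
# getent goes through NSS (so it also honours /etc/hosts), which is closer
# to what glusterd sees than a direct DNS query with dig.
host="${1:-localhost}"                      # e.g. sand1lian.computerisms.ca
addr=$(getent hosts "$host" | awk '{print $1; exit}')
if [ -z "$addr" ]; then
    echo "no address found for $host"
    exit 1
fi
name=$(getent hosts "$addr" | awk '{print $2; exit}')
echo "$host -> $addr -> ${name:-<no reverse mapping>}"
```

<br>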
>><br>
>> start the volume:<br>
>><br>
>> root@sand1lian:~# gluster v start moogle-gluster<br>
>> volume start: moogle-gluster: failed: Commit failed on localhost. <br>
>> Please check log file for details.<br>
>><br>
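(Editor's note: the commit failure bottoms out in gf_is_service_running being unable to read the brick pidfile, so it is worth checking whether that file exists and names a live process. A sketch using the pidfile path from the glusterd log above; on a machine without this volume it simply reports the file as missing:)<br>

```shell
# Pidfile path taken verbatim from the glusterd debug log above.
pidfile=/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid

if [ -s "$pidfile" ]; then
    pid=$(cat "$pidfile")
    # kill -0 only tests whether the process exists; it sends no signal.
    if kill -0 "$pid" 2>/dev/null; then
        status="brick running (pid $pid)"
    else
        status="stale pidfile (pid $pid, no such process)"
    fi
else
    status="pidfile missing or empty"
fi
echo "$status"
```

<br>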
>> output of cli.log while issuing start command:<br>
>><br>
>> [2018-10-31 17:08:49.019079] I [cli.c:764:main] 0-cli: Started running <br>
>> gluster with version 5.0<br>
>> [2018-10-31 17:08:49.021694] W [socket.c:3365:socket_connect] <br>
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not <br>
>> supported"<br>
>> [2018-10-31 17:08:49.021924] W [socket.c:3365:socket_connect] <br>
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Operation not <br>
>> supported"<br>
>> [2018-10-31 17:08:49.101120] I [MSGID: 101190] <br>
>> [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started <br>
>> thread with index 1<br>
>> [2018-10-31 17:08:49.101231] E [MSGID: 101191] <br>
>> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to <br>
>> dispatch handler<br>
>> [2018-10-31 17:08:49.113485] I <br>
>> [cli-rpc-ops.c:1419:gf_cli_start_volume_cbk] 0-cli: Received resp to <br>
>> start volume<br>
>> [2018-10-31 17:08:49.113626] I [input.c:31:cli_batch] 0-: Exiting <br>
>> with: -1<br>
>><br>
>> and output of brick log while starting volume:<br>
>><br>
>> [2018-10-31 17:08:49.107966] I [MSGID: 100030] <br>
>> [glusterfsd.c:2691:main] 0-/usr/sbin/glusterfsd: Started running <br>
>> /usr/sbin/glusterfsd version 5.0 (args: /usr/sbin/glusterfsd -s <br>
>> <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a> --volfile-id <br>
>> moogle-gluster.sand1lian.computerisms.ca.var-GlusterBrick-moogle-gluster <br>
>> -p <br>
>> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid <br>
>> -S /var/run/gluster/f41bfcfaf40deb7d.socket --brick-name <br>
>> /var/GlusterBrick/moogle-gluster -l <br>
>> /var/log/glusterfs/bricks/var-GlusterBrick-moogle-gluster.log <br>
>> --xlator-option <br>
>> *-posix.glusterd-uuid=bb8c61eb-f321-4485-8a8d-ddc369ac2203 <br>
>> --process-name brick --brick-port 49157 --xlator-option <br>
>> moogle-gluster-server.listen-port=49157)<br>
>> [2018-10-31 17:08:49.112123] E [socket.c:3466:socket_connect] <br>
>> 0-glusterfs: connection attempt on failed, (Invalid argument)<br>
>> [2018-10-31 17:08:49.112293] I [MSGID: 101190] <br>
>> [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started <br>
>> thread with index 1<br>
>> [2018-10-31 17:08:49.112374] I <br>
>> [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: <br>
>> disconnected from remote-host: <a href="http://sand1lian.computerisms.ca" rel="noreferrer" target="_blank">sand1lian.computerisms.ca</a><br>
>> [2018-10-31 17:08:49.112399] I <br>
>> [glusterfsd-mgmt.c:2444:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted <br>
>> all volfile servers<br>
>> [2018-10-31 17:08:49.112656] W [glusterfsd.c:1481:cleanup_and_exit] <br>
>> (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xf023) [0x7f3466c12023] <br>
>> -->/usr/sbin/glusterfsd(+0x1273e) [0x557f4ea6373e] <br>
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x557f4ea5be94] ) 0-: <br>
>> received signum (1), shutting down<br>
>> [2018-10-31 17:08:49.112973] E [socket.c:3466:socket_connect] <br>
>> 0-glusterfs: connection attempt on failed, (Invalid argument)<br>
>> [2018-10-31 17:08:49.112996] W [rpc-clnt.c:1683:rpc_clnt_submit] <br>
>> 0-glusterfs: error returned while attempting to connect to <br>
>> host:(null), port:0<br>
>> [2018-10-31 17:08:49.113007] I <br>
>> [socket.c:3710:socket_submit_outgoing_msg] 0-glusterfs: not connected <br>
>> (priv->connected = 0)<br>
>> [2018-10-31 17:08:49.113016] W [rpc-clnt.c:1695:rpc_clnt_submit] <br>
>> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 <br>
>> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport <br>
>> (glusterfs)<br>
>><br>
>><br>
>> Still seeing the empty pid file and the "connection attempt on failed, <br>
>> (Invalid argument)" messages as the most likely culprits, but I have <br>
>> read everything of relevance I could find on google and have not <br>
>> discovered a solution yet...<br>
>><br>
>> On 2018-10-30 9:15 p.m., Computerisms Corporation wrote:<br>
>>> Hi,<br>
>>><br>
>>> Fortunately I am playing in a sandbox right now, but I am good and <br>
>>> stuck and hoping someone can point me in the right direction.<br>
>>><br>
>>> I have been playing for about 3 months with a gluster that currently <br>
>>> has one brick. The plan: I have a server with data; I need to migrate <br>
>>> that server onto the new gluster-capable server; then I can use the <br>
>>> original server to make a 2nd brick; and then I will be able to make <br>
>>> some room on a 3rd server for an arbiter brick. So I am building and <br>
>>> testing to be sure it all works before I try it in production.<br>
>>><br>
>>> Yesterday morning I was plugging away at figuring out how to make <br>
>>> stuff work on the new gluster server when I ran into an issue where <br>
>>> rm -rf on a directory reported it wasn't empty even though ls -al <br>
>>> showed that it was. This has happened to me before, and what I did <br>
>>> to fix it then was unmount the Glusterfs, go into the brick, delete <br>
>>> the files, and remount the Glusterfs. I did that and it appeared to <br>
>>> mount fine, but when I tried to access the gluster mount, it gave me <br>
>>> an error that there were too many levels of symlinks.<br>
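For what it's worth, the kind of check I mean is comparing the client view with the brick view; a directory that rm -rf claims is not empty can still hold entries on the brick side. The subdirectory name below is a placeholder (the brick path is the one from the logs):

```shell
# Illustrative only: list what the brick itself still holds for a
# directory that looks empty from the client mount. "stubborn-dir" is a
# hypothetical placeholder; substitute the real directory name.
BRICKDIR=/var/GlusterBrick/moogle-gluster/stubborn-dir
# show every entry, including dotfiles a casual ls can hide
find "$BRICKDIR" -mindepth 1 2>/dev/null || echo "nothing found on the brick side"
```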
>>><br>
>>> I spent my day yesterday trying pretty much everything I could find <br>
>>> on google and a few things I couldn't. In the past when stuff has <br>
>>> gone funny with gluster on this box, I have always shut everything <br>
>>> down and checked whether there was a new version of gluster, and <br>
>>> indeed version 5.0 was available. So I did the upgrade quite early in <br>
>>> the day. Sadly it didn't fix my problem, but it did give me an error <br>
>>> that led me to modify my hosts file so the host resolves over ipv6. <br>
>>> After that, the only time the gluster would mount was at reboot, and <br>
>>> always with the symlinks error; mount reported it wasn't really <br>
>>> mounted, yet the directory could still be unmounted.<br>
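To be concrete, the sort of hosts entries I mean look like the following; the addresses here are documentation placeholders (RFC 5737 / RFC 3849 ranges), not the machine's real ones:

```
# /etc/hosts -- illustrative placeholders only
127.0.0.1     localhost
::1           localhost ip6-localhost ip6-loopback
192.0.2.10    sand1lian.computerisms.ca sand1lian
2001:db8::10  sand1lian.computerisms.ca sand1lian
```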
>>><br>
>>> Having struck out completely yesterday, today I decided to start with <br>
>>> a new machine. I kept a history of the commands I had used to build <br>
>>> the gluster a few months back and pasted them all in. Found that the <br>
>>> 5.0 package does not enable the service under systemd, found that I <br>
>>> needed the ipv6 entries in the hosts file again, and also hit the <br>
>>> same problem: the glusterfs would not mount, the symlinks error <br>
>>> appeared at reboot, and the same log entries showed up.<br>
>>><br>
>>> I am still pretty new with gluster, so my diagnosis may not be that <br>
>>> good, but as best I can tell the issue is that the brick will not <br>
>>> start, even with the force option. I think the problem boils down to <br>
>>> one or both of two lines in the logs. In the glusterd.log I have a <br>
>>> line:<br>
>>><br>
>>> 0-: Unable to read pidfile: <br>
>>> /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid <br>
>>><br>
>>><br>
>>> The file exists, and I can't see anything wrong with permissions on <br>
>>> the file or the file tree leading to it, but it is a zero-byte file, <br>
>>> so I am thinking the problem is not the file itself, but that there <br>
>>> are no contents in it to read.<br>
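A quick way to confirm that reading, which is roughly what I did: check whether the pidfile has any contents and whether a brick process is running at all. The pidfile path is copied from the log message above; glusterfsd is the brick daemon:

```shell
# Does the brick pidfile have contents, and is a brick process running?
# An empty pidfile plus no glusterfsd process means the brick never started.
PIDFILE=/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
if [ -s "$PIDFILE" ]; then
    echo "pidfile contains PID $(cat "$PIDFILE")"
else
    echo "pidfile is missing or empty"
fi
pgrep -a glusterfsd || echo "no glusterfsd process running"
```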
>>><br>
>>> The other log entry is in the brick log:<br>
>>><br>
>>> 0-glusterfs: connection attempt on failed, (Invalid argument)<br>
>>><br>
>>> When I looked this up, it seems in my case there should be an attempt <br>
>>> to connect on 127.0.0.1, but given the double space I am thinking the <br>
>>> host argument is null, hence the invalid argument. It occurs to me <br>
>>> that maybe I still need some other entry in my hosts file to satisfy <br>
>>> this, but I can't think what it would be. I have created DNS entries; <br>
>>> dig works, and both the hostname and the FQDN resolve.<br>
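The resolution checks amount to something like this; getent goes through the same resolver path (hosts file plus DNS) that the daemons use, unlike dig, which queries DNS directly:

```shell
# Confirm the node's name resolves over both address families, plus the
# loopback name; hostname taken from this thread.
HOST=sand1lian.computerisms.ca
getent ahostsv4 "$HOST" || echo "no ipv4 resolution for $HOST"
getent ahostsv6 "$HOST" || echo "no ipv6 resolution for $HOST"
getent hosts localhost
```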
>>><br>
>>> I have tried to change a lot of things today, so things are probably <br>
>>> buggered up beyond hope right now; even if I do find the solution, <br>
>>> maybe it won't work. I will wipe the new machine and start over again <br>
>>> tomorrow.<br>
>>><br>
>>> I realize the post is kinda long, sorry for that, but I want to make <br>
>>> sure I include everything important. In fairness, though, I could <br>
>>> easily double the length of this post with possibly relevant things <br>
>>> (if you are interested). If you are still reading, thank you so much; <br>
>>> I would appreciate anything, even a wild guess, as to how to move <br>
>>> forward on this.<br>
>>><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div></div>