<div dir="ltr">Great that you have managed to figure out the issue.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 25, 2019 at 4:47 PM TomK <<a href="mailto:tomkcpr@mdevsys.com">tomkcpr@mdevsys.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
This issue looked nearly identical to:<br>
<br>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1702316" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1702316</a><br>
<br>
so I added this line to /etc/glusterfs/glusterd.vol (as the volume-set attempt below shows, it is not accepted as a volume option):<br>
<br>
option transport.socket.listen-port 24007<br>
<br>
then restarted glusterd, and it worked:<br>
<br>
[root@mdskvm-p01 glusterfs]# systemctl stop glusterd<br>
[root@mdskvm-p01 glusterfs]# history|grep server-quorum<br>
3149 gluster volume set mdsgv01 cluster.server-quorum-type none<br>
3186 history|grep server-quorum<br>
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01 transport.socket.listen-port 24007<br>
Connection failed. Please check if gluster daemon is operational.<br>
[root@mdskvm-p01 glusterfs]# systemctl start glusterd<br>
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01 transport.socket.listen-port 24007<br>
volume set: failed: option : transport.socket.listen-port does not exist<br>
Did you mean transport.keepalive or ...listen-backlog?<br>
[root@mdskvm-p01 glusterfs]#<br>
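<br>
(If it helps anyone else: "gluster volume set help" lists the options the CLI recognizes, which should confirm that transport.socket.listen-port is not among them, hence the failure above and the need to put it into glusterd.vol instead.)<br>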
[root@mdskvm-p01 glusterfs]# netstat -pnltu<br>
Active Internet connections (only servers)<br>
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name<br>
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      4562/libvirtd<br>
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      24193/glusterd<br>
tcp        0      0 0.0.0.0:2223            0.0.0.0:*               LISTEN      4277/sshd<br>
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd<br>
tcp        0      0 0.0.0.0:51760           0.0.0.0:*               LISTEN      4479/rpc.statd<br>
tcp        0      0 0.0.0.0:54322           0.0.0.0:*               LISTEN      13229/python<br>
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4279/sshd<br>
tcp6       0      0 :::54811                :::*                    LISTEN      4479/rpc.statd<br>
tcp6       0      0 :::16514                :::*                    LISTEN      4562/libvirtd<br>
tcp6       0      0 :::2223                 :::*                    LISTEN      4277/sshd<br>
tcp6       0      0 :::111                  :::*                    LISTEN      3357/rpcbind<br>
tcp6       0      0 :::54321                :::*                    LISTEN      13225/python2<br>
tcp6       0      0 :::22                   :::*                    LISTEN      4279/sshd<br>
udp        0      0 0.0.0.0:24009           0.0.0.0:*                           4281/python2<br>
udp        0      0 0.0.0.0:38873           0.0.0.0:*                           4479/rpc.statd<br>
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd<br>
udp        0      0 127.0.0.1:323           0.0.0.0:*                           3361/chronyd<br>
udp        0      0 127.0.0.1:839           0.0.0.0:*                           4479/rpc.statd<br>
udp        0      0 0.0.0.0:935             0.0.0.0:*                           3357/rpcbind<br>
udp6       0      0 :::46947                :::*                                4479/rpc.statd<br>
udp6       0      0 :::111                  :::*                                3357/rpcbind<br>
udp6       0      0 ::1:323                 :::*                                3361/chronyd<br>
udp6       0      0 :::935                  :::*                                3357/rpcbind<br>
[root@mdskvm-p01 glusterfs]# gluster volume start mdsgv01<br>
volume start: mdsgv01: success<br>
[root@mdskvm-p01 glusterfs]# gluster volume info<br>
<br>
Volume Name: mdsgv01<br>
Type: Replicate<br>
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02<br>
Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01<br>
Options Reconfigured:<br>
storage.owner-gid: 36<br>
cluster.data-self-heal-algorithm: full<br>
performance.low-prio-threads: 32<br>
features.shard-block-size: 512MB<br>
features.shard: on<br>
storage.owner-uid: 36<br>
cluster.server-quorum-type: none<br>
cluster.quorum-type: none<br>
server.event-threads: 8<br>
client.event-threads: 8<br>
performance.write-behind-window-size: 8MB<br>
performance.io-thread-count: 16<br>
performance.cache-size: 1GB<br>
nfs.trusted-sync: on<br>
server.allow-insecure: on<br>
performance.readdir-ahead: on<br>
diagnostics.brick-log-level: DEBUG<br>
diagnostics.brick-sys-log-level: INFO<br>
diagnostics.client-log-level: DEBUG<br>
[root@mdskvm-p01 glusterfs]# gluster volume status<br>
Status of volume: mdsgv01<br>
Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g<br>
lusterv01                                   49152     0          Y       24487<br>
NFS Server on localhost                     N/A       N/A        N       N/A<br>
Self-heal Daemon on localhost               N/A       N/A        Y       24515<br>
<br>
Task Status of Volume mdsgv01<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol<br>
volume management<br>
type mgmt/glusterd<br>
option working-directory /var/lib/glusterd<br>
option transport-type socket,rdma<br>
option transport.socket.keepalive-time 10<br>
option transport.socket.keepalive-interval 2<br>
option transport.socket.read-fail-log off<br>
option ping-timeout 0<br>
option event-threads 1<br>
option rpc-auth-allow-insecure on<br>
option cluster.server-quorum-type none<br>
option cluster.quorum-type none<br>
# option cluster.server-quorum-type server<br>
# option cluster.quorum-type auto<br>
option server.event-threads 8<br>
option client.event-threads 8<br>
option performance.write-behind-window-size 8MB<br>
option performance.io-thread-count 16<br>
option performance.cache-size 1GB<br>
option nfs.trusted-sync on<br>
option storage.owner-uid 36<br>
option storage.owner-uid 36<br>
option cluster.data-self-heal-algorithm full<br>
option performance.low-prio-threads 32<br>
option features.shard-block-size 512MB<br>
option features.shard on<br>
option transport.socket.listen-port 24007<br>
end-volume<br>
[root@mdskvm-p01 glusterfs]#<br>
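<br>
So for anyone hitting the same thing after a 3.12 to 6.x upgrade, the workaround boils down to this (a minimal sketch, assuming the stock /etc/glusterfs/glusterd.vol location used by the CentOS packages):<br>
<br>
# add inside the "volume management" stanza of /etc/glusterfs/glusterd.vol<br>
option transport.socket.listen-port 24007<br>
<br>
# then restart the management daemon and start the volume again<br>
systemctl restart glusterd<br>
gluster volume start mdsgv01<br>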
<br>
<br>
Cheers,<br>
TK<br>
<br>
<br>
On 9/25/2019 7:05 AM, TomK wrote:<br>
> Mind you, I just upgraded from 3.12 to 6.X.<br>
> <br>
> On 9/25/2019 6:56 AM, TomK wrote:<br>
>><br>
>><br>
>> Brick log for specific gluster start command attempt (full log attached):<br>
>><br>
>> [2019-09-25 10:53:37.847426] I [MSGID: 100030] <br>
>> [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running <br>
>> /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s <br>
>> <a href="http://mdskvm-p01.nix.mds.xyz" rel="noreferrer" target="_blank">mdskvm-p01.nix.mds.xyz</a> --volfile-id <br>
>> mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p <br>
>> /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid <br>
>> -S /var/run/gluster/defbdb699838d53b.socket --brick-name <br>
>> /mnt/p01-d01/glusterv01 -l <br>
>> /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option <br>
>> *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 <br>
>> --process-name brick --brick-port 49155 --xlator-option <br>
>> mdsgv01-server.listen-port=49155)<br>
>> [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] <br>
>> 0-glusterfs: Pid of current running process is 23133<br>
>> [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] <br>
>> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9<br>
>> [2019-09-25 10:53:37.865940] I [MSGID: 101190] <br>
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started <br>
>> thread with index 0<br>
>> [2019-09-25 10:53:37.866054] I <br>
>> [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: <br>
>> disconnected from remote-host: <a href="http://mdskvm-p01.nix.mds.xyz" rel="noreferrer" target="_blank">mdskvm-p01.nix.mds.xyz</a><br>
>> [2019-09-25 10:53:37.866043] I [MSGID: 101190] <br>
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started <br>
>> thread with index 1<br>
>> [2019-09-25 10:53:37.866083] I <br>
>> [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted <br>
>> all volfile servers<br>
>> [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] <br>
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] <br>
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] <br>
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: <br>
>> received signum (1), shutting down<br>
>> [2019-09-25 10:53:37.872399] I <br>
>> [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected <br>
>> (priv->connected = 0)<br>
>> [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] <br>
>> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 <br>
>> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport <br>
>> (glusterfs)<br>
>> [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] <br>
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] <br>
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] <br>
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: <br>
>> received signum (1), shutting down<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> On 9/25/2019 6:48 AM, TomK wrote:<br>
>>> Attached.<br>
>>><br>
>>><br>
>>> On 9/25/2019 5:08 AM, Sanju Rakonde wrote:<br>
>>>> Hi, the errors below indicate that the brick process failed to start. <br>
>>>> Please attach the brick log.<br>
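>>>> (Brick logs live under /var/log/glusterfs/bricks/, named after the brick path; here that should be mnt-p01-d01-glusterv01.log.)<br>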
>>>><br>
>>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a<br>
>>>> fresh brick process for brick /mnt/p01-d01/glusterv01<br>
>>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]<br>
>>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to<br>
>>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01<br>
>>>> [2019-09-25 05:17:26.722960] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning <br>
>>>> -107<br>
>>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]<br>
>>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start<br>
>>>> commit failed.<br>
>>>><br>
>>>> On Wed, Sep 25, 2019 at 11:00 AM TomK <<a href="mailto:tomkcpr@mdevsys.com" target="_blank">tomkcpr@mdevsys.com</a> <br>
>>>> <mailto:<a href="mailto:tomkcpr@mdevsys.com" target="_blank">tomkcpr@mdevsys.com</a>>> wrote:<br>
>>>><br>
>>>> Hey All,<br>
>>>><br>
>>>> I'm getting the error below when trying to start a 2-node Gluster<br>
>>>> cluster.<br>
>>>><br>
>>>> I had quorum enabled when I was on version 3.12. However, this<br>
>>>> version needed quorum disabled, so I disabled it, but now I see the<br>
>>>> subject error.<br>
>>>><br>
>>>> Any ideas what I could try next?<br>
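>>>><br>
>>>> (For reference, quorum was disabled with "gluster volume set mdsgv01 cluster.server-quorum-type none" and "gluster volume set mdsgv01 cluster.quorum-type none", as reflected in the volume info below.)<br>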
>>>><br>
>>>> -- Thx,<br>
>>>> TK.<br>
>>>><br>
>>>><br>
>>>> [2019-09-25 05:17:26.615203] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: <br>
>>>> Returning 0<br>
>>>> [2019-09-25 05:17:26.615555] D [MSGID: 0]<br>
>>>> [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: <br>
>>>> OP = 5.<br>
>>>> Returning 0<br>
>>>> [2019-09-25 05:17:26.616271] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume<br>
>>>> mdsgv01 found<br>
>>>> [2019-09-25 05:17:26.616305] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: <br>
>>>> Returning 0<br>
>>>> [2019-09-25 05:17:26.616327] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: <br>
>>>> returning 0<br>
>>>> [2019-09-25 05:17:26.617056] I<br>
>>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: <br>
>>>> starting a<br>
>>>> fresh brick process for brick /mnt/p01-d01/glusterv01<br>
>>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]<br>
>>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: <br>
>>>> Unable to<br>
>>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01<br>
>>>> [2019-09-25 05:17:26.722960] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: <br>
>>>> returning<br>
>>>> -107<br>
>>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]<br>
>>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume <br>
>>>> start<br>
>>>> commit failed.<br>
>>>> [2019-09-25 05:17:26.723027] D [MSGID: 0]<br>
>>>> [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5.<br>
>>>> Returning -107<br>
>>>> [2019-09-25 05:17:26.723045] E [MSGID: 106122]<br>
>>>> [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit<br>
>>>> failed for operation Start on local node<br>
>>>> [2019-09-25 05:17:26.723073] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: <br>
>>>> op_ctx<br>
>>>> modification not required<br>
>>>> [2019-09-25 05:17:26.723141] E [MSGID: 106122]<br>
>>>> [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases]<br>
>>>> 0-management: Commit Op Failed<br>
>>>> [2019-09-25 05:17:26.723204] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: <br>
>>>> Trying to<br>
>>>> release lock of vol mdsgv01 for <br>
>>>> f7336db6-22b4-497d-8c2f-04c833a28546 as<br>
>>>> mdsgv01_vol<br>
>>>> [2019-09-25 05:17:26.723239] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: <br>
>>>> Lock for<br>
>>>> vol mdsgv01 successfully released<br>
>>>> [2019-09-25 05:17:26.723273] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume<br>
>>>> mdsgv01 found<br>
>>>> [2019-09-25 05:17:26.723326] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: <br>
>>>> Returning 0<br>
>>>> [2019-09-25 05:17:26.723360] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] <br>
>>>> 0-management:<br>
>>>> Returning 0<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cmd_history.log <==<br>
>>>> [2019-09-25 05:17:26.723390] : volume start mdsgv01 : FAILED : <br>
>>>> Commit<br>
>>>> failed on localhost. Please check log file for details.<br>
>>>><br>
>>>> ==> /var/log/glusterfs/glusterd.log <==<br>
>>>> [2019-09-25 05:17:26.723479] D [MSGID: 0]<br>
>>>> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] <br>
>>>> 0-management:<br>
>>>> Returning 0<br>
>>>><br>
>>>><br>
>>>><br>
>>>> [root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol<br>
>>>> volume management<br>
>>>> type mgmt/glusterd<br>
>>>> option working-directory /var/lib/glusterd<br>
>>>> option transport-type socket,rdma<br>
>>>> option transport.socket.keepalive-time 10<br>
>>>> option transport.socket.keepalive-interval 2<br>
>>>> option transport.socket.read-fail-log off<br>
>>>> option ping-timeout 0<br>
>>>> option event-threads 1<br>
>>>> option rpc-auth-allow-insecure on<br>
>>>> # option cluster.server-quorum-type server<br>
>>>> # option cluster.quorum-type auto<br>
>>>> option server.event-threads 8<br>
>>>> option client.event-threads 8<br>
>>>> option performance.write-behind-window-size 8MB<br>
>>>> option performance.io-thread-count 16<br>
>>>> option performance.cache-size 1GB<br>
>>>> option nfs.trusted-sync on<br>
>>>> option storage.owner-uid 36<br>
>>>> option storage.owner-uid 36<br>
>>>> option cluster.data-self-heal-algorithm full<br>
>>>> option performance.low-prio-threads 32<br>
>>>> option features.shard-block-size 512MB<br>
>>>> option features.shard on<br>
>>>> end-volume<br>
>>>> [root@mdskvm-p01 glusterfs]#<br>
>>>><br>
>>>><br>
>>>> [root@mdskvm-p01 glusterfs]# gluster volume info<br>
>>>><br>
>>>> Volume Name: mdsgv01<br>
>>>> Type: Replicate<br>
>>>> Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0<br>
>>>> Status: Stopped<br>
>>>> Snapshot Count: 0<br>
>>>> Number of Bricks: 1 x 2 = 2<br>
>>>> Transport-type: tcp<br>
>>>> Bricks:<br>
>>>> Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02<br>
>>>> Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01<br>
>>>> Options Reconfigured:<br>
>>>> storage.owner-gid: 36<br>
>>>> cluster.data-self-heal-algorithm: full<br>
>>>> performance.low-prio-threads: 32<br>
>>>> features.shard-block-size: 512MB<br>
>>>> features.shard: on<br>
>>>> storage.owner-uid: 36<br>
>>>> cluster.server-quorum-type: none<br>
>>>> cluster.quorum-type: none<br>
>>>> server.event-threads: 8<br>
>>>> client.event-threads: 8<br>
>>>> performance.write-behind-window-size: 8MB<br>
>>>> performance.io-thread-count: 16<br>
>>>> performance.cache-size: 1GB<br>
>>>> nfs.trusted-sync: on<br>
>>>> server.allow-insecure: on<br>
>>>> performance.readdir-ahead: on<br>
>>>> diagnostics.brick-log-level: DEBUG<br>
>>>> diagnostics.brick-sys-log-level: INFO<br>
>>>> diagnostics.client-log-level: DEBUG<br>
>>>> [root@mdskvm-p01 glusterfs]#<br>
>>>><br>
>>>><br>
>>>> _______________________________________________<br>
>>>><br>
>>>> Community Meeting Calendar:<br>
>>>><br>
>>>> APAC Schedule -<br>
>>>> Every 2nd and 4th Tuesday at 11:30 AM IST<br>
>>>> Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>
>>>><br>
>>>> NA/EMEA Schedule -<br>
>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
>>>> Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>
>>>><br>
>>>> Gluster-devel mailing list<br>
>>>> <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a> <mailto:<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a>><br>
>>>> <a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a><br>
>>>><br>
>>>><br>
>>>><br>
>>>> -- <br>
>>>> Thanks,<br>
>>>> Sanju<br>
>>><br>
>>><br>
>>><br>
>><br>
>><br>
>><br>
> <br>
> <br>
<br>
<br>
-- <br>
Thx,<br>
TK.<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>Thanks,<br></div>Sanju<br></div></div>