<div dir="ltr"><div><div>This looks like a bug: tier-enabled=0 is an additional entry in the info file on shchhv01. As per the code, this field should be written into the glusterd store only if the op-version is >= 30706. My guess is that since 3.8.4 didn't have commit 33f8703a1 ("glusterd: regenerate volfiles on op-version bump up"), the info and volfiles were not regenerated when the op-version was bumped up, which caused the tier-enabled entry to be missing from the info file on the other peers.<br></div><div><br></div><div>For now, you can copy the info file for the volumes where the mismatch happened from shchhv01 to shchhv02 and restart the glusterd service on shchhv02. That should fix this up temporarily. Unfortunately this step may need to be repeated on the other nodes as well.<br></div><br></div><div>@Hari - Could you help in debugging this further?<br><br></div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <span dir="ltr"><<a href="mailto:gustave@dahlfamily.net" target="_blank">gustave@dahlfamily.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I was attempting the same on a local sandbox and hit the same problem.<br>
<br>
<br>
Current: 3.8.4<br>
<br>
Volume Name: shchst01<br>
Type: Distributed-Replicate<br>
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4 x 3 = 12<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: shchhv01-sto:/data/brick3/shchst01<br>
Brick2: shchhv02-sto:/data/brick3/shchst01<br>
Brick3: shchhv03-sto:/data/brick3/shchst01<br>
Brick4: shchhv01-sto:/data/brick1/shchst01<br>
Brick5: shchhv02-sto:/data/brick1/shchst01<br>
Brick6: shchhv03-sto:/data/brick1/shchst01<br>
Brick7: shchhv02-sto:/data/brick2/shchst01<br>
Brick8: shchhv03-sto:/data/brick2/shchst01<br>
Brick9: shchhv04-sto:/data/brick2/shchst01<br>
Brick10: shchhv02-sto:/data/brick4/shchst01<br>
Brick11: shchhv03-sto:/data/brick4/shchst01<br>
Brick12: shchhv04-sto:/data/brick4/shchst01<br>
Options Reconfigured:<br>
cluster.data-self-heal-algorithm: full<br>
features.shard-block-size: 512MB<br>
features.shard: enable<br>
performance.readdir-ahead: on<br>
storage.owner-uid: 9869<br>
storage.owner-gid: 9869<br>
server.allow-insecure: on<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
cluster.eager-lock: enable<br>
network.remote-dio: enable<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
cluster.self-heal-daemon: on<br>
nfs.disable: on<br>
performance.io-thread-count: 64<br>
performance.cache-size: 1GB<br>
<br>
Upgraded shchhv01-sto to 3.12.3, others remain at 3.8.4<br>
<br>
RESULT<br>
=====================<br>
Hostname: shchhv01-sto<br>
Uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816<br>
State: Peer Rejected (Connected)<br>
<br>
Upgraded Server: shchhv01-sto<br>
==============================<br>
[2017-12-20 05:02:44.747313] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1<br>
[2017-12-20 05:02:44.747387] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2<br>
[2017-12-20 05:02:44.749087] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed<br>
[2017-12-20 05:02:44.749165] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed<br>
[2017-12-20 05:02:44.749563] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed<br>
[2017-12-20 05:02:54.676324] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: 546503ae-ba0e-40d4-843f-c5dbac22d272, host: shchhv02-sto, port: 0<br>
[2017-12-20 05:02:54.690237] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800<br>
[2017-12-20 05:02:54.695823] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 546503ae-ba0e-40d4-843f-c5dbac22d272<br>
[2017-12-20 05:02:54.696956] E [MSGID: 106010] [glusterd-utils.c:3370:glusterd_compare_friend_volume] 0-management: Version of Cksums shchst01-sto differ. local cksum = 4218452135, remote cksum = 2747317484 on peer shchhv02-sto<br>
[2017-12-20 05:02:54.697796] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to shchhv02-sto (0), ret: 0, op_ret: -1<br>
[2017-12-20 05:02:55.033822] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: 3de22cb5-c1c1-4041-a1e1-eb969afa9b4b, host: shchhv03-sto, port: 0<br>
[2017-12-20 05:02:55.038460] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800<br>
[2017-12-20 05:02:55.040032] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3de22cb5-c1c1-4041-a1e1-eb969afa9b4b<br>
[2017-12-20 05:02:55.040266] E [MSGID: 106010] [glusterd-utils.c:3370:glusterd_compare_friend_volume] 0-management: Version of Cksums shchst01-sto differ. local cksum = 4218452135, remote cksum = 2747317484 on peer shchhv03-sto<br>
[2017-12-20 05:02:55.040405] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to shchhv03-sto (0), ret: 0, op_ret: -1<br>
[2017-12-20 05:02:55.584854] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: 36306e37-d7f0-4fec-9140-0d0f1bd2d2d5, host: shchhv04-sto, port: 0<br>
[2017-12-20 05:02:55.595125] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800<br>
[2017-12-20 05:02:55.600804] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 36306e37-d7f0-4fec-9140-0d0f1bd2d2d5<br>
[2017-12-20 05:02:55.601288] E [MSGID: 106010] [glusterd-utils.c:3370:glusterd_compare_friend_volume] 0-management: Version of Cksums shchst01-sto differ. local cksum = 4218452135, remote cksum = 2747317484 on peer shchhv04-sto<br>
[2017-12-20 05:02:55.601497] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to shchhv04-sto (0), ret: 0, op_ret: -1<br>
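(Editor's illustration.) The RJT responses above come from glusterd comparing a checksum of each volume's on-disk info between peers; a single extra line such as tier-enabled=0 yields a different checksum and a Peer Rejected state. A toy sketch of the effect, using CRC32 as a stand-in since it is not glusterd's actual checksum routine:

```python
import zlib

# Two copies of a volume "info" file: the upgraded peer carries one
# extra key (tier-enabled=0) that the 3.8.4 peers never wrote.
info_hv02 = "op-version=30700\nclient-op-version=30700\nquota-version=0\n"
info_hv01 = info_hv02 + "tier-enabled=0\n"

cksum_hv01 = zlib.crc32(info_hv01.encode())
cksum_hv02 = zlib.crc32(info_hv02.encode())

# glusterd rejects the friend request when the two checksums differ.
print(cksum_hv01 != cksum_hv02)
```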
<br>
Another Server: shchhv02-sto<br>
==============================<br>
[2017-12-20 05:02:44.667833] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x1de5c) [0x7f75fdc12e5c] -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x27a08) [0x7f75fdc1ca08] -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xd07fa) [0x7f75fdcc57fa] ) 0-management: Lock for vol shchst01-sto not held<br>
[2017-12-20 05:02:44.667795] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <shchhv01-sto> (<f6205edb-a0ea-4247-9594-c4cdc0d05816>), in state <Peer Rejected>, has disconnected from glusterd.<br>
[2017-12-20 05:02:44.667948] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for shchst01-sto<br>
[2017-12-20 05:02:44.760103] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800<br>
[2017-12-20 05:02:44.765389] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816<br>
[2017-12-20 05:02:54.686185] E [MSGID: 106010] [glusterd-utils.c:2930:glusterd_compare_friend_volume] 0-management: Version of Cksums shchst01 differ. local cksum = 2747317484, remote cksum = 4218452135 on peer shchhv01-sto<br>
[2017-12-20 05:02:54.686882] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to shchhv01-sto (0), ret: 0, op_ret: -1<br>
[2017-12-20 05:02:54.717854] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816, host: shchhv01-sto, port: 0<br>
<br>
Another Server: shchhv04-sto<br>
==============================<br>
[2017-12-20 05:02:44.667620] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <shchhv01-sto> (<f6205edb-a0ea-4247-9594-c4cdc0d05816>), in state <Peer Rejected>, has disconnected from glusterd.<br>
[2017-12-20 05:02:44.667808] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x1de5c) [0x7f10a33d9e5c] -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x27a08) [0x7f10a33e3a08] -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xd07fa) [0x7f10a348c7fa] ) 0-management: Lock for vol shchst01-sto not held<br>
[2017-12-20 05:02:44.667827] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for shchst01-sto<br>
[2017-12-20 05:02:44.760077] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800<br>
[2017-12-20 05:02:44.768796] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816<br>
[2017-12-20 05:02:55.595095] E [MSGID: 106010] [glusterd-utils.c:2930:glusterd_compare_friend_volume] 0-management: Version of Cksums shchst01-sto differ. local cksum = 2747317484, remote cksum = 4218452135 on peer shchhv01-sto<br>
[2017-12-20 05:02:55.595273] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to shchhv01-sto (0), ret: 0, op_ret: -1<br>
[2017-12-20 05:02:55.612957] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received RJT from uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816, host: shchhv01-sto, port: 0<br>
<br>
<vol>/info<br>
<br>
Upgraded Server: shchhv01-sto<br>
=========================<br>
type=2<br>
count=12<br>
status=1<br>
sub_count=3<br>
stripe_count=1<br>
replica_count=3<br>
disperse_count=0<br>
redundancy_count=0<br>
version=52<br>
transport-type=0<br>
volume-id=bcd53e52-cde6-4e58-85f9-71d230b7b0d3<br>
username=5a4ae8d8-dbcb-408e-ab73-629255c14ffc<br>
password=58652573-0955-4d00-893a-9f42d0f16717<br>
op-version=30700<br>
client-op-version=30700<br>
quota-version=0<br>
tier-enabled=0<br>
parent_volname=N/A<br>
restored_from_snap=00000000-0000-0000-0000-000000000000<br>
snap-max-hard-limit=256<br>
cluster.data-self-heal-algorithm=full<br>
features.shard-block-size=512MB<br>
features.shard=enable<br>
nfs.disable=on<br>
cluster.self-heal-daemon=on<br>
cluster.server-quorum-type=server<br>
cluster.quorum-type=auto<br>
network.remote-dio=enable<br>
cluster.eager-lock=enable<br>
performance.stat-prefetch=off<br>
performance.io-cache=off<br>
performance.read-ahead=off<br>
performance.quick-read=off<br>
server.allow-insecure=on<br>
storage.owner-gid=9869<br>
storage.owner-uid=9869<br>
performance.readdir-ahead=on<br>
performance.io-thread-count=64<br>
performance.cache-size=1GB<br>
brick-0=shchhv01-sto:-data-brick3-shchst01<br>
brick-1=shchhv02-sto:-data-brick3-shchst01<br>
brick-2=shchhv03-sto:-data-brick3-shchst01<br>
brick-3=shchhv01-sto:-data-brick1-shchst01<br>
brick-4=shchhv02-sto:-data-brick1-shchst01<br>
brick-5=shchhv03-sto:-data-brick1-shchst01<br>
brick-6=shchhv02-sto:-data-brick2-shchst01<br>
brick-7=shchhv03-sto:-data-brick2-shchst01<br>
brick-8=shchhv04-sto:-data-brick2-shchst01<br>
brick-9=shchhv02-sto:-data-brick4-shchst01<br>
brick-10=shchhv03-sto:-data-brick4-shchst01<br>
brick-11=shchhv04-sto:-data-brick4-shchst01<br>
<br>
Another Server: shchhv02-sto<br>
==============================<br>
type=2<br>
count=12<br>
status=1<br>
sub_count=3<br>
stripe_count=1<br>
replica_count=3<br>
disperse_count=0<br>
redundancy_count=0<br>
version=52<br>
transport-type=0<br>
volume-id=bcd53e52-cde6-4e58-85f9-71d230b7b0d3<br>
username=5a4ae8d8-dbcb-408e-ab73-629255c14ffc<br>
password=58652573-0955-4d00-893a-9f42d0f16717<br>
op-version=30700<br>
client-op-version=30700<br>
quota-version=0<br>
parent_volname=N/A<br>
restored_from_snap=00000000-0000-0000-0000-000000000000<br>
snap-max-hard-limit=256<br>
cluster.data-self-heal-algorithm=full<br>
features.shard-block-size=512MB<br>
features.shard=enable<br>
performance.readdir-ahead=on<br>
storage.owner-uid=9869<br>
storage.owner-gid=9869<br>
server.allow-insecure=on<br>
performance.quick-read=off<br>
performance.read-ahead=off<br>
performance.io-cache=off<br>
performance.stat-prefetch=off<br>
cluster.eager-lock=enable<br>
network.remote-dio=enable<br>
cluster.quorum-type=auto<br>
cluster.server-quorum-type=server<br>
cluster.self-heal-daemon=on<br>
nfs.disable=on<br>
performance.io-thread-count=64<br>
performance.cache-size=1GB<br>
brick-0=shchhv01-sto:-data-brick3-shchst01<br>
brick-1=shchhv02-sto:-data-brick3-shchst01<br>
brick-2=shchhv03-sto:-data-brick3-shchst01<br>
brick-3=shchhv01-sto:-data-brick1-shchst01<br>
brick-4=shchhv02-sto:-data-brick1-shchst01<br>
brick-5=shchhv03-sto:-data-brick1-shchst01<br>
brick-6=shchhv02-sto:-data-brick2-shchst01<br>
brick-7=shchhv03-sto:-data-brick2-shchst01<br>
brick-8=shchhv04-sto:-data-brick2-shchst01<br>
brick-9=shchhv02-sto:-data-brick4-shchst01<br>
brick-10=shchhv03-sto:-data-brick4-shchst01<br>
brick-11=shchhv04-sto:-data-brick4-shchst01<br>
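(Editor's illustration.) Diffing the two info files above shows tier-enabled=0 as the only entry present on the upgraded node and absent on the 3.8.4 peer. A small sketch of such a comparison (a hypothetical helper, not part of Gluster):

```python
def parse_info(text):
    """Parse a glusterd-style info file into a dict of key=value pairs."""
    entries = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            entries[key] = value
    return entries

def missing_keys(local, remote):
    """Keys present in the local info file but absent from the remote one."""
    return sorted(set(parse_info(local)) - set(parse_info(remote)))

upgraded = "op-version=30700\nquota-version=0\ntier-enabled=0\n"
old_peer = "op-version=30700\nquota-version=0\n"
print(missing_keys(upgraded, old_peer))  # ['tier-enabled']
```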
<br>
NOTE<br>
<br>
[root@shchhv01 shchst01]# gluster volume get shchst01 cluster.op-version<br>
Warning: Support to get global option value using `volume get <volname>`<br>
will be deprecated from next release. Consider using `volume get all`<br>
instead for global options<br>
Option                                   Value<br>
------                                   -----<br>
cluster.op-version                       30800<br>
<br>
[root@shchhv02 shchst01]# gluster volume get shchst01 cluster.op-version<br>
Option                                   Value<br>
------                                   -----<br>
cluster.op-version                       30800<br>
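(Editor's illustration.) Per the explanation at the top of the thread, glusterd writes the tier-enabled entry whenever it regenerates the store at op-version >= 30706; the 3.8.4 peers lack commit 33f8703a1, so they never regenerated their info files after the bump to 30800. A sketch of that write gate (the constant and entry list mirror the mail's description, not glusterd's actual code):

```python
GD_OP_VERSION_3_7_6 = 30706  # op-version at which tier-enabled entered the store

def store_entries(conf_op_version, tier_enabled=0):
    """Emit the store entries a peer writes when it regenerates its info
    file, gated on the running op-version -- a sketch of the behaviour
    described above, not glusterd's implementation."""
    entries = ["op-version=30700", "client-op-version=30700", "quota-version=0"]
    if conf_op_version >= GD_OP_VERSION_3_7_6:
        entries.append("tier-enabled=%d" % tier_enabled)
    return entries

# Any regeneration at op-version >= 30706 writes the entry; a peer whose
# info file was last written at an older op-version never gains it.
print("tier-enabled=0" in store_entries(30800))  # True
print("tier-enabled=0" in store_entries(30700))  # False
```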
<div class="HOEnZb"><div class="h5"><br>
-----Original Message-----<br>
From: <a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a><br>
[mailto:<a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a>] On Behalf Of Ziemowit Pierzycki<br>
Sent: Tuesday, December 19, 2017 3:56 PM<br>
To: gluster-users <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
Subject: Re: [Gluster-users] Upgrading from Gluster 3.8 to 3.12<br>
<br>
I have not done the upgrade yet. Since this is a production cluster I need<br>
to make sure it stays up, or schedule some downtime if it won't.<br>
Thanks.<br>
<br>
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
wrote:<br>
><br>
><br>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki<br>
> <<a href="mailto:ziemowit@pierzycki.com">ziemowit@pierzycki.com</a>><br>
> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> I have a cluster of 10 servers all running Fedora 24 along with<br>
>> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27<br>
>> with Gluster 3.12. I saw the documentation and did some testing but<br>
>> I would like to run my plan through some (more?) educated minds.<br>
>><br>
>> The current setup is:<br>
>><br>
>> Volume Name: vol0<br>
>> Distributed-Replicate<br>
>> Number of Bricks: 2 x (2 + 1) = 6<br>
>> Bricks:<br>
>> Brick1: glt01:/vol/vol0<br>
>> Brick2: glt02:/vol/vol0<br>
>> Brick3: glt05:/vol/vol0 (arbiter)<br>
>> Brick4: glt03:/vol/vol0<br>
>> Brick5: glt04:/vol/vol0<br>
>> Brick6: glt06:/vol/vol0 (arbiter)<br>
>><br>
>> Volume Name: vol1<br>
>> Distributed-Replicate<br>
>> Number of Bricks: 2 x (2 + 1) = 6<br>
>> Bricks:<br>
>> Brick1: glt07:/vol/vol1<br>
>> Brick2: glt08:/vol/vol1<br>
>> Brick3: glt05:/vol/vol1 (arbiter)<br>
>> Brick4: glt09:/vol/vol1<br>
>> Brick5: glt10:/vol/vol1<br>
>> Brick6: glt06:/vol/vol1 (arbiter)<br>
>><br>
>> After performing the upgrade because of differences in checksums, the<br>
>> upgraded nodes will become:<br>
>><br>
>> State: Peer Rejected (Connected)<br>
><br>
><br>
> Have you upgraded all the nodes? If yes, have you bumped up the<br>
> cluster.op-version after upgrading all the nodes? Please follow :<br>
> <a href="http://docs.gluster.org/en/latest/Upgrade-Guide/op_version/" rel="noreferrer" target="_blank">http://docs.gluster.org/en/latest/Upgrade-Guide/op_version/</a> for more<br>
> details on how to bump up the cluster.op-version. In case you have<br>
> done all of these and you're seeing a checksum issue then I'm afraid<br>
> you have hit a bug. I'd need further details like the checksum<br>
> mismatch error from glusterd.log along with the exact volume's info<br>
> file from /var/lib/glusterd/vols/<volname>/info from both peers to<br>
> debug this further.<br>
><br>
>><br>
>> If I start doing the upgrades one at a time, with nodes glt10 to<br>
>> glt01 except for the arbiters glt05 and glt06, and then upgrading the<br>
>> arbiters last, everything should remain online at all times through<br>
>> the process. Correct?<br>
>><br>
>> Thanks.<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>