<div dir="ltr"><div>Hi Ravi,</div><div><br></div><div>Thank you, that seems to have resolved the issue. After doing this, "gluster volume status all" showed gfs3 as online with a port and pid, however "gluster volume status all" didn't show any sync activity happening. At this point we loaded gfs3 with new firewall rules which explicitly allowed access from gfs1 and gfs2, and then "gluster volume status all" showed the file syncing. The gfs3 server should have allow access from gfs1 and gfs2 anyway by default, however I now believe that perhaps this wasn't the case, and maybe it was a firewall issue all along.</div><div><br></div><div>Thanks for all your help.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 25 May 2019 at 01:49, Ravishankar N <<a href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>Hi David,<br>
</p>
<div class="gmail-m_-259860184991938039moz-cite-prefix">On 23/05/19 3:54 AM, David Cunningham
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hi Ravi,</div>
<div><br>
</div>
<div>Please see the log attached. </div>
</div>
</div>
</div>
</blockquote>
When I <tt>grep -E "Connected to |disconnected from"
gvol0-add-brick-mount.log</tt>, I don't see a "Connected to
gvol0-client-1". It looks like this temporary mount is not able to
connect to the 2nd brick, which is why the lookup is failing due to
lack of quorum.<br>
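<p>That is, on gfs3 (assuming the default log directory; in a healthy
attempt you would expect "Connected to" lines for all three of
gvol0-client-0, -1 and -2):</p>
<p><tt>grep -E "Connected to |disconnected from" /var/log/glusterfs/gvol0-add-brick-mount.log</tt></p>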
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>The output of "gluster volume status" is as follows.
Should there be something listening on gfs3? I'm not sure
whether it having TCP Port and Pid as N/A is a symptom or
cause. Thank you.</div>
<div><br>
</div>
<div># gluster volume status<br>
Status of volume: gvol0<br>
Gluster process TCP Port RDMA
Port Online Pid<br>
------------------------------------------------------------------------------<br>
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152
0 Y 7706 <br>
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152
0 Y 7624 <br>
Brick gfs3:/nodirectwritedata/gluster/gvol0 N/A
N/A N N/A <br>
</div>
</div>
</div>
</div>
</blockquote>
<p>Can you see if the following steps help?<br>
</p>
<p>1. Do a <tt>`setfattr -n trusted.afr.gvol0-client-2 -v
0x000000000000000100000001 /nodirectwritedata/gluster/gvol0`</tt>
on <b>both</b> gfs1 and gfs2.</p>
<p>2. <tt>`gluster volume start gvol0 force`</tt></p>
<p>3. Check if Brick-3 now comes online with a valid TCP port and
PID. If it doesn't, check the brick log under
/var/log/glusterfs/bricks on gfs3 to see why.</p>
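<p>Putting steps 1 and 2 together as they would look in a shell (a
sketch using the paths from this thread; the getfattr is only there
to verify that the xattr took effect):</p>
<p><tt># on both gfs1 and gfs2:<br>
setfattr -n trusted.afr.gvol0-client-2 -v 0x000000000000000100000001 /nodirectwritedata/gluster/gvol0<br>
getfattr -n trusted.afr.gvol0-client-2 -e hex /nodirectwritedata/gluster/gvol0<br>
# then, from any node:<br>
gluster volume start gvol0 force<br>
gluster volume status gvol0</tt></p>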
<p>Thanks,</p>
<p>Ravi<br>
</p>
<p><br>
</p>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Self-heal Daemon on localhost               N/A       N/A        Y       19853<br>
Self-heal Daemon on gfs1                    N/A       N/A        Y       28600<br>
Self-heal Daemon on gfs2                    N/A       N/A        Y       17614<br>
<br>
Task Status of Volume gvol0<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, 22 May 2019 at 18:06,
Ravishankar N <<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>If you are trying this again, please run `gluster volume set
$volname client-log-level DEBUG` before attempting the
add-brick, and attach the gvol0-add-brick-mount.log here.
After that, you can change the client-log-level back to
INFO.</p>
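<p>For example, with the volume name from this thread (a sketch; the
fully qualified option name is diagnostics.client-log-level):</p>
<p><tt>gluster volume set gvol0 client-log-level DEBUG<br>
gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0<br>
gluster volume set gvol0 client-log-level INFO</tt></p>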
<p>-Ravi<br>
</p>
<div class="gmail-m_-259860184991938039gmail-m_5909859710443882225moz-cite-prefix">On
22/05/19 11:32 AM, Ravishankar N wrote:<br>
</div>
<blockquote type="cite">
<p><br>
</p>
<div class="gmail-m_-259860184991938039gmail-m_5909859710443882225moz-cite-prefix">On
22/05/19 11:23 AM, David Cunningham wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>Hi Ravi,</div>
<div><br>
</div>
<div>I'd already done exactly that before, where step
3 was a simple 'rm -rf
/nodirectwritedata/gluster/gvol0'. Have you another
suggestion on what the cleanup or reformat should
be?</div>
</div>
</blockquote>
`rm -rf /nodirectwritedata/gluster/gvol0` does look okay
to me David. Basically, '/nodirectwritedata/gluster/gvol0'
must be empty and must not have any extended attributes
set on it. Why fuse_first_lookup() is failing is a bit of
a mystery to me at this point. <span class="gmail-m_-259860184991938039gmail-m_5909859710443882225moz-smiley-s2"><span>:-(</span></span><br>
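A quick way to double-check that on gfs3 before re-adding the brick
(a sketch; both the listing and the xattr dump should show nothing
besides the bare directory itself):<br>
<tt>ls -la /nodirectwritedata/gluster/gvol0<br>
getfattr -d -m . -e hex /nodirectwritedata/gluster/gvol0</tt><br>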
Regards,<br>
Ravi<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Thank you.</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, 22 May 2019
at 13:56, Ravishankar N <<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>Hmm, so the volume info seems to indicate that
the add-brick was successful but the gfid xattr
is missing on the new brick (as are the actual
files, barring the .glusterfs folder, according
to your previous mail).</p>
<p>Do you want to try removing and adding it
again?<br>
</p>
<p>1. `gluster volume remove-brick gvol0 replica 2
gfs3:/nodirectwritedata/gluster/gvol0 force`
from gfs1<br>
</p>
<p>2. Check that gluster volume info is now back
to a 1x2 volume on all nodes and `gluster peer
status` is connected on all nodes.<br>
</p>
<p>3. Cleanup or reformat
'/nodirectwritedata/gluster/gvol0' on gfs3.<br>
</p>
<p>4. `gluster volume add-brick gvol0 replica 3
arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0`
from gfs1.</p>
<p>5. Check that the files are getting healed on
to the new brick.<br>
</p>
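<p>For reference, the whole cycle from a shell on gfs1 would look
something like this (a sketch using the names from this thread; heal
progress can be watched with `gluster volume heal gvol0 info`):</p>
<p><tt>gluster volume remove-brick gvol0 replica 2 gfs3:/nodirectwritedata/gluster/gvol0 force<br>
gluster volume info gvol0<br>
gluster peer status<br>
# clean up /nodirectwritedata/gluster/gvol0 on gfs3, then:<br>
gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0<br>
gluster volume heal gvol0 info</tt></p>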
Thanks,<br>
Ravi<br>
<div class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880moz-cite-prefix">On
22/05/19 6:50 AM, David Cunningham wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hi Ravi,</div>
<div><br>
</div>
<div>Certainly. On the existing two
nodes:</div>
<div><br>
</div>
<div>gfs1 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: nodirectwritedata/gluster/gvol0<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.gvol0-client-2=0x000000000000000000000000<br>
trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6<br>
</div>
<div><br>
</div>
<div>gfs2 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: nodirectwritedata/gluster/gvol0<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.gvol0-client-0=0x000000000000000000000000<br>
trusted.afr.gvol0-client-2=0x000000000000000000000000<br>
trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6<br>
</div>
<div><br>
</div>
<div>On the new node:</div>
<div><br>
</div>
<div>gfs3 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: nodirectwritedata/gluster/gvol0<br>
trusted.afr.dirty=0x000000000000000000000001<br>
trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6<br>
</div>
<div><br>
</div>
<div>Output of "gluster volume info" is the same on all 3 nodes and is:</div>
<div><br>
</div>
<div># gluster volume info<br>
<br>
Volume Name: gvol0<br>
Type: Replicate<br>
Volume ID: fb5af69e-1c3e-4164-8b23-c1d7bec9b1b6<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: gfs1:/nodirectwritedata/gluster/gvol0<br>
Brick2: gfs2:/nodirectwritedata/gluster/gvol0<br>
Brick3: gfs3:/nodirectwritedata/gluster/gvol0 (arbiter)<br>
Options Reconfigured:<br>
performance.client-io-threads: off<br>
nfs.disable: on<br>
transport.address-family: inet<br>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, 22
May 2019 at 12:43, Ravishankar N <<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> Hi David,<br>
Could you provide the `getfattr -d -m. -e
hex /nodirectwritedata/gluster/gvol0`
output of all bricks and the output of
`gluster volume info`?<br>
<br>
Thanks,<br>
Ravi<br>
<div class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419moz-cite-prefix">On
22/05/19 4:57 AM, David Cunningham
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hi Sanju,</div>
<div><br>
</div>
<div>Here's what glusterd.log says on the new arbiter server when trying to add the node:</div>
<div><br>
</div>
<div>[2019-05-22 00:15:05.963059] I [run.c:242:runner_log] (-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0x3b2cd) [0x7fe4ca9102cd] -->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0xe6b85) [0x7fe4ca9bbb85] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fe4d5ecc955] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=gvol0 --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd<br>
[2019-05-22 00:15:05.963177] I [MSGID: 106578] [glusterd-brick-ops.c:1355:glusterd_op_perform_add_bricks] 0-management: replica-count is set 3<br>
[2019-05-22 00:15:05.963228] I [MSGID: 106578] [glusterd-brick-ops.c:1360:glusterd_op_perform_add_bricks] 0-management: arbiter-count is set 1<br>
[2019-05-22 00:15:05.963257] I [MSGID: 106578] [glusterd-brick-ops.c:1364:glusterd_op_perform_add_bricks] 0-management: type is set 0, need to change it<br>
[2019-05-22 00:15:17.015268] E [MSGID: 106053] [glusterd-utils.c:13942:glusterd_handle_replicate_brick_ops] 0-management: Failed to set extended attribute trusted.add-brick : Transport endpoint is not connected [Transport endpoint is not connected]<br>
[2019-05-22 00:15:17.036479] E [MSGID: 106073] [glusterd-brick-ops.c:2595:glusterd_op_add_brick] 0-glusterd: Unable to add bricks<br>
[2019-05-22 00:15:17.036595] E [MSGID: 106122] [glusterd-mgmt.c:299:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit failed.<br>
[2019-05-22 00:15:17.036710] E [MSGID: 106122] [glusterd-mgmt-handler.c:594:glusterd_handle_commit_fn] 0-management: commit failed on operation Add brick<br>
</div>
<div><br>
</div>
<div>As before gvol0-add-brick-mount.log said:</div>
<div><br>
</div>
<div>[2019-05-22 00:15:17.005695] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22<br>
[2019-05-22 00:15:17.005749] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse: switched to graph 0<br>
[2019-05-22 00:15:17.010101] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-05-22 00:15:17.014217] W [fuse-bridge.c:897:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)<br>
[2019-05-22 00:15:17.015097] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)<br>
[2019-05-22 00:15:17.015158] W [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 3: SETXATTR 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution failed<br>
[2019-05-22 00:15:17.035636] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /tmp/mntYGNbj9<br>
[2019-05-22 00:15:17.035854] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f7745ccedd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55c81b63de75] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55c81b63dceb] ) 0-: received signum (15), shutting down<br>
[2019-05-22 00:15:17.035942] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/tmp/mntYGNbj9'.<br>
[2019-05-22 00:15:17.035966] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/tmp/mntYGNbj9'.<br>
</div>
<div><br>
</div>
<div>Here are the processes running on the new arbiter server:</div>
<div># ps -ef | grep gluster<br>
root      3466     1  0 20:13 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/24c12b09f93eec8e.socket --xlator-option *replicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name glustershd<br>
root      6832     1  0 May16 ?        00:02:10 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO<br>
root     17841     1  0 May16 ?        00:00:58 /usr/sbin/glusterfs --process-name fuse --volfile-server=gfs1 --volfile-id=/gvol0 /mnt/glusterfs<br>
</div>
<div><br>
</div>
<div>Here are the files created on the new arbiter server:</div>
<div># find /nodirectwritedata/gluster/gvol0 | xargs ls -ald<br>
drwxr-xr-x 3 root root 4096 May 21 20:15 /nodirectwritedata/gluster/gvol0<br>
drw------- 2 root root 4096 May 21 20:15 /nodirectwritedata/gluster/gvol0/.glusterfs<br>
</div>
<div><br>
</div>
<div>Thank you for your
help!</div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On
Tue, 21 May 2019 at 00:10, Sanju
Rakonde <<a href="mailto:srakonde@redhat.com" target="_blank">srakonde@redhat.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">David,
<div><br>
</div>
<div>can you please attach
glusterd.logs? As the error
message says, Commit failed on
the arbitar node, we might be
able to find some issue on that
node.</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On
Mon, May 20, 2019 at 10:10 AM
Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr"><br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri,
17 May 2019 at 06:01,
David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hello,</div>
<div><br>
</div>
<div>We're adding an arbiter node to an existing volume and having an issue. Can anyone help? The root cause error appears to be "00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)", as below.</div>
<div><br>
</div>
<div>We are running glusterfs 5.6.1. Thanks in advance for any assistance!<br>
</div>
<div><br>
</div>
<div>On existing node gfs1, trying to add new arbiter node gfs3:</div>
<div><br>
</div>
<div># gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0<br>
volume add-brick: failed: Commit failed on gfs3. Please check log file for details.<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>This looks like a glusterd issue. Please check the glusterd logs for more info.</div>
<div>Adding the glusterd dev to this thread. Sanju, can you take a look?</div>
<div> </div>
<div>Regards,</div>
<div>Nithya</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div><br>
</div>
<div>On new node gfs3 in gvol0-add-brick-mount.log:</div>
<div><br>
</div>
<div>[2019-05-17 01:20:22.689721] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22<br>
[2019-05-17 01:20:22.689778] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse: switched to graph 0<br>
[2019-05-17 01:20:22.694897] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)<br>
[2019-05-17 01:20:22.699770] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)<br>
[2019-05-17 01:20:22.699834] W [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 2: SETXATTR 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution failed<br>
[2019-05-17 01:20:22.715656] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /tmp/mntQAtu3f<br>
[2019-05-17 01:20:22.715865] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7fb223bf6dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560886581e75] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560886581ceb] ) 0-: received signum (15), shutting down<br>
[2019-05-17 01:20:22.715926] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/tmp/mntQAtu3f'.<br>
[2019-05-17 01:20:22.715953] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/tmp/mntQAtu3f'.<br>
</div>
<div><br>
</div>
<div>Processes running on new node gfs3:</div>
<div><br>
</div>
<div># ps -ef | grep gluster<br>
root      6832     1  0 20:17 ?        00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO<br>
root     15799     1  0 20:17 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/24c12b09f93eec8e.socket --xlator-option *replicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name glustershd<br>
root     16856 16735  0 21:21 pts/0    00:00:00 grep --color=auto gluster<br>
<br>
</div>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419gmail-m_750124261411719583gmail-m_4549560710938974858gmail-m_-5557460900297667963gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>David
Cunningham,
Voisonics
Limited<br>
<a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>
USA: +1 213
221 1092<br>
New Zealand:
+64 (0)28 2558
3782</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419gmail-m_750124261411719583gmail_signature">
<div dir="ltr">
<div>Thanks,<br>
</div>
Sanju<br>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<br>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>David
Cunningham,
Voisonics Limited<br>
<a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>
USA: +1 213 221
1092<br>
New Zealand: +64
(0)28 2558 3782</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419mimeAttachmentHeader"></fieldset>
<pre class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419moz-quote-pre">_______________________________________________
Gluster-users mailing list
<a class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail-m_4703823630811393419moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<br>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail-m_7826856973782339880gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>David Cunningham,
Voisonics Limited<br>
<a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>
USA: +1 213 221 1092<br>
New Zealand: +64 (0)28
2558 3782</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<br>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail-m_5909859710443882225gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>David Cunningham, Voisonics
Limited<br>
<a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>
USA: +1 213 221 1092<br>
New Zealand: +64 (0)28 2558 3782</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<br>
-- <br>
<div dir="ltr" class="gmail-m_-259860184991938039gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>David Cunningham, Voisonics Limited<br>
<a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>
USA: +1 213 221 1092<br>
New Zealand: +64 (0)28 2558 3782</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>