<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 07/02/17 12:50, Nag Pavan Chilakam
wrote:<br>
</div>
<blockquote
cite="mid:122323847.21858642.1486471814454.JavaMail.zimbra@redhat.com"
type="cite">
<pre wrap="">Hi,
Can you help us with more information on the volume, like volume status and volume info
One cause of a "transport endpoint" error is that the brick could be down.
Also, I see that the syntax used for healing is wrong.
You need to use the syntax below:
gluster v heal &lt;vname&gt; split-brain source-brick &lt;brick path&gt; &lt;filename, considering the brick path as /&gt;
In your case, if the brick path is "/G-store/1" and the file to be healed is "that_file", then use the syntax below (here I am assuming "that_file" lies directly under the brick path):
gluster volume heal USER-HOME split-brain source-brick 10.5.6.100:/G-store/1 /that_file
</pre>
</blockquote>
<br>
That was just a copy-paste typo on my part; with the correct syntax it
still does not heal. Interestingly, heal does not report that file at all.<br>
<br>
I've replied to the thread "GFID Mismatch - Automatic Correction?" - I
think my problem is similar. Here is a file that heal actually sees:<br>
<br>
<br>
$ gluster vol heal USER-HOME info<br>
Brick
10.5.6.100:/__.aLocalStorages/3/0-GLUSTERs/0-USER.HOME/aUser/.vim.backup/.bash_profile.swp
<br>
Status: Connected<br>
Number of entries: 1<br>
<br>
Brick
10.5.6.49:/__.aLocalStorages/3/0-GLUSTERs/0-USER.HOME/aUser/.vim.backup/.bash_profile.swp
<br>
Status: Connected<br>
Number of entries: 1<br>
<br>
I'm copying and pasting what I said in my reply to that thread:<br>
...<br>
<br>
Yep, I'm seeing the same thing; the extended attributes are as follows:<br>
3]$ getfattr -d -m . -e hex .<br>
# file: .<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
trusted.afr.USER-HOME-client-2=0x000000000000000000000000<br>
trusted.afr.USER-HOME-client-3=0x000000000000000000000000<br>
trusted.afr.USER-HOME-client-5=0x000000000000000000000000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.gfid=0x06341b521ba94ab7938eca57f7a1824f<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5898e0cf000dd2fe<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x00701c90fcb11200fffffef6f08c798e0000006a99819205<br>
trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.1=0x00701c90fcb11200fffffef6f08c798e0000006a99819205<br>
3]$ getfattr -d -m . -e hex .vim.backup<br>
# file: .vim.backup<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
trusted.afr.USER-HOME-client-3=0x000000000000000000000000<br>
trusted.gfid=0x0b3a223955534de89086679a4dce8156<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5898621c0005d720<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.quota.06341b52-1ba9-4ab7-938e-ca57f7a1824f.contri.1=0x000000000000040000000000000000020000000000000001<br>
trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.1=0x000000000000040000000000000000020000000000000001<br>
3]$ getfattr -d -m . -e hex .vim.backup/.bash_profile.swp<br>
# file: .vim.backup/.bash_profile.swp<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
trusted.afr.USER-HOME-client-0=0x000000010000000100000000<br>
trusted.afr.USER-HOME-client-5=0x000000010000000100000000<br>
trusted.gfid=0xc2693670fc6d4fed953f21dcb77a02cf<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5896043c000baa55<br>
trusted.glusterfs.quota.0b3a2239-5553-4de8-9086-679a4dce8156.contri.1=0x00000000000000000000000000000001<br>
trusted.pgfid.0b3a2239-5553-4de8-9086-679a4dce8156=0x00000001<br>
<br>
2]$ getfattr -d -m . -e hex .<br>
# file: .<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a64656661756c745f743a733000<br>
trusted.afr.USER-HOME-client-1=0x000000000000000000000000<br>
trusted.afr.USER-HOME-client-2=0x000000000000000000000000<br>
trusted.afr.USER-HOME-client-3=0x000000000000000000000000<br>
trusted.afr.USER-HOME-client-5=0x000000000000000000000000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.gfid=0x06341b521ba94ab7938eca57f7a1824f<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5898e0d000016f82<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0xa5e66200a7a45000cb96fbf7d6336229fae7152d8851097b<br>
trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.1=0xa5e66200a7a45000cb96fbf7d6336229fae7152d8851097b<br>
2]$ getfattr -d -m . -e hex .vim.backup<br>
# file: .vim.backup<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a64656661756c745f743a733000<br>
trusted.afr.USER-HOME-client-3=0x000000000000000000000000<br>
trusted.gfid=0x0b3a223955534de89086679a4dce8156<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5898621b000855fe<br>
trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>
trusted.glusterfs.quota.06341b52-1ba9-4ab7-938e-ca57f7a1824f.contri.1=0x000000000000040000000000000000020000000000000001<br>
trusted.glusterfs.quota.dirty=0x3000<br>
trusted.glusterfs.quota.size.1=0x000000000000040000000000000000020000000000000001<br>
2]$ getfattr -d -m . -e hex .vim.backup/.bash_profile.swp<br>
# file: .vim.backup/.bash_profile.swp<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a64656661756c745f743a733000<br>
trusted.afr.USER-HOME-client-5=0x000000010000000100000000<br>
trusted.afr.USER-HOME-client-6=0x000000010000000100000000<br>
trusted.gfid=0x8a5b6e4ad18a49d0bae920c9cf8673a5<br>
trusted.glusterfs.9e4ed9b7-373a-413b-bc82-b6f978e82ec4.xtime=0x5896041400058191<br>
trusted.glusterfs.quota.0b3a2239-5553-4de8-9086-679a4dce8156.contri.1=0x00000000000000000000000000000001<br>
trusted.pgfid.0b3a2239-5553-4de8-9086-679a4dce8156=0x00000001<br>
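<br>
Out of curiosity I decoded those trusted.afr values by hand. A hedged sketch, assuming the standard AFR layout of three big-endian 32-bit counters (pending data, metadata and entry operations); non-zero data/metadata counts held by both bricks against each other would mean a data/metadata split-brain:<br>

```shell
# Decode a trusted.afr.* value into its three counters.
# Assumption: standard AFR layout - 4 bytes data-pending,
# 4 bytes metadata-pending, 4 bytes entry-pending, big-endian.
decode_afr() {
  v=${1#0x}
  printf 'data=%d metadata=%d entry=%d\n' \
    "$((16#${v:0:8}))" "$((16#${v:8:8}))" "$((16#${v:16:8}))"
}

decode_afr 0x000000010000000100000000   # the value seen on .bash_profile.swp
```

For .bash_profile.swp that prints data=1 metadata=1 entry=0, and each brick holds such counts against the other copy.<br>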
<br>
<br>
and the log bit:<br>
<br>
GFID mismatch for
&lt;gfid:335bf026-68bd-4bf4-9cba-63b65b12c0b1&gt;/abbreviations.xlsx
6e9a7fa1-bfbe-4a59-ad06-a78ee1625649 on USER-HOME-client-6 and
773b7ea3-31cf-4b24-94f0-0b61b573b082 on USER-HOME-client-0<br>
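<br>
If it helps anyone reproduce the check: as I understand the backend layout (treat this as an assumption), each GFID in that log line maps to a hard link under the brick's .glusterfs directory, with the first two hex-byte pairs of the GFID as the two subdirectory levels:<br>

```shell
# Build the backend path of a GFID's hard link:
# <brick>/.glusterfs/aa/bb/<gfid>, where "aa" and "bb" are
# the first two hex-byte pairs of the GFID.
gfid_backend_path() {
  echo "$1/.glusterfs/${2:0:2}/${2:2:2}/$2"
}

gfid_backend_path /G-store/1 6e9a7fa1-bfbe-4a59-ad06-a78ee1625649
# /G-store/1/.glusterfs/6e/9a/6e9a7fa1-bfbe-4a59-ad06-a78ee1625649
```

Running getfattr -d -m . -e hex on that path on each brick should show which copy carries which GFID.<br>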
<br>
Most importantly, is there a workaround for the problem as of now, before
the bug, if it is one, gets fixed?<br>
b.w.<br>
L. <br>
<br>
-- end of paste<br>
<br>
But I have a few more files which also report I/O errors, and heal does
NOT even mention them.<br>
On the brick that acts as the "master" (Samba was sharing it to the users):<br>
<br>
# file: abbreviations.log<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a64656661756c745f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.bit-rot.version=0x0200000000000000589081fd00060376<br>
trusted.gfid=0x773b7ea331cf4b2494f00b61b573b082<br>
trusted.glusterfs.quota.335bf026-68bd-4bf4-9cba-63b65b12c0b1.contri.1=0x0000000000002a000000000000000001<br>
trusted.pgfid.335bf026-68bd-4bf4-9cba-63b65b12c0b1=0x00000001<br>
<br>
On the "slave" brick, which was not serving files (certainly not that
file) to any users:<br>
<br>
# file: abbreviations.log<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.bit-rot.version=0x0200000000000000588c958a000b67ea<br>
trusted.gfid=0x6e9a7fa1bfbe4a59ad06a78ee1625649<br>
trusted.glusterfs.quota.335bf026-68bd-4bf4-9cba-63b65b12c0b1.contri.1=0x0000000000002a000000000000000001<br>
trusted.pgfid.335bf026-68bd-4bf4-9cba-63b65b12c0b1=0x00000001<br>
<br>
A question that has probably been answered many times: is it OK to tamper
with (in my case, remove) files directly on the bricks?<br>
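<br>
For the record, the only workaround I have seen described for a GFID mismatch is to remove the bad copy and its .glusterfs hard link directly on the bad brick, then stat the file from a client mount so a fresh heal picks the surviving copy. That is my understanding only, so the sketch below is a dry run that merely echoes the commands, and the brick, file and mount paths in it are placeholders, not my real ones:<br>

```shell
# DRY RUN: echoes the commands instead of executing them.
# BRICK, FILE and MOUNT are placeholders (assumptions), not verified paths.
BRICK=/G-store/1                            # brick holding the bad copy
FILE=some/dir/abbreviations.xlsx            # path relative to the brick root
GFID=6e9a7fa1-bfbe-4a59-ad06-a78ee1625649   # bad-side GFID from the log
MOUNT=/mnt/USER-HOME                        # a client (FUSE) mount point

echo rm -f "$BRICK/$FILE"
echo rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo stat "$MOUNT/$FILE"   # a lookup from the mount re-triggers the heal
```

Only after verifying every path would I drop the echoes and run it for real.<br>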
many thanks,<br>
L.<br>
<br>
<br>
<blockquote
cite="mid:122323847.21858642.1486471814454.JavaMail.zimbra@redhat.com"
type="cite">
<pre wrap="">
regards,
nag pavan
----- Original Message -----
From: "lejeczek" <a class="moz-txt-link-rfc2396E" href="mailto:peljasz@yahoo.co.uk"><peljasz@yahoo.co.uk></a>
To: <a class="moz-txt-link-abbreviated" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>
Sent: Tuesday, 7 February, 2017 2:00:51 AM
Subject: [Gluster-users] Input/output error - would not heal
hi all
I'm hitting such problem:
$ gluster vol heal USER-HOME split-brain source-brick
10.5.6.100:/G-store/1
Healing gfid:8a5b6e4a-d18a-49d0-bae9-20c9cf8673a5
failed:Transport endpoint is not connected.
Status: Connected
Number of healed entries: 0
$ gluster vol heal USER-HOME split-brain source-brick
10.5.6.100:/G-store/1/that_file
Lookup failed on /that_file: Input/output error
Volume heal failed.
v3.9. It's a two-brick volume; it was three, but I removed one brick,
I think a few hours before the problem was first noticed.
what to do now?
many thanks,
L
_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
</blockquote>
<br>
</body>
</html>