[Gluster-users] Broken gluster ... Help required

HL freemail.grharry at gmail.com
Thu Jul 4 16:55:33 UTC 2013


On 04/07/2013 07:39 PM, Vijay Bellur wrote:
> On 07/04/2013 07:36 PM, HL wrote:
>> Hello list,
>>
>> I have a 2-node replicated GlusterFS volume:
>>
>> Volume Name: GLVol
>> Type: Replicate
>> Volume ID: 2106bc3d-d184-42db-ab6e-e547ff816113
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: node01:/export/fm_glust
>> Brick2: node02:/export/fe_glust
>> Options Reconfigured:
>> auth.allow: 172.16.35.*
>> geo-replication.indexing: on
>>
>>
>> After a cable failure I have an unstable system.
>>
>> gluster volume heal GLVol info split-brain
>>
>> gives about 1000 entries like these:
>>
>> 2013-07-04 16:46:10 <gfid:ce073ea9-a95a-4c5e-b2bd-7db1e26cbad7>
>> 2013-07-04 16:46:10 <gfid:0d7ff6b2-5ed1-4584-b0e3-9f0c723463b8>
>> 2013-07-04 16:46:10 /vz_data/mySQLDBS/var/lib/mysql/ib_logfile0
>>
>> I've found a script on the users list describing how to deal with
>> entries that have a known path, like
>> /vz_data/mySQLDBS/var/lib/mysql/ib_logfile0,
>>
>> but I don't know how to deal with the hex entries ... they look like
>> orphans to me ...
>
> The hex entries are "gfid"s used by GlusterFS to identify files; they 
> are not orphan objects. A gfid that cannot be immediately resolved to 
> a path by the server appears in its hex representation.
>
>>
>> Is it OK to delete them?
>
> Unless the files are in split-brain, it is not a good idea to delete 
> them. Letting the self-heal daemon heal these files would be a better 
> approach.
>

Thanks, Vijay.

I am not sure I follow. As I stated, the output of

gluster volume heal GLVol info split-brain

was:

2013-07-04 16:46:10 <gfid:ce073ea9-a95a-4c5e-b2bd-7db1e26cbad7>
2013-07-04 16:46:10 <gfid:0d7ff6b2-5ed1-4584-b0e3-9f0c723463b8>
2013-07-04 16:46:10 /vz_data/mySQLDBS/var/lib/mysql/ib_logfile0
[...]
2013-07-04 10:27:09 <gfid:dedeae59-7fa3-4605-ae08-4f0ab8ebecc5>
2013-07-04 10:27:09 <gfid:5c607c90-e2c9-4326-8233-a41cfdd280a9>
2013-07-04 10:27:09 <gfid:dd554366-bc7b-4503-a99d-0da47d480d16>
2013-07-04 10:27:09 <gfid:f4ef19e7-3534-4cdf-a848-6ad901a6d98d>
2013-07-04 10:27:09 <gfid:92e87f93-61ae-4f29-8c12-5041b4260904>
2013-07-04 10:27:09 <gfid:15ea3b21-f506-4310-bce5-ea34694cb56b>
2013-07-04 10:27:09 <gfid:3d689204-554a-4126-942f-26e55bd0539d>
2013-07-04 10:27:09 <gfid:fccb20fa-bd1b-4465-a792-99eecfdd0163>
2013-07-04 10:27:09 <gfid:d7957dc8-562e-4f77-bd7f-5d077754b53a>
2013-07-04 10:26:13 <gfid:345c412e-0c49-48a1-bf8a-f7af28556aa7>

So it seems to me that they are indeed in split-brain.
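
In case it is useful: as I understand it, each gfid corresponds (for
regular files) to a hard link under the brick's .glusterfs directory,
so a hex entry can usually be mapped back to a real path on the brick
itself. A minimal sketch in bash, run directly on node01, using the
first gfid from the output above:

# first gfid from the heal-info output
GFID=ce073ea9-a95a-4c5e-b2bd-7db1e26cbad7
BRICK=/export/fm_glust

# the gfid object is a hard link at <brick>/.glusterfs/<aa>/<bb>/<gfid>,
# where <aa> and <bb> are the first two character pairs of the gfid
ls -li $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

# print the real path(s) on the brick that share the same inode
find $BRICK -samefile $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID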

Other than that, how can I make sure that the self-heal daemon is
running and is indeed healing the files? (See also the commands I am
trying, after the status output below.)

gluster volume status all

shows:

Status of volume: GLVol
Gluster process                                 Port     Online   Pid
------------------------------------------------------------------------------
Brick node01:/export/fm_glust                   24009    Y        2614
Brick node02:/export/fe_glust                   24009    Y        2746
NFS Server on localhost                         38467    Y        2752
Self-heal Daemon on localhost                   N/A      Y        2758
NFS Server on node01                            38467    Y        2626
Self-heal Daemon on node01                      N/A      Y        2632
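
Both self-heal daemons show Online, so I assume the daemon itself is
fine. Would watching the pending-entry count be the right way to tell
whether it is actually making progress? A sketch of what I am running,
assuming the heal sub-commands available since glusterfs 3.3:

# re-trigger healing of everything that is pending
gluster volume heal GLVol

# rough count of entries still pending; re-run and see if it shrinks
gluster volume heal GLVol info | wc -l

# what has been healed so far, and what has failed
gluster volume heal GLVol info healed
gluster volume heal GLVol info heal-failed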


So: delete them or not? (For the known-path file, I sketch below what
I am planning to run.)

And if self-healing is making progress: my bricks are about 500 GB in
size, containing about 400K file entries, and the two bricks are
connected by a dedicated 1 Gbit NIC. How long should the heal take?
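
For the record, the fix I found on the list for the known-path file
boils down to the following. This is only a sketch of what I plan to
run; it assumes node02 holds the bad copy (something I would verify
first, e.g. with md5sum and stat on both bricks), and it operates on
the brick directly, never through a client mount:

# on node02, the brick whose copy is being discarded
BRICK=/export/fe_glust
FILE=vz_data/mySQLDBS/var/lib/mysql/ib_logfile0

# read the file's gfid from its extended attributes
getfattr -n trusted.gfid -e hex $BRICK/$FILE

# remove the file AND its gfid hard link under .glusterfs
# (for gfid aabbccdd-... the link is .glusterfs/aa/bb/aabbccdd-...)
rm $BRICK/$FILE
rm $BRICK/.glusterfs/<aa>/<bb>/<full-gfid>

# then stat the file through a client mount, or run
# "gluster volume heal GLVol", so the good copy on node01 is healed back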


Thanks again.
Regards.



