[Gluster-users] how to recover an accidentally deleted brick directory?
weiyuanke
weiyuanke123 at gmail.com
Thu Nov 28 10:13:03 UTC 2013
Thanks, Shwetha. You solved my problem!
---------------------------
韦远科
010 5881 3749
Computer Network Information Center, Chinese Academy of Sciences
On Nov 28, 2013, at 5:47 PM, shwetha <spandura at redhat.com> wrote:
> Since the "trusted.glusterfs.volume-id" extended attribute is not set on the recreated bricks, the volume start will fail.
>
> 1) Execute "getfattr -e hex -n trusted.glusterfs.volume-id /opt/gluster_data/eccp_glance" on any node where the brick process is still running.
>
> This prints the hex value of the "trusted.glusterfs.volume-id" extended attribute. Set this value on the newly created bricks (on the nodes where you deleted the brick directories):
>
> 2) setfattr -n trusted.glusterfs.volume-id -v <hex_value_of_the_trusted_glusterfs_volume-id> /opt/gluster_data/eccp_glance
>
> 3) gluster volume start <volume_name> force : to restart the brick process
>
> 4) gluster volume status <volume_name> : to check that all the brick processes have started
>
> 5) gluster volume heal <volume_name> full : to trigger self-heal onto the recreated bricks (a consolidated sketch of these commands follows)
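>
> For illustration, the whole sequence could look like the sketch below (the volume name eccp_glance and the brick path are taken from this thread; 0x<hex_value> stands for whatever step 1 prints on your cluster):
>
> # On a node whose brick is intact, read the volume-id xattr:
> getfattr -e hex -n trusted.glusterfs.volume-id /opt/gluster_data/eccp_glance
> # prints something like: trusted.glusterfs.volume-id=0x<hex_value>
>
> # On the node where the brick directory was deleted, recreate it
> # and stamp it with the same volume-id:
> mkdir -p /opt/gluster_data/eccp_glance
> setfattr -n trusted.glusterfs.volume-id -v 0x<hex_value> /opt/gluster_data/eccp_glance
>
> # From any node: restart the brick, verify it is online, trigger heal:
> gluster volume start eccp_glance force
> gluster volume status eccp_glance
> gluster volume heal eccp_glance full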
>
> Shwetha
> On 11/28/2013 02:59 PM, weiyuanke wrote:
>> hi Shwetha,
>>
>>
>> command "gluster volume start eccp_glance force" on the other node gives following:
>>
>>
>> with cli.log
>>
>>
>> On the damaged node, "gluster volume start eccp_glance force" gives:
>>
>> ---------------------------
>> 韦远科
>> 010 5881 3749
>> Computer Network Information Center, Chinese Academy of Sciences
>>
>>
>>
>>
>>
>> On Nov 28, 2013, at 4:50 PM, shwetha <spandura at redhat.com> wrote:
>>
>>> 1) Create the brick directory "/opt/gluster_data/eccp_glance" on the nodes where you deleted the directories.
>>>
>>> 2) From any of the storage nodes, execute:
>>> gluster volume start <volume_name> force : to restart the brick process
>>> gluster volume status <volume_name> : to check that all the brick processes have started
>>> gluster volume heal <volume_name> full : to trigger self-heal onto the removed bricks (see the sketch below)
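>>>
>>> For example, with the volume and brick path from this thread, that would be (a minimal sketch; run the mkdir on the node where the directory was deleted):
>>>
>>> mkdir -p /opt/gluster_data/eccp_glance
>>> gluster volume start eccp_glance force
>>> gluster volume status eccp_glance
>>> gluster volume heal eccp_glance full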
>>> -Shwetha
>>>
>>> On 11/28/2013 02:09 PM, 韦远科 wrote:
>>>> Hi all,
>>>>
>>>> I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2.
>>>>
>>>> Now there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this:
>>>> Gluster process                                     Port   Online  Pid
>>>> ------------------------------------------------------------------------
>>>> Brick 192.168.64.11:/opt/gluster_data/eccp_glance   N/A    Y       2513
>>>> Brick 192.168.64.12:/opt/gluster_data/eccp_glance   49161  Y       2542
>>>> Brick 192.168.64.17:/opt/gluster_data/eccp_glance   49164  Y       2537
>>>> Brick 192.168.64.18:/opt/gluster_data/eccp_glance   49154  Y       4978
>>>> Brick 192.168.64.29:/opt/gluster_data/eccp_glance   N/A    N       N/A
>>>> Brick 192.168.64.30:/opt/gluster_data/eccp_glance   49154  Y       4072
>>>> Brick 192.168.64.25:/opt/gluster_data/eccp_glance   49155  Y       11975
>>>> Brick 192.168.64.26:/opt/gluster_data/eccp_glance   49155  Y       17947
>>>> Brick 192.168.64.13:/opt/gluster_data/eccp_glance   49154  Y       26045
>>>> Brick 192.168.64.14:/opt/gluster_data/eccp_glance   49154  Y       22143
>>>>
>>>>
>>>> So, is there a way to bring this brick back to normal?
>>>>
>>>> Thanks!
>>>>
>>>>
>>>> -----------------------------------------------------------------
>>>> 韦远科
>>>> Computer Network Information Center, Chinese Academy of Sciences
>>>>
>>>>
>>>>
>>>
>>>
>>
>
>