[Gluster-users] Re: brick becomes offline

Atin Mukherjee amukherj at redhat.com
Wed Jun 17 10:34:55 UTC 2015


Downgrade is not recommended and this is not a bug.
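
That said, if your pool is already stuck in this state, the brick log below shows what is going wrong: the volfiles written while the nodes ran 3.7.x still reference the features/trash translator, and the 3.6.2 binaries do not ship trash.so. What follows is only a diagnostic/recovery sketch, not a supported downgrade procedure; the volfile path is assembled from the --volfile-id shown in the brick log and may differ on your nodes, and regenerating volfiles with a downgraded glusterd can still fail depending on the stored op-version.

    # Confirm the brick volfile still references the trash translator
    # (filename taken from the --volfile-id in the brick log; adjust per node/brick)
    grep -n "features/trash" /var/lib/glusterd/vols/vol01/vol01.gwgfs01.data-brick1-vol01.vol

    # If it does, one unsupported recovery attempt is to regenerate the volfiles
    # with the currently installed (3.6.2) glusterd and respawn the bricks.
    # Run on every node:
    service glusterd stop
    glusterd --xlator-option '*.upgrade=on' -N    # rewrite volfiles, then exit
    service glusterd start
    gluster volume start vol01 force              # restart the offline brick processes

If the regenerated volfiles still contain the vol01-trash entry, moving forward to a release that ships the trash translator is a cleaner path than staying on the downgrade.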

On 06/17/2015 02:28 PM, 何亦军 wrote:
> Yes, you are right.
> So, is there anything I can do? My GlusterFS pool is stuck in this state.
> 
> Will upgrading to 3.7.2 fix my issue?
> 
> -----Original Message-----
> From: Atin Mukherjee [mailto:amukherj at redhat.com] 
> Sent: June 17, 2015 16:43
> To: 何亦军; gluster-users at gluster.org; Anoop Chirayath Manjiyil Sajan
> Subject: Re: [Gluster-users] brick becomes offline
> 
> 
> 
> On 06/17/2015 01:53 PM, 何亦军 wrote:
>> Hi Guys,
>>
>>          I upgraded Gluster to 3.7.1 and then downgraded back to 3.6.2. Something went wrong, and now the server bricks are offline. How can I fix this problem? Thanks so much.
>>
>>     [root at gwgfs01 ~]# gluster volume status
>> Status of volume: vol01
>> Gluster process                                         Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gwgfs01:/data/brick1/vol01                        N/A     N       N/A
>> Brick gwgfs03:/data/brick2/vol01                        49152   Y       17566
>> Brick gwgfs01:/data/brick2/vol01                        N/A     N       N/A
>> Brick gwgfs02:/data/brick2/vol01                        49152   Y       4109
>> Brick gwgfs02:/data/brick1/vol01                        49153   Y       4121
>> Brick gwgfs03:/data/brick1/vol01                        49153   Y       17623
>> Self-heal Daemon on localhost                           N/A     Y       12720
>> Quota Daemon on localhost                               N/A     Y       12727
>> Self-heal Daemon on gwgfs02                             N/A     Y       4412
>> Quota Daemon on gwgfs02                                 N/A     Y       4422
>> Self-heal Daemon on gwgfs03                             N/A     Y       17642
>> Quota Daemon on gwgfs03                                 N/A     Y       17652
>>
>> Task Status of Volume vol01
>> ------------------------------------------------------------------------------
>> Task                 : Rebalance
>> ID                   : 0bb30902-e7d3-4b2f-9c83-b708ebbad592
>> Status               : failed
>>
>>
>> some log in data-brick1-vol01.log :
>>
>> [2015-06-15 02:06:54.757968] I [MSGID: 100030] 
>> [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfsd: Started running 
>> /usr/sbin/glusterfsd version 3.6.2 (args: /usr/sbin/glusterfsd -s 
>> gwgfs01 --volfile-id vol01.gwgfs01.data-brick1-vol01 -p 
>> /var/lib/glusterd/vols/vol01/run/gwgfs01-data-brick1-vol01.pid -S 
>> /var/run/ecf2e5c591c01357cf33cbaf3b700bc6.socket --brick-name 
>> /data/brick1/vol01 -l /var/log/glusterfs/bricks/data-brick1-vol01.log 
>> --xlator-option 
>> *-posix.glusterd-uuid=b80f71d0-6944-4236-af96-e272a1f7e739 
>> --brick-port 49152 --xlator-option vol01-server.listen-port=49152)
>> [2015-06-15 02:06:56.808733] W [xlator.c:191:xlator_dynload] 
>> 0-xlator: /usr/lib64/glusterfs/3.6.2/xlator/features/trash.so: cannot 
>> open shared object file: No such file or directory
> 3.6.2 has no trash translator in its server stack, whereas 3.7 does. Hence, after you downgraded, the brick volfile still expects that shared object to be present.
> 
> Anoop, please correct me if I am wrong.
> 
> ~Atin
>> [2015-06-15 02:06:56.808793] E [graph.y:212:volume_type] 0-parser: 
>> Volume 'vol01-trash', line 9: type 'features/trash' is not valid or 
>> not found on this machine
>> [2015-06-15 02:06:56.808896] E [graph.y:321:volume_end] 0-parser: 
>> "type" not specified for volume vol01-trash
>> [2015-06-15 02:06:56.809044] E [MSGID: 100026] 
>> [glusterfsd.c:1892:glusterfs_process_volfp] 0-: failed to construct 
>> the graph
>> [2015-06-15 02:06:56.809369] W [glusterfsd.c:1194:cleanup_and_exit] 
>> (--> 0-: received signum (0), shutting down
> 
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> 
> --
> ~Atin
> 

-- 
~Atin

