[Gluster-users] Re: bricks go offline

何亦军 heyijun at greatwall.com.cn
Mon Jun 15 07:16:55 UTC 2015


Hi M S Vishwanath Bhat,

Thanks,

I updated both node servers to 3.7.1 with yum update and then downgraded back to 3.6.2 with yum downgrade, one server at a time. I did not change any configuration by hand.
After that, one server works fine and the other has this problem. (gwgfs03 is a newly installed server, set up to replace the crashed one.)

BTW, my downgrade command:

yum downgrade glusterfs-server-3.6.2-1.el7.x86_64 glusterfs-libs-3.6.2-1.el7.x86_64  glusterfs-cli-3.6.2-1.el7.x86_64  glusterfs-3.6.2-1.el7.x86_64 glusterfs-fuse-3.6.2-1.el7.x86_64 glusterfs-3.6.2-1.el7.x86_64 glusterfs-api-3.6.2-1.el7.x86_64 glusterfs-rdma-3.6.2-1.el7.x86_64
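(Editor's note, not from the thread: after a downgrade like this it is worth confirming that every installed glusterfs package actually landed on 3.6.2, since a mixed set of package versions can leave 3.7-era files behind. On the real nodes you would feed the check with `rpm -qa 'glusterfs*' --qf '%{NAME} %{VERSION}\n'`; a fabricated sample list stands in below so the sketch runs anywhere.)

```shell
# Sanity check after a downgrade: flag any glusterfs package that is not
# on the target version. The package list here is a made-up sample; on a
# real node pipe in `rpm -qa 'glusterfs*' --qf '%{NAME} %{VERSION}\n'`.
printf '%s\n' \
    'glusterfs 3.6.2' \
    'glusterfs-server 3.6.2' \
    'glusterfs-api 3.7.1' |
awk '$2 != "3.6.2" { print "still on", $2 ":", $1; bad = 1 }
     END { exit bad }' || echo "mixed versions detected"
```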


Best regards.

From: M S Vishwanath Bhat [mailto:msvbhat at gmail.com]
Sent: 15 June 2015 15:02
To: 何亦军
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] bricks go offline



On 15 June 2015 at 07:38, 何亦军 <heyijun at greatwall.com.cn> wrote:
Hi,

I upgraded gluster to 3.7.1 and then downgraded back to 3.6.2. I don't know what went wrong, but the server's bricks have gone offline. How can I fix this problem? Thanks so much.

    [root at gwgfs01 ~]# gluster volume status
Status of volume: vol01
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gwgfs01:/data/brick1/vol01                        N/A     N       N/A
Brick gwgfs03:/data/brick2/vol01                        49152   Y       17566
Brick gwgfs01:/data/brick2/vol01                        N/A     N       N/A
Brick gwgfs02:/data/brick2/vol01                        49152   Y       4109
Brick gwgfs02:/data/brick1/vol01                        49153   Y       4121
Brick gwgfs03:/data/brick1/vol01                        49153   Y       17623
Self-heal Daemon on localhost                           N/A     Y       12720
Quota Daemon on localhost                               N/A     Y       12727
Self-heal Daemon on gwgfs02                             N/A     Y       4412
Quota Daemon on gwgfs02                                 N/A     Y       4422
Self-heal Daemon on gwgfs03                             N/A     Y       17642
Quota Daemon on gwgfs03                                 N/A     Y       17652

Task Status of Volume vol01
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 0bb30902-e7d3-4b2f-9c83-b708ebbad592
Status               : failed


Some log entries from data-brick1-vol01.log:

[2015-06-15 02:06:54.757968] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.6.2 (args: /usr/sbin/glusterfsd -s gwgfs01 --volfile-id vol01.gwgfs01.data-brick1-vol01 -p /var/lib/glusterd/vols/vol01/run/gwgfs01-data-brick1-vol01.pid -S /var/run/ecf2e5c591c01357cf33cbaf3b700bc6.socket --brick-name /data/brick1/vol01 -l /var/log/glusterfs/bricks/data-brick1-vol01.log --xlator-option *-posix.glusterd-uuid=b80f71d0-6944-4236-af96-e272a1f7e739 --brick-port 49152 --xlator-option vol01-server.listen-port=49152)
[2015-06-15 02:06:56.808733] W [xlator.c:191:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.6.2/xlator/features/trash.so: cannot open shared object file: No such file or directory
[2015-06-15 02:06:56.808793] E [graph.y:212:volume_type] 0-parser: Volume 'vol01-trash', line 9: type 'features/trash' is not valid or not found on this machine
[2015-06-15 02:06:56.808896] E [graph.y:321:volume_end] 0-parser: "type" not specified for volume vol01-trash
[2015-06-15 02:06:56.809044] E [MSGID: 100026] [glusterfsd.c:1892:glusterfs_process_volfp] 0-: failed to construct the graph
[2015-06-15 02:06:56.809369] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down

I'm not an expert at analysing these logs, but it looks like the trash xlator (which exists only in 3.7.x) is causing problems, because that feature is unavailable in 3.6.x. Maybe it's picking up the wrong volfile?
Also, did you downgrade the whole setup or just part of the cluster?
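(Editor's note: the wrong-volfile guess can be checked directly. If the 3.7.1 run rewrote the brick volfiles, they will still declare a features/trash volume after the downgrade, which matches the graph.y:212 parser error above. On a real node the files live under /var/lib/glusterd/vols/<volname>/*.vol; the sample volfile snippet below is fabricated from that error so the check is self-contained.)

```shell
# Look for the 3.7-only trash xlator in a brick volfile. A hit means the
# volfile was generated by 3.7 and 3.6.2's parser will reject it, as in
# the log above. The snippet written here is a stand-in for a real
# /var/lib/glusterd/vols/vol01/*.vol file.
volfile=$(mktemp)
cat > "$volfile" <<'EOF'
volume vol01-trash
    type features/trash
    subvolumes vol01-posix
end-volume
EOF
if grep -q 'features/trash' "$volfile"; then
    echo "stale 3.7 volfile: trash xlator still referenced"
fi
rm -f "$volfile"
```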
Cheers,
Vishwanath


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
