[Bugs] [Bug 1180015] reboot node with some glusterd glusterfsd glusterfs services.

bugzilla at redhat.com
Thu Jan 15 04:36:50 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1180015

zhangyongsheng <helloparadise at 163.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(helloparadise at 163.com) |



--- Comment #4 from zhangyongsheng <helloparadise at 163.com> ---
Yesterday I wrote a shell script that automatically reboots the node every 30
seconds, and I built glusterfs from the glusterfs-3.6.1.tar.gz source
downloaded from www.gluster.org. The troubles described above still show up
now and then. "service glusterd start" and "gluster volume start vol_name"
are executed automatically by a shell script after the node starts up. When I
reboot node2, node1 runs into these troubles.
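
For reference, the boot-time part looks roughly like this (a minimal sketch,
not the actual script, which is not attached here; the volume name is taken
from "gluster volume info" below):

    #!/bin/sh
    # Sketch of the reboot test described above; assumed to run on
    # every boot (e.g. from rc.local).
    service glusterd start
    gluster volume start test    # "vol_name" above; the volume here is "test"
    sleep 30                     # reboot roughly every 30 seconds
    reboot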

log info:

[2015-01-14 23:02:11.548016] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2015-01-15 00:02:28.372193] I [graph.c:269:gf_add_cmdline_options] 0-test-server: adding option 'listen-port' for volume 'test-server' with value '49153'
[2015-01-15 00:02:28.372233] I [graph.c:269:gf_add_cmdline_options] 0-test-posix: adding option 'glusterd-uuid' for volume 'test-posix' with value 'ef2abf61-6d0e-4edb-af17-41fe991e6419'
[2015-01-15 00:02:28.377067] I [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2015-01-15 00:02:28.377138] W [options.c:898:xl_opt_validate] 0-test-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2015-01-15 00:02:28.377457] W [socket.c:3599:reconfigure] 0-test-quota: NBIO on -1 failed (Bad file descriptor)
[2015-01-15 00:02:28.400184] E [posix.c:5604:init] 0-test-posix: Extended attribute trusted.glusterfs.volume-id is absent
[2015-01-15 00:02:28.400267] E [xlator.c:425:xlator_init] 0-test-posix: Initialization of volume 'test-posix' failed, review your volfile again
[2015-01-15 00:02:28.400285] E [graph.c:322:glusterfs_graph_init] 0-test-posix: initializing translator failed
[2015-01-15 00:02:28.400295] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2015-01-15 00:02:28.400687] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2015-01-15 00:02:28.606915] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.6.1 (args: /usr/sbin/glusterfsd -s node-1.aaa.bbb.ccc --volfile-id test.node-1.aaa.bbb.ccc.glusterfs-wwn-0x6000c29141020d82685aaf79ffd0a888 -p /var/lib/digioceand/vols/test/run/node-1.aaa.bbb.ccc-glusterfs-wwn-0x6000c29141020d82685aaf79ffd0a888.pid -S /var/run/85bd37fb24cafe6902159834b173c220.socket --brick-name /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888 -l /var/log/glusterfs/bricks/glusterfs-wwn-0x6000c29141020d82685aaf79ffd0a888.log --xlator-option *-posix.glusterd-uuid=ef2abf61-6d0e-4edb-af17-41fe991e6419 --brick-port 49153 --xlator-option test-server.listen-port=49153)
=============================================

[root@node-1 ~]# gluster volume info

Volume Name: test
Type: Disperse
Volume ID: e67489f2-8019-4e8e-927d-aa103f8d4502
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node-3.aaa.bbb.ccc:/glusterfs/wwn-0x6000c299dcd6abf74489faac4a2c0afe
Brick2: node-1.aaa.bbb.ccc:/glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888
Brick3: node-2.aaa.bbb.ccc:/glusterfs/wwn-0x6000c29a2efdcab8ebe89f9298246f79
Options Reconfigured:
features.quota: on
performance.high-prio-threads: 64
performance.low-prio-threads: 64
performance.least-prio-threads: 64
performance.normal-prio-threads: 64
performance.io-thread-count: 64
server.allow-insecure: on
features.lock-heal: on
network.ping-timeout: 5
performance.client-io-threads: enable
================================================
Copy of /etc/fstab:


# /etc/fstab
# Created by anaconda on Fri Dec 26 10:06:51 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup00-lv_root /                       ext4    defaults        1 1
UUID=ed36c3b0-b5a6-4b06-bd5a-d1d116251262 /boot                   ext4    defaults        1 2
UUID=9b7e072d-4222-45c6-84c4-f8001d537224 /home                   ext4    defaults        1 2
/dev/mapper/VolGroup00-lv_swap swap                    swap    defaults        0 0
UUID=26a4d133-57a4-4408-a438-978a5cfba248 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/disk/by-id/wwn-0x6000c29141020d82685aaf79ffd0a888 /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888 xfs defaults 0 0
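
Since "volume-id is absent" is also what the posix translator reports when the
brick directory exists but the filesystem is not yet mounted over it, one
thing worth checking is whether this by-id mount is really up before glusterd
starts. A sketch, using the paths from the fstab line above:

    # Verify the brick filesystem is mounted before starting gluster
    # services (sketch; the BRICK path comes from the fstab entry above).
    BRICK=/glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888
    if mountpoint -q "$BRICK"; then
        service glusterd start
    else
        echo "$BRICK not mounted yet, not starting glusterd" >&2
    fi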
==================================================================
[root@node-1 ~]# df -hT
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-lv_root
              ext4    9.0G  1.4G  7.2G  17% /
tmpfs        tmpfs    2.0G     0  2.0G   0% /dev/shm
/dev/sda1     ext4     97M   42M   51M  46% /boot
/dev/sda2     ext4    4.9G  140M  4.5G   3% /home
/dev/sdb       xfs   1014M   33M  982M   4% /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888
====================================================
[root@node-1 ~]# getfattr -d -m ".*" -e hex /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888
[root@node-1 ~]#
                    (no output at all: the brick directory carries no
                    extended attributes)
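
If the brick filesystem itself is intact, the missing attribute can in
principle be put back by hand from the Volume ID that "gluster volume info"
prints above (e67489f2-8019-4e8e-927d-aa103f8d4502, with the dashes
stripped). A sketch, not a recommendation for this particular setup:

    # Check for the one xattr the posix translator requires:
    getfattr -n trusted.glusterfs.volume-id -e hex \
        /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888

    # Restore it from the volume UUID if it is genuinely gone:
    setfattr -n trusted.glusterfs.volume-id \
        -v 0xe67489f280194e8e927daa103f8d4502 \
        /glusterfs/wwn-0x6000c29141020d82685aaf79ffd0a888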


=================================================
   If you need any more information about this issue, I will provide it as
   soon as possible. Thanks for your reply.

-- 
You are receiving this mail because:
You are on the CC list for the bug.

