[Bugs] [Bug 1366813] New: Second gluster volume is offline after daemon restart or server reboot
bugzilla at redhat.com
Sat Aug 13 00:51:50 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1366813
Bug ID: 1366813
Summary: Second gluster volume is offline after daemon restart
or server reboot
Product: GlusterFS
Version: 3.8.2
Component: replicate
Assignee: bugs at gluster.org
Reporter: fua82-redhat at yahoo.de
CC: bugs at gluster.org
Created attachment 1190594
--> https://bugzilla.redhat.com/attachment.cgi?id=1190594&action=edit
glustershd.log - VolumeB offline and no PID
Description of problem:
When using two volumes, only the first one comes online and receives a PID
after a glusterfs daemon restart or a server reboot. Tested with replicated
volumes only.
Version-Release number of selected component (if applicable):
Debian Jessie, GlusterFS 3.8.2
How reproducible:
Every time.
Steps to Reproduce:
1. Create replicated volumes VolumeA and VolumeB, whose bricks are on Node1 and
Node2.
2. Start both volumes.
3. Restart glusterfs-server.service on Node2 or reboot Node2.
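For reference, the steps above roughly map to the following commands. This is
only a sketch based on the names in this report: the brick paths and the
service name are taken from above, while "replica 2" is an assumption (the
report only says the volumes are replicated across Node1 and Node2).
  # On one of the nodes: create and start both volumes
  gluster volume create VolumeA replica 2 node1:/glusterfs/VolumeA node2:/glusterfs/VolumeA
  gluster volume create VolumeB replica 2 node1:/glusterfs/VolumeB node2:/glusterfs/VolumeB
  gluster volume start VolumeA
  gluster volume start VolumeB
  # On Node2: restart the daemon, or reboot the node
  systemctl restart glusterfs-server.service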
Actual results:
VolumeA is fine, but VolumeB is offline and does not get a PID on Node2.
Expected results:
Both VolumeA and VolumeB are online, each with a PID.
Additional info:
A "gluster volume start VolumeB force" fixes it.
When Volume A is stopped and you retest it by rebooting Node2 again, Volume B
works as expected (online and with PID).
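For completeness, the workaround and a quick check that the brick is back
online (standard gluster CLI, nothing beyond what is described above):
  gluster volume start VolumeB force
  gluster volume status VolumeB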
Logfiles are attached.
Status output of node2 after the reboot:
Status of volume: VolumeA
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeA              49155     0          Y       1859
Brick node2:/glusterfs/VolumeA              49153     0          Y       1747
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeA
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VolumeB
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeB              49154     0          Y       1973
Brick node2:/glusterfs/VolumeB              N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeB
------------------------------------------------------------------------------
There are no active volume tasks