[Bugs] [Bug 1367478] New: Second gluster volume is offline after daemon restart or server reboot
bugzilla at redhat.com
Tue Aug 16 13:56:33 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1367478
Bug ID: 1367478
Summary: Second gluster volume is offline after daemon restart or server reboot
Product: GlusterFS
Version: mainline
Component: glusterd
Assignee: bugs at gluster.org
Reporter: sbairagy at redhat.com
CC: amukherj at redhat.com, bugs at gluster.org, fua82-redhat at yahoo.de, sbairagy at redhat.com
Depends On: 1366813
+++ This bug was initially created as a clone of Bug #1366813 +++
Description of problem:
When using two volumes, only the first one comes online and receives a PID after
a glusterfs daemon restart or a server reboot. Tested with replicated volumes
only.
Version-Release number of selected component (if applicable):
Debian Jessie, GlusterFS 3.8.2
How reproducible:
Every time.
Steps to Reproduce:
1. Create replicated volumes VolumeA and VolumeB, whose bricks are on Node1 and
Node2.
2. Start both volumes.
3. Restart glusterfs-server.service on Node2 or reboot Node2.
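For reference, the steps above could look roughly like the following on the
shell. This is only a sketch: the replica-2 layout and the /glusterfs brick
paths are assumptions taken from the status output further below, and node1 and
node2 are assumed to be in the same trusted pool already.

# on either node, create and start both replicated volumes
gluster volume create VolumeA replica 2 node1:/glusterfs/VolumeA node2:/glusterfs/VolumeA
gluster volume create VolumeB replica 2 node1:/glusterfs/VolumeB node2:/glusterfs/VolumeB
gluster volume start VolumeA
gluster volume start VolumeB

# on Node2 (Debian Jessie ships the glusterfs-server unit)
systemctl restart glusterfs-server.service   # or reboot the node

# once glusterd is back up, check the brick processes
gluster volume status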
Actual results:
Volume A is fine but Volume B is offline and does not get a PID on Node2.
Expected results:
Volumes A and B are online with a PID.
Additional info:
A "gluster volume start VolumeB force" fixes it.
When Volume A is stopped and you retest it by rebooting Node2 again, Volume B
works as expected (online and with PID).
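As a sketch, the workaround and the follow-up check on Node2 would be:

# restart the missing brick process without touching the running bricks
gluster volume start VolumeB force

# the brick on node2 should now report Online "Y" and a PID
gluster volume status VolumeB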
Logfiles are attached.
Status output of node2 after the reboot:
Status of volume: VolumeA
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeA              49155     0          Y       1859
Brick node2:/glusterfs/VolumeA              49153     0          Y       1747
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeA
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VolumeB
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeB              49154     0          Y       1973
Brick node2:/glusterfs/VolumeB              N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeB
------------------------------------------------------------------------------
There are no active volume tasks
--- Additional comment from Daniel on 2016-08-12 20:52 EDT ---
--- Additional comment from Atin Mukherjee on 2016-08-16 00:35:23 EDT ---
Thank you for reporting this issue. It's a regression caused by
http://review.gluster.org/14758, which was backported to 3.8.2. We will work
on fixing it in 3.8.3. Keep testing :)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1366813
[Bug 1366813] Second gluster volume is offline after daemon restart or
server reboot