[Bugs] [Bug 1161893] volume no longer available after update to 3.6.1

bugzilla at redhat.com
Tue Nov 11 13:09:51 UTC 2014


https://bugzilla.redhat.com/show_bug.cgi?id=1161893



--- Comment #10 from Mauro M. <mm13 at ezplanet.net> ---
I have now re-installed 3.6.1 for the purpose of providing more information.
Here are the steps I followed:

1) removed all 3.5.2 packages, removed /var/lib/glusterd and /var/log/glusterfs
to prepare for a fresh install
2) installed the 3.6.1 packages:
glusterfs-3.6.1-1.el6.x86_64
glusterfs-api-3.6.1-1.el6.x86_64
glusterfs-cli-3.6.1-1.el6.x86_64
glusterfs-fuse-3.6.1-1.el6.x86_64
glusterfs-geo-replication-3.6.1-1.el6.x86_64
glusterfs-libs-3.6.1-1.el6.x86_64
glusterfs-rdma-3.6.1-1.el6.x86_64
glusterfs-server-3.6.1-1.el6.x86_64

and started the daemon using:
service glusterd start
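
(For reference, the exact commands for steps 1 and 2 were roughly the
following; the yum invocations here are reconstructed from memory:)

yum remove 'glusterfs*'                        [step 1: remove all 3.5.2 packages]
rm -rf /var/lib/glusterd /var/log/glusterfs    [step 1: wipe state and logs]
yum install glusterfs-server glusterfs-fuse glusterfs-geo-replication glusterfs-rdma    [step 2: pulls in libs/api/cli as dependencies]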

3) created a new directory /brick1/gv1 on both physical nodes (node1 and node2)
4) gluster volume create gv1 replica 2 node1:/brick1/gv1 node2:/brick1/gv1
5) gluster volume start gv1
6) gluster volume set gv1 nfs.disable on
7) mkdir /mnt/gv1
8) mount -t glusterfs node1:/gv1 /mnt/gv1 (on both nodes)
9) copied some data into /mnt/gv1; replication works on both nodes
10) now shut down the glusterfs services on both nodes (this is quicker than a
full reboot, as in comment #2, and has the same effect):
umount /mnt/gv1
service glusterfsd stop [I do not know how this got started, I did not start it]
service glusterd stop
killed the last remaining glusterfs process that did not want to die (see below)
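
(For that last step: I located the surviving glusterfs process and killed it
by hand, roughly like this; <pid> stands for whatever ps reported, I did not
record the actual pid:)

ps -ef | grep gluster
kill -9 <pid>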

11) on node1 only:
# service glusterd start  [OK]
# gluster volume info
Volume Name: gv1
Type: Replicate
Volume ID: 5de2ebc7-b4d6-44c7-8137-211caa286e87
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/brick1/gv1
Brick2: node2:/brick1/gv1
Options Reconfigured:
nfs.disable: on
# gluster volume status
Status of volume: gv1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick sirius:/brick1/gv1                        N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume start gv1 
volume start: gv1: failed: Volume gv1 already started
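
(The status output above shows both the brick and the self-heal daemon
offline, with no port and no pid: glusterd believes the volume is started but
never respawned the brick process. Assuming the default log locations, these
are the files I would check and can attach:)

less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log    [glusterd log]
less /var/log/glusterfs/bricks/brick1-gv1.log             [brick log; name derived from the brick path]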


At this point the volume is not mountable. The same happens when I reboot,
except that it then takes several minutes for glusterd to start. I will now
revert to 3.5.2, as I have only these servers. I will keep the logs; please
let me know which ones you want and how to get them to you. Thank you.
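
(If a full log bundle is easier, I can tar up the whole log directory on each
node, e.g. as below; the archive name is just an example:)

tar czf node1-glusterfs-logs.tar.gz /var/log/glusterfs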
