[Bugs] [Bug 1161893] volume no longer available after update to 3.6.1

bugzilla at redhat.com
Mon Nov 10 12:06:01 UTC 2014


https://bugzilla.redhat.com/show_bug.cgi?id=1161893



--- Comment #7 from Lalatendu Mohanty <lmohanty at redhat.com> ---
We tried to reproduce this issue as described below, but could not.

However, the recommended upgrade procedure is the one documented at
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
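For reference, a rough sketch of what a documented offline, per-server upgrade typically looks like; the authoritative steps are on the Upgrade_to_3.6 page linked above, and the service names and commands below are assumptions for a RHEL 6 style host like the ones in this transcript:

```shell
# Sketch only -- consult the Upgrade_to_3.6 page for the real procedure.
# Upgrade one server at a time so the replica peer keeps serving clients.
service glusterd stop        # stop the management daemon
pkill glusterfs              # stop NFS and self-heal daemon processes
pkill glusterfsd             # stop the brick processes
yum update glusterfs         # pull in the 3.6.1 packages
service glusterd start       # restart; brick and helper daemons respawn
```

After the restart, self-heal brings the upgraded brick back in sync with its replica partner before the second server is upgraded.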

Server 1 : 3.5.2 rhsauto046.lab.eng.blr.redhat.com

Server 2 : 3.5.2 rhsauto057.lab.eng.blr.redhat.com


-- Server 1 -- 

[root at rhsauto046 yum.repos.d]# ps aux | grep gluster
root      5879  0.0  0.5 420208 20580 ?        Ssl  15:51   0:00
/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
root      5944  0.0  0.5 649980 21612 ?        Ssl  16:31   0:00
/usr/sbin/glusterfsd -s rhsauto046.lab.eng.blr.redhat.com --volfile-id
gv0.rhsauto046.lab.eng.blr.redhat.com.bricks-gv0 -p
/var/lib/glusterd/vols/gv0/run/rhsauto046.lab.eng.blr.redhat.com-bricks-gv0.pid
-S /var/run/ddf23bad40708c9856f322f3de0004ae.socket --brick-name /bricks/gv0 -l
/var/log/glusterfs/bricks/bricks-gv0.log --xlator-option
*-posix.glusterd-uuid=780164d4-15d1-4422-a4ff-9dc7483bbd27 --brick-port 49152
--xlator-option gv0-server.listen-port=49152
root      5958  0.0  1.2 317932 49668 ?        Ssl  16:31   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
/var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
/var/run/5df21af9684cc4c5cdc8d281c4c0dcde.socket
root      5962  0.0  0.6 335400 27448 ?        Ssl  16:31   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/8255b1d2da6f936c5950eb2747fe20e7.socket --xlator-option
*replicate*.node-uuid=780164d4-15d1-4422-a4ff-9dc7483bbd27
root      6046  0.0  0.0 103252   804 pts/1    S+   17:05   0:00 grep gluster



[root at rhsauto046 yum.repos.d]# gluster v i

Volume Name: gv0
Type: Replicate
Volume ID: 48567fcf-7b41-4906-bd92-a3c52bb2a135
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhsauto057.lab.eng.blr.redhat.com:/bricks/gv0
Brick2: rhsauto046.lab.eng.blr.redhat.com:/bricks/gv0



--/snip--

Did an online upgrade (# yum update glusterfs) to 3.6.1 rpms. After the upgrade:

--/snip--

[root at rhsauto046 yum.repos.d]# rpm -qa | grep gluster
glusterfs-api-3.6.1-1.el6.x86_64
glusterfs-libs-3.6.1-1.el6.x86_64
glusterfs-cli-3.6.1-1.el6.x86_64
glusterfs-3.6.1-1.el6.x86_64
glusterfs-server-3.6.1-1.el6.x86_64
glusterfs-fuse-3.6.1-1.el6.x86_64


[root at rhsauto046 yum.repos.d]# ps aux | grep gluster
root      6145  4.1  0.4 440076 16628 ?        Ssl  17:06   0:00
/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
root      6158  0.2  0.5 609700 21400 ?        Ssl  17:06   0:00
/usr/sbin/glusterfsd -s rhsauto046.lab.eng.blr.redhat.com --volfile-id
gv0.rhsauto046.lab.eng.blr.redhat.com.bricks-gv0 -p
/var/lib/glusterd/vols/gv0/run/rhsauto046.lab.eng.blr.redhat.com-bricks-gv0.pid
-S /var/run/ddf23bad40708c9856f322f3de0004ae.socket --brick-name /bricks/gv0 -l
/var/log/glusterfs/bricks/bricks-gv0.log --xlator-option
*-posix.glusterd-uuid=780164d4-15d1-4422-a4ff-9dc7483bbd27 --brick-port 49152
--xlator-option gv0-server.listen-port=49152
root      6169  1.0  1.3 408532 53588 ?        Ssl  17:06   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
/var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
/var/run/5df21af9684cc4c5cdc8d281c4c0dcde.socket
root      6176  1.0  0.4 442376 19304 ?        Ssl  17:06   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/8255b1d2da6f936c5950eb2747fe20e7.socket --xlator-option
*replicate*.node-uuid=780164d4-15d1-4422-a4ff-9dc7483bbd27
root      6193  0.0  0.0 103252   808 pts/1    S+   17:06   0:00 grep gluster



[root at rhsauto046 yum.repos.d]# gluster v info

Volume Name: gv0
Type: Replicate
Volume ID: 48567fcf-7b41-4906-bd92-a3c52bb2a135
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhsauto057.lab.eng.blr.redhat.com:/bricks/gv0
Brick2: rhsauto046.lab.eng.blr.redhat.com:/bricks/gv0

--/snip--


The volume remained mountable and accessible from the client, and "gluster
volume info" still showed the volume as "Started" after the online upgrade.
We repeated this on both servers.
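The checks described above can be sketched as follows; the mount point and test file name are hypothetical, and the hostname/volume name come from the transcript:

```shell
# Sketch of client-side verification after the upgrade (assumed commands).
# Mount the replicated volume gv0 from one of the upgraded servers:
mount -t glusterfs rhsauto046.lab.eng.blr.redhat.com:/gv0 /mnt/gv0

# Basic read/write access check (file name is illustrative):
touch /mnt/gv0/post-upgrade-check
ls -l /mnt/gv0

# On a server node, confirm the volume state is still "Started":
gluster volume info gv0 | grep Status
```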

Please note that the proper upgrade procedure is the one documented in the
page linked earlier.
