[Gluster-users] Gluster 3.7.0 released

Atin Mukherjee amukherj at redhat.com
Wed May 27 17:17:35 UTC 2015



On 05/27/2015 07:33 PM, Ted Miller wrote:
> responses below
> Ted Miller
> 
> On 5/26/2015 12:01 AM, Atin Mukherjee wrote:
>>
>> On 05/26/2015 03:12 AM, Ted Miller wrote:
>>> From: Niels de Vos <ndevos at redhat.com>
>>> Sent: Monday, May 25, 2015 4:44 PM
>>>
>>> On Mon, May 25, 2015 at 06:49:26PM +0000, Ted Miller wrote:
>>>> ________________________________
>>>> From: Humble Devassy Chirammal <humble.devassy at gmail.com>
>>>> Sent: Monday, May 18, 2015 9:37 AM
>>>> Hi All,
>>>>
>>>> GlusterFS 3.7.0 RPMs for RHEL, CentOS, Fedora and packages for
>>>> Debian are available at
>>>> download.gluster.org [1].
>>>>
>>>> [1] http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/
>>>>
>>>> --Humble
>>>>
>>>>
>>>> On Thu, May 14, 2015 at 2:49 PM, Vijay Bellur
>>>> <vbellur at redhat.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I am happy to announce that Gluster 3.7.0 is now generally
>>>> available. 3.7.0 contains several
>>>>
>>>> [snip]
>>>>
>>>> Cheers,
>>>> Vijay
>>>>
>>>> [snip]
>>>>
>>>> What happened to packages for RHEL/CentOS 5?  I have the (probably
>>>> unusual) setup of having added gluster to existing servers: a replica
>>>> 3 cluster where two nodes run on CentOS 6 and one is still on CentOS
>>>> 5.  This is a personal setup, and I have been using
>>>> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5/x86_64/repodata/repomod.xml
>>>>
>>>> as my repo.  It has worked fine for a while, but this time the two
>>>> Centos 6 nodes updated to 3.7, but the Centos 5 node got left behind
>>>> at 3.6.3.
>>> Packages for RHEL/CentOS-5 are not available yet. These will follow
>>> later. There are some changes needed to be able to build the packages on
>>> EL5. Because we are currently stabilizing our CI/regression tests, we do
>>> not merge any other changes. Until we provide packages in our
>>> repository, you could apply patch http://review.gluster.org/10803
>>> yourself and build the EL5 version. I expect that we will do a release
>>> in 2-3 weeks which will have EL5 RPMs too.
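For anyone who wants to try that before the official EL5 packages appear, a rough
sketch of the usual steps is below. Untested here; the patchset number in the
Gerrit ref is a placeholder, so take the exact fetch command from the "Download"
box on http://review.gluster.org/10803 and adjust versions and paths to taste:

  # grab the 3.7.0 source
  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  git checkout v3.7.0
  # pull in the proposed EL5 fix (replace <patchset> with the number Gerrit shows)
  git fetch https://review.gluster.org/glusterfs refs/changes/03/10803/<patchset>
  git cherry-pick FETCH_HEAD
  # build a release tarball and let rpmbuild pick up the bundled spec file
  # (the usual autotools and rpm build dependencies need to be installed)
  ./autogen.sh && ./configure
  make dist
  rpmbuild -tb glusterfs-3.7.0.tar.gz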
>>>
>>> I have no idea about the problem below, it sounds like something the
>>> GlusterD developers could help with.
>>>
>>> Niels
>>>
>>>> Command 'gluster volume status' on the C5 machine makes everything
>>>> look fine:
>>>>
>>>> Status of volume: ISO2
>>>> Gluster process                                       Port    Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick 10.x.x.2:/bricks/01/iso2                        49162   Y       4679
>>>> Brick 10.x.x.4:/bricks/01/iso2                        49183   Y       6447
>>>> Brick 10.x.x.9:/bricks/01/iso2                        49169   Y       1985
>>>>
>>>> But the same command on either of the C6 machines shows the C5 machine
>>>> (10.x.x.2) missing in action (though it does recognize that there are
>>>> NFS and heal daemons there):
>>>>
>>>> Status of volume: ISO2
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
>>>> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
>>>> NFS Server on localhost                     2049      0          Y       2279
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       2754
>>>> NFS Server on 10.41.65.2                    2049      0          Y       4757
>>>> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4764
>>>> NFS Server on 10.41.65.4                    2049      0          Y       6543
>>>> Self-heal Daemon on 10.41.65.4              N/A       N/A        Y       6551
>>>>
>>>> So, is this just an oversight (I hope), or has support for C5 been
>>>> dropped?
>>>> If support for C5 is gone, how do I downgrade my Centos6 machines back
>>>> to 3.6.x? (I know how to change the repo, but the actual sequence of
>>>> yum commands and gluster commands is unknown to me).
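If a roll back does turn out to be necessary, something along these lines is the
usual sequence; a rough sketch only, untested here, so double-check the package
set actually installed on your nodes and do one node at a time. Note that if the
cluster op-version was already raised after the 3.7 upgrade, going back cleanly
may also need the old /var/lib/glusterd contents restored from a backup:

  # on each CentOS 6 node, one at a time
  service glusterd stop
  # point the repo back at the 3.6 series (edit the baseurl in the .repo file), then:
  yum clean all
  yum downgrade glusterfs glusterfs-libs glusterfs-cli glusterfs-fuse \
      glusterfs-api glusterfs-server
  service glusterd start
  gluster peer status    # confirm all peers reconnect before moving to the next node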
>> Could you attach the glusterd log file of the 10.x.x.2 machine
> Attached as etc-glusterfs-glusterd.vol.log.newer.2, starting from the last
> machine reboot.
>>   and of the node from where you triggered volume status?
> Attached as etc-glusterfs-glusterd.vol.log.newer4, starting at the same time
> as the .2 log.
>> Could you also share gluster volume info output of all the nodes?
> I have several volumes, so I chose the one that shows up first on the
> listings:
> 
> *from 10.41.65.2:*
> 
> [root@office2 /var/log/glusterfs]$ gluster volume info
> 
> Volume Name: ISO2
> Type: Replicate
> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.41.65.2:/bricks/01/iso2
> Brick2: 10.41.65.4:/bricks/01/iso2
> Brick3: 10.41.65.9:/bricks/01/iso2
> 
> [root@office2 /var/log/glusterfs]$ gluster volume status ISO2
> Status of volume: ISO2
> Gluster process                             Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.41.65.2:/bricks/01/iso2            49162   Y       4463
> Brick 10.41.65.4:/bricks/01/iso2            49183   Y       6447
> Brick 10.41.65.9:/bricks/01/iso2            49169   Y       1985
> NFS Server on localhost                     2049    Y       4536
> Self-heal Daemon on localhost               N/A     Y       4543
> NFS Server on 10.41.65.9                    2049    Y       2279
> Self-heal Daemon on 10.41.65.9              N/A     Y       2754
> NFS Server on 10.41.65.4                    2049    Y       6543
> Self-heal Daemon on 10.41.65.4              N/A     Y       6551
> 
> Task Status of Volume ISO2
> ------------------------------------------------------------------------------
> 
> There are no active volume tasks
> 
> [root@office2 ~]$ gluster peer status
> Number of Peers: 2
> 
> Hostname: 10.41.65.9
> Uuid: cf2ae9c7-833e-4a73-a996-e72158011c69
> State: Peer in Cluster (Connected)
> 
> Hostname: 10.41.65.4
> Uuid: bd3ca8b7-f2da-44ce-8739-c0db5e40158c
> State: Peer in Cluster (Connected)
> 
> 
> *from 10.41.65.4:*
> 
> [root@office4b ~]# gluster volume info ISO2
> 
> Volume Name: ISO2
> Type: Replicate
> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.41.65.2:/bricks/01/iso2
> Brick2: 10.41.65.4:/bricks/01/iso2
> Brick3: 10.41.65.9:/bricks/01/iso2
> 
> [root@office4b ~]# gluster volume status ISO2
> Status of volume: ISO2
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
> NFS Server on localhost                     2049      0          Y       6543
> Self-heal Daemon on localhost               N/A       N/A        Y       6551
> NFS Server on 10.41.65.2                    2049      0          Y       4536
> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4543
> NFS Server on 10.41.65.9                    2049      0          Y       2279
> Self-heal Daemon on 10.41.65.9              N/A       N/A        Y       2754
> 
> Task Status of Volume ISO2
> ------------------------------------------------------------------------------
> 
> There are no active volume tasks
> 
> [root@office4b ~]# gluster peer status
> Number of Peers: 2
> 
> Hostname: 10.41.65.2
> Uuid: 4a53ed8b-2b41-4a3c-acf7-2dabec431f57
> State: Peer in Cluster (Connected)
> 
> Hostname: 10.41.65.9
> Uuid: cf2ae9c7-833e-4a73-a996-e72158011c69
> State: Peer in Cluster (Connected)
> 
> 
> *from 10.41.65.9:*
> 
> [root@office9 ~]$ gluster volume info ISO2
> 
> Volume Name: ISO2
> Type: Replicate
> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.41.65.2:/bricks/01/iso2
> Brick2: 10.41.65.4:/bricks/01/iso2
> Brick3: 10.41.65.9:/bricks/01/iso2
> [root@office9 ~]$ gluster volume status ISO2
> Status of volume: ISO2
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
> NFS Server on localhost                     2049      0          Y       2279
> Self-heal Daemon on localhost               N/A       N/A        Y       2754
> NFS Server on 10.41.65.2                    2049      0          Y       4536
> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4543
> NFS Server on 10.41.65.4                    2049      0          Y       6543
> Self-heal Daemon on 10.41.65.4              N/A       N/A        Y       6551
> 
> Task Status of Volume ISO2
> ------------------------------------------------------------------------------
> 
> There are no active volume tasks
> 
> [root@office9 ~]$ gluster peer status
> Number of Peers: 2
> 
> Hostname: 10.41.65.2
> Uuid: 4a53ed8b-2b41-4a3c-acf7-2dabec431f57
> State: Peer in Cluster (Connected)
> 
> Hostname: 10.41.65.4
> Uuid: bd3ca8b7-f2da-44ce-8739-c0db5e40158c
> State: Peer in Cluster (Connected)
I think you just pasted a snippet of the log sequence; it is hard to
identify anything from it. Would it be possible for you to kill and
restart glusterd with 'glusterd -LDEBUG' on the .2 and .4 nodes and share
the complete log file from both of them?
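Something along these lines on both nodes should do it; a rough sketch, with
paths as on a stock RPM install:

  # stop only the management daemon; brick processes and client mounts keep running
  service glusterd stop        # or: pkill glusterd
  # restart it with debug logging
  glusterd -LDEBUG
  # reproduce the problem, e.g. 'gluster volume status ISO2', then share
  # /var/log/glusterfs/etc-glusterfs-glusterd.vol.log from both nodes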
>>>> Ted Miller
>>>> Elkhart, IN, USA
>>>
>>> Thanks for the information.  As long as I know it is coming, I can
>>> improvise and hang on.
>>>
>>> I am assuming that the problem with the .2 machine not being seen is
>>> a result of running a cluster with a version split.
>>>
>>> Ted Miller
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
> 

-- 
~Atin

