[Gluster-users] Gluster 3.7.0 released

Atin Mukherjee amukherj at redhat.com
Tue Jun 2 05:12:31 UTC 2015



On 06/01/2015 09:01 PM, Ted Miller wrote:
> On 5/27/2015 1:17 PM, Atin Mukherjee wrote:
>> On 05/27/2015 07:33 PM, Ted Miller wrote:
>>> responses below
>>> Ted Miller
>>>
>>> On 5/26/2015 12:01 AM, Atin Mukherjee wrote:
>>>> On 05/26/2015 03:12 AM, Ted Miller wrote:
>>>>> From: Niels de Vos <ndevos at redhat.com>
>>>>> Sent: Monday, May 25, 2015 4:44 PM
>>>>>
>>>>> On Mon, May 25, 2015 at 06:49:26PM +0000, Ted Miller wrote:
>>>>>> From: Humble Devassy Chirammal <humble.devassy at gmail.com>
>>>>>> Sent: Monday, May 18, 2015 9:37 AM
>>>>>> Hi All,
>>>>>>
>>>>>> GlusterFS 3.7.0 RPMs for RHEL, CentOS, Fedora and packages for
>>>>>> Debian are available at
>>>>>> download.gluster.org [1].
>>>>>>
>>>>>> [1] http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/
>>>>>>
>>>>>> --Humble
>>>>>>
>>>>>>
>>>>>> On Thu, May 14, 2015 at 2:49 PM, Vijay Bellur
>>>>>> <vbellur at redhat.com> wrote:
>>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> I am happy to announce that Gluster 3.7.0 is now generally
>>>>>> available. 3.7.0 contains several
>>>>>>
>>>>>> [snip]
>>>>>>
>>>>>> Cheers,
>>>>>> Vijay
>>>>>>
>>>>>> [snip]
>>>>> [snip]
>>>>>
>>>>> I have no idea about the problem below; it sounds like something the
>>>>> GlusterD developers could help with.
>>>>>
>>>>> Niels
>>>>>
>>>>>> Command 'gluster volume status' on the C5 machine makes everything
>>>>>> look fine:
>>>>>>
>>>>>> Status of volume: ISO2
>>>>>> Gluster process                                       Port   Online  Pid
>>>>>> ------------------------------------------------------------------------------
>>>>>> Brick 10.x.x.2:/bricks/01/iso2                        49162  Y       4679
>>>>>> Brick 10.x.x.4:/bricks/01/iso2                        49183  Y       6447
>>>>>> Brick 10.x.x.9:/bricks/01/iso2                        49169  Y       1985
>>>>>>
>>>>>> But the same command on either of the C6 machines shows the C5
>>>>>> machine
>>>>>> (10.x.x.2) missing in action (though it does recognize that there are
>>>>>> NFS and heal daemons there):
>>>>>>
>>>>>> Status of volume: ISO2
>>>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>>>> ------------------------------------------------------------------------------
>>>>>> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
>>>>>> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
>>>>>> NFS Server on localhost                     2049      0          Y       2279
>>>>>> Self-heal Daemon on localhost               N/A       N/A        Y       2754
>>>>>> NFS Server on 10.41.65.2                    2049      0          Y       4757
>>>>>> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4764
>>>>>> NFS Server on 10.41.65.4                    2049      0          Y       6543
>>>>>> Self-heal Daemon on 10.41.65.4              N/A       N/A        Y       6551
>>>>>>
>>>>>> So, is this just an oversight (I hope), or has support for C5 been
>>>>>> dropped?
>>>>>> If support for C5 is gone, how do I downgrade my CentOS 6 machines
>>>>>> back to 3.6.x? (I know how to change the repo, but the actual sequence
>>>>>> of yum and gluster commands is unknown to me.)
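>>>>>>
>>>>>> I am guessing the rough sequence would be something like the following
>>>>>> (untested on my part, and the exact package names may differ):
>>>>>>
>>>>>>     # on each CentOS 6 node, one at a time:
>>>>>>     service glusterd stop
>>>>>>     # point the repo back at the 3.6 release, then
>>>>>>     yum downgrade 'glusterfs*'
>>>>>>     service glusterd start
>>>>>>
>>>>>> but please correct me if that is off base.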
>>>> Could you attach the glusterd log file of the 10.x.x.2 machine
>>> attached as etc-glusterfs-glusterd.vol.log.newer.2, starting from the last
>>> machine reboot
>>>>    and of the node from which you triggered volume status?
>>> attached as etc-glusterfs-glusterd.vol.log.newer4, starting at the same
>>> time as the .2 log
>>>> Could you also share the gluster volume info output from all the nodes?
>>> I have several volumes, so I chose the one that shows up first on the
>>> listings:
>>>
>>> *from 10.41.65.2:*
>>>
>>> [root at office2 /var/log/glusterfs]$ gluster volume info
>>>
>>> Volume Name: ISO2
>>> Type: Replicate
>>> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.41.65.2:/bricks/01/iso2
>>> Brick2: 10.41.65.4:/bricks/01/iso2
>>> Brick3: 10.41.65.9:/bricks/01/iso2
>>>
>>> [root at office2 /var/log/glusterfs]$ gluster volume status ISO2
>>> Status of volume: ISO2
>>> Gluster process                          Port    Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.41.65.2:/bricks/01/iso2         49162   Y       4463
>>> Brick 10.41.65.4:/bricks/01/iso2         49183   Y       6447
>>> Brick 10.41.65.9:/bricks/01/iso2         49169   Y       1985
>>> NFS Server on localhost                  2049    Y       4536
>>> Self-heal Daemon on localhost            N/A     Y       4543
>>> NFS Server on 10.41.65.9                 2049    Y       2279
>>> Self-heal Daemon on 10.41.65.9           N/A     Y       2754
>>> NFS Server on 10.41.65.4                 2049    Y       6543
>>> Self-heal Daemon on 10.41.65.4           N/A     Y       6551
>>>
>>> Task Status of Volume ISO2
>>> ------------------------------------------------------------------------------
>>>
>>>
>>> There are no active volume tasks
>>>
>>> [root at office2 ~]$ gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: 10.41.65.9
>>> Uuid: cf2ae9c7-833e-4a73-a996-e72158011c69
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.41.65.4
>>> Uuid: bd3ca8b7-f2da-44ce-8739-c0db5e40158c
>>> State: Peer in Cluster (Connected)
>>>
>>>
>>> *from 10.41.65.4:*
>>>
>>> [root at office4b ~]# gluster volume info ISO2
>>>
>>> Volume Name: ISO2
>>> Type: Replicate
>>> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.41.65.2:/bricks/01/iso2
>>> Brick2: 10.41.65.4:/bricks/01/iso2
>>> Brick3: 10.41.65.9:/bricks/01/iso2
>>>
>>> [root at office4b ~]# gluster volume status ISO2
>>> Status of volume: ISO2
>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
>>> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
>>> NFS Server on localhost                     2049      0          Y       6543
>>> Self-heal Daemon on localhost               N/A       N/A        Y       6551
>>> NFS Server on 10.41.65.2                    2049      0          Y       4536
>>> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4543
>>> NFS Server on 10.41.65.9                    2049      0          Y       2279
>>> Self-heal Daemon on 10.41.65.9              N/A       N/A        Y       2754
>>>
>>> Task Status of Volume ISO2
>>> ------------------------------------------------------------------------------
>>>
>>>
>>> There are no active volume tasks
>>>
>>> [root at office4b ~]# gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: 10.41.65.2
>>> Uuid: 4a53ed8b-2b41-4a3c-acf7-2dabec431f57
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.41.65.9
>>> Uuid: cf2ae9c7-833e-4a73-a996-e72158011c69
>>> State: Peer in Cluster (Connected)
>>>
>>>
>>> *from 10.41.65.9:*
>>>
>>> [root at office9 ~]$ gluster volume info ISO2
>>>
>>> Volume Name: ISO2
>>> Type: Replicate
>>> Volume ID: 090da4b3-c666-41fe-8283-2c029228b3f7
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.41.65.2:/bricks/01/iso2
>>> Brick2: 10.41.65.4:/bricks/01/iso2
>>> Brick3: 10.41.65.9:/bricks/01/iso2
>>> [root at office9 ~]$ gluster volume status ISO2
>>> Status of volume: ISO2
>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.41.65.4:/bricks/01/iso2            49183     0          Y       6447
>>> Brick 10.41.65.9:/bricks/01/iso2            49169     0          Y       1985
>>> NFS Server on localhost                     2049      0          Y       2279
>>> Self-heal Daemon on localhost               N/A       N/A        Y       2754
>>> NFS Server on 10.41.65.2                    2049      0          Y       4536
>>> Self-heal Daemon on 10.41.65.2              N/A       N/A        Y       4543
>>> NFS Server on 10.41.65.4                    2049      0          Y       6543
>>> Self-heal Daemon on 10.41.65.4              N/A       N/A        Y       6551
>>>
>>> Task Status of Volume ISO2
>>> ------------------------------------------------------------------------------
>>>
>>>
>>> There are no active volume tasks
>>>
>>> [root at office9 ~]$ gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: 10.41.65.2
>>> Uuid: 4a53ed8b-2b41-4a3c-acf7-2dabec431f57
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: 10.41.65.4
>>> Uuid: bd3ca8b7-f2da-44ce-8739-c0db5e40158c
>>> State: Peer in Cluster (Connected)
>> Would it be possible for you to kill and restart glusterd with
>> 'glusterd -LDEBUG' on nodes 2 and 4, and share the complete log
>> files from both of them?
> First, an observation:
> _The issue seems to be with the gluster volume status command_ on the
> 3.7 machines, not with the actual gluster software.  I use gkrellm to
> monitor all three machines from my workstation (which is node 4 {of nodes
> 2, 4, 9}).  Over the weekend I dumped some files onto the gluster file
> system from node 4 (version 3.7) and saw the usual pattern of network
> and disk activity /including on node 2/ (version 3.6).
> 
> I had created a new directory, so I did an ls on the corresponding
> directory on the brick, and found that my new directory had indeed been
> created and populated with files on all three nodes.  So, it looks like
> glusterd and the 3.7 client are working properly, but the 'gluster
> volume status' command is not handling the 3.6 node correctly.  "gluster
> volume status" shows the NFS server and self-heal daemon information for
> the 3.6 node, but not the Brick information.
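> 
> (For instance, running something like the following on each node, where
> "new-directory" is just a stand-in for the directory I actually created,
> 
>     ls /bricks/01/iso2/new-directory
> 
> showed the same files on nodes 2, 4 and 9.)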
> 
> The 3.6 node shows status for all three nodes correctly (as far as I can
> tell).
> 
> I am attaching the log files.  They had both been rotated within the
> previous 48 hours, so I did:
> 
> on 2 + 4: service glusterd stop
> on 4: glusterd -LDEBUG
> on 2: glusterd -LDEBUG
> after a short time I copied the logs for attachment.
Could you execute gluster volume status from 4 and then attach the logs?
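Something along these lines, assuming the usual log location:

    # on node 4
    gluster volume status ISO2
    # then attach /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # from both node 4 and node 2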
> 

-- 
~Atin

