[Gluster-users] Where does Gluster capture the hostnames from?

Joe Julian joe at julianfamily.org
Mon Sep 23 15:36:05 UTC 2019


Perhaps I misread the intent; I apologize if I did. I read "static 
entries" as "IP addresses," which I've seen suggested (from my 
perspective) far too often. /etc/hosts is a valid solution that can 
still adapt if the network needs to evolve.
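For concreteness, a minimal sketch of what such static entries could look like, using the hostnames and addresses that appear later in this thread (written to a temporary file for illustration rather than the real /etc/hosts):

```shell
# Sketch: static entries for the two Gluster peers discussed in this thread.
# Writing to a temp file for illustration; on a real node these lines would
# go in /etc/hosts itself.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.0.60  mdskvm-p01.nix.mds.xyz  mdskvm-p01
192.168.0.39  mdskvm-p02.nix.mds.xyz  mdskvm-p02
EOF

# Verify both peers are present (one line each).
grep -c 'mdskvm-p0' "$HOSTS_FILE"   # -> 2
```

Because each line carries both the FQDN and the short name, lookups keep working whichever form a service uses, and the entries can simply be edited if addresses change.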

On 9/23/19 8:29 AM, ROUVRAIS Cedric wrote:
> Hello,
>
> I guess everyone sort of has his perspective on this topic.
>
> I don't want to take this thread off-topic (discussing the merits of a local hosts file), but I do dissent from, and therefore had to respond to, the shortcut that using a local /etc/hosts file creates a fixed network configuration that can never adapt as business needs change. I'm running a k8s infrastructure and actually have local conf files, FWIW.
>
> Regards,
>
> Cédric
>
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org> On Behalf Of Joe Julian
> Sent: lundi 23 septembre 2019 17:06
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Where does Gluster capture the hostnames from?
>
> I disagree about it being "best practice" to lock yourself in to a fixed network configuration that can never adapt as business needs change.
> There are other resilient ways of ensuring your hostnames resolve consistently (so that your cluster doesn't run loose ;-)).
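> One resilient approach along those lines (my sketch; the thread doesn't specify which methods were meant): make sure the `hosts` line in /etc/nsswitch.conf lists `files` before `dns`, so locally maintained entries answer first and a DNS outage doesn't take the cluster names down with it.

```shell
# Sketch: the hosts lookup order is controlled by /etc/nsswitch.conf.
# With "hosts: files dns", /etc/hosts is consulted before DNS, so local
# cluster entries keep resolving even if DNS is briefly unavailable.
# (Illustrative file; a real system reads /etc/nsswitch.conf itself.)
NSS=$(mktemp)
printf 'hosts: files dns\n' > "$NSS"

# Print the first source consulted for host lookups.
awk '$1 == "hosts:" { print $2 }' "$NSS"   # -> files
```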
>
> On 9/23/19 7:38 AM, Strahil wrote:
>> Also,
>>
>> It's safer to have static entries for your cluster; after all, if DNS fails for some reason, you don't want to lose your cluster. A kind of 'Best Practice'.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Sep 23, 2019 15:01, TomK <tomkcpr at mdevsys.com> wrote:
>>> Do I *really* need specific /etc/hosts entries when I have IPA?
>>>
>>> [root at mdskvm-p01 ~]# cat /etc/hosts
>>> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
>>> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
>>> [root at mdskvm-p01 ~]#
>>>
>>> I really shouldn't need to. (Ref below; everything resolves fine.)
>>>
>>> Cheers,
>>> TK
>>>
>>>
>>> On 9/23/2019 1:32 AM, Strahil wrote:
>>>> Check your /etc/hosts for an entry like:
>>>> 192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Sep 23, 2019 06:58, TomK <tomkcpr at mdevsys.com> wrote:
>>>>> Hey All,
>>>>>
>>>>> Take the two hosts below as example.  One host shows NFS Server on
>>>>> 192.168.0.60 (FQDN is mdskvm-p01.nix.mds.xyz).
>>>>>
>>>>> The other shows mdskvm-p02 (FQDN is mdskvm-p02.nix.mds.xyz).
>>>>>
>>>>> Why is there no consistency or correct hostname resolution?  Where
>>>>> does gluster get the hostnames from?
>>>>>
>>>>>
>>>>> [root at mdskvm-p02 glusterfs]# gluster volume status
>>>>> Status of volume: mdsgv01
>>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>>> ------------------------------------------------------------------------------
>>>>> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
>>>>> lusterv02                                   49153     0          Y       17503
>>>>> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
>>>>> lusterv01                                   49153     0          Y       15044
>>>>> NFS Server on localhost                     N/A       N/A        N       N/A
>>>>> Self-heal Daemon on localhost               N/A       N/A        Y       17531
>>>>> NFS Server on 192.168.0.60                  N/A       N/A        N       N/A
>>>>> Self-heal Daemon on 192.168.0.60            N/A       N/A        Y       15073
>>>>>
>>>>> Task Status of Volume mdsgv01
>>>>> ------------------------------------------------------------------------------
>>>>> There are no active volume tasks
>>>>>
>>>>> [root at mdskvm-p02 glusterfs]#
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> [root at mdskvm-p01 ~]# gluster volume status
>>>>> Status of volume: mdsgv01
>>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>>> ------------------------------------------------------------------------------
>>>>> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
>>>>> lusterv02                                   49153     0          Y       17503
>>>>> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
>>>>> lusterv01                                   49153     0          Y       15044
>>>>> NFS Server on localhost                     N/A       N/A        N       N/A
>>>>> Self-heal Daemon on localhost               N/A       N/A        Y       15073
>>>>> NFS Server on mdskvm-p02                    N/A       N/A        N       N/A
>>>>> Self-heal Daemon on mdskvm-p02              N/A       N/A        Y       17531
>>>>>
>>>>> Task Status of Volume mdsgv01
>>>>> ------------------------------------------------------------------------------
>>>>> There are no active volume tasks
>>>>>
>>>>> [root at mdskvm-p01 ~]#
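A likely explanation for the mixed names above (my reading, not confirmed in this thread): glusterd records each peer under whatever name or address was supplied when that peer was probed, in a file under /var/lib/glusterd/peers/. If one node was probed by FQDN and the other only ever learned its peer by IP, each side reports the other differently; re-running `gluster peer probe <fqdn>` from the node that only knows the IP typically updates the stored name. A sketch of inspecting that stored state, using a mocked-up peers file since the real path requires a live glusterd:

```shell
# Sketch: glusterd keeps one small key=value file per peer under
# /var/lib/glusterd/peers/<uuid>. The hostname1= line is the name the
# CLI displays in 'gluster peer status' / 'gluster volume status'.
# The uuid below is a made-up placeholder for illustration.
PEER_FILE=$(mktemp)
cat > "$PEER_FILE" <<'EOF'
uuid=5d3f9a2e-0000-0000-0000-000000000000
state=3
hostname1=192.168.0.60
EOF

# If hostname1 is a bare IP, that IP is what this node will print
# for its peer, regardless of what DNS or /etc/hosts can resolve.
grep '^hostname1=' "$PEER_FILE" | cut -d= -f2   # -> 192.168.0.60
```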
>>>>>
>>>>>
>>>>>
>>>>> But when verifying everything all seems fine:
>>>>>
>>>>>
>>>>> (1):
>>>>> [root at mdskvm-p01 glusterfs]# dig -x 192.168.0.39
>>>>>
>>>>> ;; QUESTION SECTION:
>>>>> ;39.0.168.192.in-addr.arpa.     IN      PTR
>>>>>
>>>>> ;; ANSWER SECTION:
>>>>> 39.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p02.nix.mds.xyz.
>>>>>
>>>>> [root at mdskvm-p01 glusterfs]# hostname -f
>>>>> mdskvm-p01.nix.mds.xyz
>>>>> [root at mdskvm-p01 glusterfs]# hostname -s
>>>>> mdskvm-p01
>>>>> [root at mdskvm-p01 glusterfs]# hostname
>>>>> mdskvm-p01.nix.mds.xyz
>>>>> [root at mdskvm-p01 glusterfs]#
>>>>>
>>>>>
>>>>> (2):
>>>>>
>>>>> [root at mdskvm-p02 glusterfs]# dig -x 192.168.0.60
>>>>>
>>>>> ;; QUESTION SECTION:
>>>>> ;60.0.168.192.in-addr.arpa.     IN      PTR
>>>>>
>>>>> ;; ANSWER SECTION:
>>>>> 60.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p01.nix.mds.xyz.
>>>>>
>>>>> [root at mdskvm-p02 glusterfs]# hostname -s
>>>>> mdskvm-p02
>>>>> [root at mdskvm-p02 glusterfs]# hostname -f
>>>>> mdskvm-p02.nix.mds.xyz
>>>>> [root at mdskvm-p02 glusterfs]# hostname
>>>>> mdskvm-p02.nix.mds.xyz
>>>>> [root at mdskvm-p02 glusterfs]#
>>>>>
>>>>>
>>>>> Gluster version used is:
>>>>>
>>>>> [root at mdskvm-p01 glusterfs]# rpm -aq|grep -Ei gluster
>>>>> glusterfs-server-3.12.15-1.el7.x86_64
>>>>> glusterfs-client-xlators-3.12.15-1.el7.x86_64
>>>>> glusterfs-rdma-3.12.15-1.el7.x86_64
>>>>> glusterfs-3.12.15-1.el7.x86_64
>>>>> glusterfs-events-3.12.15-1.el7.x86_64
>>>>> libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.12.x86_64
>>>>> glusterfs-libs-3.12.15-1.el7.x86_64
>>>>> glusterfs-fuse-3.12.15-1.el7.x86_64
>>>>> glusterfs-geo-replication-3.12.15-1.el7.x86_64
>>>>> python2-gluster-3.12.15-1.el7.x86_64
>> ________
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/118564314
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/118564314
>>
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users

