[Gluster-users] Where does Gluster capture the hostnames from?
Strahil
hunter86_bg@yahoo.com
Mon Sep 23 14:38:07 UTC 2019
Also,
It's safer to have static entries for your cluster - after all, if DNS fails for some reason, you don't want to lose your cluster. A kind of 'best practice'.
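A sketch of what such static entries might look like for the two nodes in this thread (addresses taken from the dig outputs further down):

```
# /etc/hosts on every cluster node - keeps peers resolvable if DNS/IPA is down
192.168.0.60   mdskvm-p01.nix.mds.xyz   mdskvm-p01
192.168.0.39   mdskvm-p02.nix.mds.xyz   mdskvm-p02
```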
Best Regards,
Strahil Nikolov

On Sep 23, 2019 15:01, TomK <tomkcpr@mdevsys.com> wrote:
>
> Do I *really* need specific /etc/hosts entries when I have IPA?
>
> [root@mdskvm-p01 ~]# cat /etc/hosts
> 127.0.0.1 localhost localhost.localdomain localhost4
> localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6
> localhost6.localdomain6
> [root@mdskvm-p01 ~]#
>
> I really shouldn't need to. (Ref below: everything resolves fine.)
>
> Cheers,
> TK
>
>
> On 9/23/2019 1:32 AM, Strahil wrote:
> > Check your /etc/hosts for an entry like:
> > 192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Sep 23, 2019 06:58, TomK <tomkcpr@mdevsys.com> wrote:
> >>
> >> Hey All,
> >>
> >> Take the two hosts below as example. One host shows NFS Server on
> >> 192.168.0.60 (FQDN is mdskvm-p01.nix.mds.xyz).
> >>
> >> The other shows mdskvm-p02 (FQDN is mdskvm-p02.nix.mds.xyz).
> >>
> >> Why is there no consistency or correct hostname resolution? Where does
> >> gluster get the hostnames from?
> >>
> >>
> >> [root@mdskvm-p02 glusterfs]# gluster volume status
> >> Status of volume: mdsgv01
> >> Gluster process TCP Port RDMA Port Online Pid
> >> ------------------------------------------------------------------------------
> >> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
> >> lusterv02 49153 0 Y
> >> 17503
> >> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
> >> lusterv01 49153 0 Y
> >> 15044
> >> NFS Server on localhost N/A N/A N N/A
> >> Self-heal Daemon on localhost N/A N/A Y
> >> 17531
> >> NFS Server on 192.168.0.60 N/A N/A N N/A
> >> Self-heal Daemon on 192.168.0.60 N/A N/A Y
> >> 15073
> >>
> >> Task Status of Volume mdsgv01
> >> ------------------------------------------------------------------------------
> >> There are no active volume tasks
> >>
> >> [root@mdskvm-p02 glusterfs]#
> >>
> >>
> >>
> >>
> >> [root@mdskvm-p01 ~]# gluster volume status
> >> Status of volume: mdsgv01
> >> Gluster process TCP Port RDMA Port Online Pid
> >> ------------------------------------------------------------------------------
> >> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
> >> lusterv02 49153 0 Y
> >> 17503
> >> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
> >> lusterv01 49153 0 Y
> >> 15044
> >> NFS Server on localhost N/A N/A N N/A
> >> Self-heal Daemon on localhost N/A N/A Y
> >> 15073
> >> NFS Server on mdskvm-p02 N/A N/A N N/A
> >> Self-heal Daemon on mdskvm-p02 N/A N/A Y
> >> 17531
> >>
> >> Task Status of Volume mdsgv01
> >> ------------------------------------------------------------------------------
> >> There are no active volume tasks
> >>
> >> [root@mdskvm-p01 ~]#
> >>
> >>
> >>
> >> But when verifying, everything seems fine:
> >>
> >>
> >> (1):
> >> [root@mdskvm-p01 glusterfs]# dig -x 192.168.0.39
> >> ;; QUESTION SECTION:
> >> ;39.0.168.192.in-addr.arpa. IN PTR
> >>
> >> ;; ANSWER SECTION:
> >> 39.0.168.192.in-addr.arpa. 1200 IN PTR mdskvm-p02.nix.mds.xyz.
> >> [root@mdskvm-p01 glusterfs]# hostname -f
> >> mdskvm-p01.nix.mds.xyz
> >> [root@mdskvm-p01 glusterfs]# hostname -s
> >> mdskvm-p01
> >> [root@mdskvm-p01 glusterfs]# hostname
> >> mdskvm-p01.nix.mds.xyz
> >> [root@mdskvm-p01 glusterfs]#
> >>
> >>
> >> (2):
> >>
> >> [root@mdskvm-p02 glusterfs]# dig -x 192.168.0.60
> >> ;; QUESTION SECTION:
> >> ;60.0.168.192.in-addr.arpa. IN PTR
> >>
> >> ;; ANSWER SECTION:
> >> 60.0.168.192.in-addr.arpa. 1200 IN PTR mdskvm-p01.nix.mds.xyz.
> >>
> >> [root@mdskvm-p02 glusterfs]# hostname -s
> >> mdskvm-p02
> >> [root@mdskvm-p02 glusterfs]# hostname -f
> >> mdskvm-p02.nix.mds.xyz
> >> [root@mdskvm-p02 glusterfs]# hostname
> >> mdskvm-p02.nix.mds.xyz
> >> [root@mdskvm-p02 glusterfs]#
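One caveat with the dig checks above: dig queries DNS directly, but glusterd resolves names through glibc, which follows /etc/nsswitch.conf (typically files first, then DNS/IPA via sss). getent shows what glibc actually returns, so it is the closer test. A minimal sketch, using localhost as a stand-in for the cluster FQDNs from this thread:

```shell
# Ask glibc (not DNS directly) to resolve a name; substitute your node
# FQDNs, e.g. mdskvm-p01.nix.mds.xyz, for localhost when running this
# on the cluster.
getent hosts localhost

# Show the lookup order glibc uses ("files" means /etc/hosts);
# "|| true" keeps the sketch from aborting on systems without the file.
grep '^hosts' /etc/nsswitch.conf || true
```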
> >>
> >>
> >> Gluster version used is:
> >>
> >> [root@mdskvm-p01 glusterfs]# rpm -aq|grep -Ei gluster
> >> glusterfs-server-3.12.15-1.el7.x86_64
> >> glusterfs-client-xlators-3.12.15-1.el7.x86_64
> >> glusterfs-rdma-3.12.15-1.el7.x86_64
> >> glusterfs-3.12.15-1.el7.x86_64
> >> glusterfs-events-3.12.15-1.el7.x86_64
> >> libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.12.x86_64
> >> glusterfs-libs-3.12.15-1.el7.x86_64
> >> glusterfs-fuse-3.12.15-1.el7.x86_64
> >> glusterfs-geo-replication-3.12.15-1.el7.x86_64
> >> python2-gluster-3.12.15-1.el7.x86_64
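To the question itself: glusterd does not do a fresh hostname lookup for peer display. It shows whatever name (or IP) was recorded when `gluster peer probe` was run, and the probed node initially knows the prober only by its IP until it is probed back by name - which would explain why p01 sees "mdskvm-p02" while p02 sees "192.168.0.60". A sketch of how one might inspect and fix this (peer-store path per the Gluster docs; adjust for your nodes):

```
# On either node: see the names glusterd has stored for its peers
cat /var/lib/glusterd/peers/*      # uuid, state, hostname1=...

# On mdskvm-p02: re-probe p01 by name so its recorded IP
# (192.168.0.60) is replaced with the FQDN
gluster peer probe mdskvm-p01.nix.mds.xyz
```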