[Gluster-users] Where does Gluster capture the hostnames from?
Strahil
hunter86_bg at yahoo.com
Mon Sep 30 06:06:23 UTC 2019
In replicated volumes you can use either reset-brick or replace-brick.
Still, you will have to heal all the data, which for large volumes will take a lot of time.
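For the reset-brick route, a rough sketch (volume and brick paths are taken from TomK's output below; the old IP-based brick name is an assumption for illustration, so check your own `gluster volume status` first):

```shell
# Take the brick offline under its old (IP-based) name:
gluster volume reset-brick mdsgv01 192.168.0.60:/mnt/p01-d01/glusterv01 start

# Re-register the same brick path under the FQDN:
gluster volume reset-brick mdsgv01 192.168.0.60:/mnt/p01-d01/glusterv01 \
    mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 commit force

# Watch the heal progress (this is the slow part on large volumes):
gluster volume heal mdsgv01 info
```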
Best Regards,
Strahil Nikolov

On Sep 30, 2019 06:51, TomK <tomkcpr at mdevsys.com> wrote:
>
> Because this was a lab, I could quickly remove the gluster setup and
> recreate it using the FQDNs; it quickly picked up the new names.
> Exactly as expected per this thread.
>
> [root@mdskvm-p02 network-scripts]# gluster volume status
> Status of volume: mdsgv01
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
> lusterv01                                   49152     0          Y       4375
> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
> lusterv02                                   49152     0          Y       4376
> NFS Server on localhost                     N/A       N/A        N       N/A
> Self-heal Daemon on localhost               N/A       N/A        Y       4402
> NFS Server on mdskvm-p01.nix.mds.xyz        N/A       N/A        N       N/A
> Self-heal Daemon on mdskvm-p01.nix.mds.xyz  N/A       N/A        Y       4384
>
> Task Status of Volume mdsgv01
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> [root@mdskvm-p02 network-scripts]#
>
> It would be handy to have a rename function in future releases.
>
> Cheers,
> TK
>
> On 9/25/2019 7:47 AM, TomK wrote:
> > Thanks Thorgeir. Since then I upgraded to Gluster 6, though this issue
> > remained the same. Is there anything in the way of new options to
> > change what's displayed?
> >
> > The reason for the ask is that this gets inherited by oVirt when doing
> > discovery of existing gluster volumes. So now I have an IP for one host,
> > a short name for another, and FQDNs for the rest.
> >
> >
> > [root@mdskvm-p02 glusterfs]# gluster volume status
> > Status of volume: mdsgv01
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
> > lusterv02                                   49152     0          Y       22368
> > Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
> > lusterv01                                   49152     0          Y       24487
> > NFS Server on localhost                     N/A       N/A        N       N/A
> > Self-heal Daemon on localhost               N/A       N/A        Y       22406
> > NFS Server on 192.168.0.60                  N/A       N/A        N       N/A
> > Self-heal Daemon on 192.168.0.60            N/A       N/A        Y       25867
> >
> > Task Status of Volume mdsgv01
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > [root@mdskvm-p02 glusterfs]#
> >
> > Cheers,
> > TK
> >
> > On 9/24/2019 2:58 AM, Thorgeir Marthinussen wrote:
> >> In an effort to answer the actual question: in my experience, the
> >> Gluster internals capture the address the first time you probe
> >> another node. So if you're logged into the first node and probe the
> >> second using an IP address, that is what will "forever" be displayed
> >> by gluster status; if you use a hostname, that's what will be shown.
> >> Brick paths are captured when the brick is registered, so using a path
> >> with IP will always show the IP as part of the path, and hostname will
> >> show that, etc.
> >>
> >> I haven't verified, but I believe the second node will attempt a
> >> reverse lookup of the first node (when probing first->second) and
> >> record that name (if any) as the "primary" name of the first node.
> >> Also good to know: nodes can have multiple names; the primary name is
> >> the one "
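The probe behaviour described above suggests a possible fix without recreating the volume: re-probe an already-known peer by its FQDN, which (in my understanding, worth verifying on your version) adds the hostname to that peer's list of names. A sketch using the names from this thread:

```shell
# From the second node: if the first peer is currently known only by IP,
# probing it again by FQDN should attach the hostname to the peer record:
gluster peer probe mdskvm-p01.nix.mds.xyz

# Confirm which names each peer is now known by:
gluster peer status
```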